
Replicability across function evaluations in PSO #652

Open
andrealati opened this issue Oct 22, 2024 · 1 comment
andrealati commented Oct 22, 2024

Hello,

First, thank you very much for developing and maintaining this library; it is very useful.

I am using the PSO algorithm to estimate a model with 17 parameters. While experimenting with the algorithm, we ran into a replicability issue that we haven't been able to solve: evaluating the objective function at identical parameters produces different values inside versus outside the solver.

More precisely, we do the following:

  1. At every function evaluation of pymoo_minimize we print the values of the parameters and the associated error.
  2. We then run our objective function at the values printed inside pymoo_minimize and get a different value for the error, sometimes a significantly different one (see the sketch after this list).
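For concreteness, here is a minimal sketch of step 2; the Objective below is a placeholder stand-in for our real 17-parameter criterion, and the parameter values are made up, not our actual printed output:

import numpy as np

# Placeholder stand-in for our real objective, which is defined elsewhere.
def Objective(x, **kwargs):
    return float(np.sum(np.square(x)))

# Hypothetical parameter vector copied from one "Inputs:" line printed
# inside the solver (placeholder values, not our real output).
x_printed = np.array([0.12345678901, 0.98765432109, 0.55555555555])

# Step 2: re-evaluate the objective outside the solver and compare with the
# "F val" printed inside the solver for the same inputs.
print("F val outside solver:", Objective(x_printed))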

Is this expected behaviour? If so, is there a setting in the solver we can use to address this?

We tried to reproduce the same issue using scipy.optimize.minimize, but in that case the evaluations inside and outside the solver are consistent (a minimal sketch of that check is below).
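The check looks roughly like this; the objective and starting point are placeholders, not our actual model:

import numpy as np
from scipy.optimize import minimize

# Placeholder stand-in for our real objective, which is defined elsewhere.
def Objective(x, **kwargs):
    return float(np.sum(np.square(x)))

def logged_objective(x):
    # Print every candidate so it can be re-evaluated outside the solver,
    # mirroring the logging in our pymoo _evaluate method.
    f = Objective(x)
    print("Inputs:", ["{:.11f}".format(val) for val in x], "F val:", f)
    return f

x0 = np.full(17, 0.5)  # hypothetical starting point
res = minimize(logged_objective, x0, method="Nelder-Mead")

# Re-evaluating the returned x outside the solver reproduces the printed value.
print("F val outside solver:", Objective(res.x))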

Thank you very much for your help and sorry if there's an obvious solution to this.

Our implementation of the algorithm is as follows:

import numpy as np

from pymoo.algorithms.soo.nonconvex.pso import PSO
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize as pymoo_minimize

# smm_init, p_bounds, and Objective are defined elsewhere in our code.


class PymooProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=smm_init.shape[0],
                         n_obj=1,
                         xl=np.array([bound[0] for bound in p_bounds]),
                         xu=np.array([bound[1] for bound in p_bounds]))

    def _evaluate(self, x, out, *args, **kwargs):
        # Log every candidate so it can be re-evaluated outside the solver.
        print("Inputs:", ["{:.11f}".format(val) for val in x])
        f1 = Objective(x, **kwargs)
        print("F val:", f1)
        out["F"] = [f1]


algorithm = PSO(pop_size=20)
problem_abs = PymooProblem()
res = pymoo_minimize(problem_abs,
                     algorithm,
                     seed=1,
                     verbose=True,
                     save_history=True)
@blankjul blankjul self-assigned this Nov 24, 2024
blankjul (Collaborator) commented Nov 24, 2024

Is it possible your objective function is non-deterministic?

Look at this example:

import numpy as np
import pandas as pd

from pymoo.algorithms.soo.nonconvex.pso import PSO
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize


class MyProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=10,
                         n_obj=1,
                         xl=0,
                         xu=1
                         )

    def _evaluate(self, x, out, *args, **kwargs):
        out["F"] = np.sum(np.square(x))


algorithm = PSO(pop_size=20)
problem = MyProblem()

dx = []

# Run each seed three times; identical hashes of x (and identical f) within a
# seed mean the run is reproducible.
for seed in range(10):
    for i in range(3):
        res = minimize(problem,
                       algorithm,
                       seed=seed,
                       verbose=False)
        opt = res.opt[0]
        dx.append(dict(seed=seed, x=hash(tuple(opt.x)), f=opt.f))

dx = pd.DataFrame(dx)

print(dx)

For each seed I am able to reproduce the result (x and f).
Feel free to modify the code above to make your point. I am not able to run your code because of the missing definitions (Objective, smm_init, p_bounds).
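For illustration only, here is a sketch (not your actual model) of how an unseeded noise term in the objective would produce exactly this symptom; evaluating the same x twice already gives different values:

import numpy as np
from pymoo.core.problem import ElementwiseProblem


class NoisyProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(n_var=10, n_obj=1, xl=0, xu=1)

    def _evaluate(self, x, out, *args, **kwargs):
        # The unseeded noise term makes repeated evaluations of the same x
        # disagree, so a value printed inside the solver cannot be reproduced
        # outside it.
        out["F"] = np.sum(np.square(x)) + np.random.normal(scale=1e-3)


problem = NoisyProblem()

# Quick determinism check: evaluate the same point twice and compare.
x = np.full(10, 0.5)
print(problem.evaluate(x))
print(problem.evaluate(x))  # different value -> the objective is non-deterministic

If your Objective is deterministic (no random draws, simulation noise, or state carried over between calls), the two values should match exactly.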
