Optimizer option (readability) #83
Thanks for pointing this out! This is definitely just a documentation problem. We are not re-writing the optimizers, but using external providers: CMA-ES and the TensorFlow optimizers, for example, are provided by Qibo. For instance, I used to run SGD with the following config:

```python
OPTIMIZER = "sgd"
BACKEND = "tensorflow"
OPTIMIZER_OPTIONS = {
    "optimizer": "Adam",
    "learning_rate": 0.01,
    "nmessage": 1,
    "epochs": 1000,
}
```

Note also that it is better to use the tensorflow backend only when using the TensorFlow SGD. For all the other optimizers, we should use numpy or qibojit.

In summary: we should write these instructions into the README.md files and add docstrings so that users know some reasonable parameter values.
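To make the "reasonable defaults" suggestion concrete, here is a minimal plain-Python sketch of merging user-supplied options with sane defaults. The helper name and the default dict are illustrative assumptions, not code from this repository:

```python
# Hypothetical sketch: fill in reasonable SGD defaults so that a user
# who passes no OPTIMIZER_OPTIONS still gets workable values.
# These names are illustrative and do not come from the repository.

DEFAULT_SGD_OPTIONS = {
    "optimizer": "Adam",
    "learning_rate": 0.01,
    "nmessage": 1,
    "epochs": 1000,
}

def build_optimizer_options(user_options=None):
    """Return SGD options with defaults filled in for any missing keys."""
    options = dict(DEFAULT_SGD_OPTIONS)
    options.update(user_options or {})
    return options

# User overrides only the learning rate; everything else keeps its default.
opts = build_optimizer_options({"learning_rate": 0.005})
```

A pattern like this, documented in the README, would let users see both the expected keys and their reasonable starting values in one place.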
I'd say it's quite confusing. For example, I tried to follow the documentation, starting from

```python
import cma
cma.CMAOptions()
```

which points to the external `cma` documentation, but this

```python
param = params_history[-1]
(
    partial_results,
    partial_params_history,
    partial_loss_history,
    partial_grads_history,
    partial_fluctuations,
    vqe,
) = train_vqe(
    deepcopy(ansatz_circ),
    ham,  # fixed hamiltonian
    optimizer,
    param,
    tol=tol,
    niterations=maxiter,
    nmessage=nmessage,  # show log info
    loss=objective_boost,
    training_options={"maxiter": maxiter},
)
params_history.extend(np.array(partial_params_history))
loss_history.extend(np.array(partial_loss_history))
grads_history.extend(np.array(partial_grads_history))
fluctuations.extend(np.array(partial_fluctuations))
```

is not terminating.
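For reference, this is the kind of tolerance-based stopping behaviour one would expect `tol` to provide. A plain-Python sketch (the function and its parameters are illustrative, not the repository's code):

```python
# Illustrative sketch of a tolerance-based stopping criterion of the
# kind a user would expect `tol` to enforce in train_vqe.
# None of these names come from the repository.

def minimize_with_tol(grad, x0, lr=0.1, tol=1e-2, max_iters=10_000):
    """Gradient descent that stops once the update falls below `tol`."""
    x = x0
    for it in range(max_iters):
        step = lr * grad(x)
        x -= step
        if abs(step) < tol:
            return x, it  # terminated by the tolerance, not by max_iters
    return x, max_iters

# Minimize f(x) = x**2, whose gradient is 2x: terminates in a few
# dozen iterations instead of running until max_iters.
x_min, iters = minimize_with_tol(lambda x: 2 * x, x0=5.0)
```

With a check like this, `tol=1e-2` would actually terminate the run, which is what makes the current behaviour surprising.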
At minimum, I'd say there is a logging bug. It's not an issue for the paper submission, because you know how to run it, but for outside users this is difficult. #101 might be fixing it?
Yes, this is kind of expected (but, as you say, it has to be clarified)!
In `train_vqe` in `main.py`, the optimizer options are given by the argument `optimizer_options`. However, the description in the `help` documentation is unclear (without example code, a general user wouldn't know what to put there), the default value for `nepochs` is unrealistic (100000), and `tol` does not help terminate the code.

For example, when `optimizer='sgd'` and `tol=1e-2`, and we run the following code, which does not specify `optimizer_options`, the code runs almost indefinitely, like so:

In the scenario where `optimizer='cma'` (`backend='tensorflow'`), the loss function fluctuates strongly (changes sign).

In summary, the default value for `nepochs` in the `optimizers.optimize` function in `ansazte.py` may need to be more realistic for the general user. It may also help if the `help` documentation had more detailed descriptions of the `optimizer_options`. Moreover, we may need to check whether `'cma'` is running correctly.
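As a sketch of the `help`-documentation fix suggested above, the option could be documented with an inline example and a realistic default. The flag name, JSON format, and default below are assumptions for illustration only, not the repository's actual CLI:

```python
# Illustrative sketch (not the repo's actual CLI) of a self-documenting
# optimizer_options argument: the help string shows an example dict and
# the default nepochs is realistic rather than 100000.
import argparse
import json

parser = argparse.ArgumentParser()
parser.add_argument(
    "--optimizer_options",
    type=json.loads,
    default='{"nepochs": 1000}',
    help=(
        "JSON dict of options forwarded to the optimizer, e.g. "
        '\'{"optimizer": "Adam", "learning_rate": 0.01, "nepochs": 1000}\'. '
        "Default: nepochs=1000."
    ),
)

args = parser.parse_args(["--optimizer_options", '{"nepochs": 500}'])
```

Note that because the default is a string, argparse runs it through `json.loads` as well, so omitting the flag still yields a dict with a workable `nepochs`.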