Hello all,

My goal is to develop new locomotion control algorithms for SUPERball. I therefore have a controller with a number of open parameters that need to be set, and the learning library seemed appropriate for tuning them. I have a few questions regarding its use.
Everything seems to run, but I am not sure I am interpreting the output correctly. My actions are scaled and returned as expected, but when I turn off the learning library and enter the "optimized parameters" manually, the resulting distance under the graphical output of tgSimViewGraphics is not at all close to what I was obtaining with tgSimView. Why would that be the case?
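For context, this is roughly how I switch between the headless and graphical runs. It is a simplified sketch of the usual NTRT app structure; the timestep and gravity values are just the ones I happen to use, and mySUPERballModel stands in for my actual model:

```cpp
#include "core/tgSimulation.h"
#include "core/tgSimView.h"
#include "core/tgSimViewGraphics.h"
#include "core/tgWorld.h"

int main(int argc, char** argv)
{
    const tgWorld::Config config(98.1); // gravity, in the app's length units
    tgWorld world(config);

    const double timestep_physics  = 0.001;    // seconds per physics step
    const double timestep_graphics = 1.0/60.0; // seconds per rendered frame

    // The physics timestep is shared by both branches: if the headless and
    // graphical runs step the engine differently, trajectories diverge.
    const bool useGraphics = false; // flip to true for tgSimViewGraphics
    tgSimView* view;
    if (useGraphics)
        view = new tgSimViewGraphics(world, timestep_physics, timestep_graphics);
    else
        view = new tgSimView(world, timestep_physics, timestep_graphics);

    tgSimulation simulation(*view);
    // simulation.addModel(mySUPERballModel); // model + controller attach here
    simulation.run(60000); // 60 s of simulated time at 1 ms per step
    return 0;
}
```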
Additionally, I am not sure I am reading the log files correctly. While evolutiondefault.csv and scores.csv update as expected over my series of episodes, the bestParameters files contain parameters that I never see during my run (I std::cout everything to the console and have checked that they really do not appear). Shouldn't the best parameters reflect the best values returned by the adapter?
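In case it helps to reproduce, this is roughly how I log each episode for the comparison. logEpisodeParams is my own debugging helper, not part of the learning library, and the vector-of-vectors layout is simply how I store the scaled actions the adapter returns:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Prints one line per episode so the console log can be diffed against
// scores.csv and the bestParameters files afterwards.
void logEpisodeParams(const std::vector<std::vector<double> >& actions,
                      double score)
{
    std::cout << "score=" << score << " params=";
    for (std::size_t i = 0; i < actions.size(); ++i)
    {
        for (std::size_t j = 0; j < actions[i].size(); ++j)
        {
            std::cout << actions[i][j] << ",";
        }
    }
    std::cout << std::endl;
}
```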
Finally, I am not sure the results of one generation are being used in the next. There are a few generations where the distance traveled is relatively high, but in the following ones it drops back to relatively low values. I have tried many combinations of the learning, startSeed, coevolution, leniencyCoef and MonteCarlo parameters, but I still have not achieved the results I am hoping for. The relevant block of my config.ini is shown below.
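These are just the five keys mentioned above, with the values from my latest attempt; the rest of my config.ini is unchanged, and I am assuming the plain key=value format the library's configuration reader expects:

```
learning=1
startSeed=1
coevolution=0
MonteCarlo=0
leniencyCoef=1.0
```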
Thanks in advance for your help!
I wanted to paste the key insight from a discussion we had over email so this is public knowledge:
I think your issue is due to your model penetrating the ground plane on startup. See the attached screenshots. I saw a similar issue a few months back, and changing the model's starting Y position solved it; a sketch of the fix is below.
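For reference, the one-line version of that fix inside a model's setup() looks roughly like this. It assumes your model builds a tgStructure named s before handing it to the build spec, and the offset value is illustrative:

```cpp
// After adding nodes, pairs, and tags to the tgStructure, lift the whole
// structure so that no rigid body starts below the ground plane at y = 0.
// 10.0 is in the model's length units -- tune it to your rod lengths.
s.move(btVector3(0.0, 10.0, 0.0));
```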
The trick here is to:
(1) Turn on graphics
(2) Pause the simulation with the Home key
(3) Use the spacebar to reset
One could piece this together from the documentation (http://ntrt.perryb.ca/doxygen/keys.html), but it's such a valuable debugging technique that I think I should write it up as a tutorial. Where's the best place for that?
I'll close this issue once the tutorial is up and you confirm that changing the model gets the results you're looking for.