python: state.observation_tensor() creates a new state... #1068
Comments
Do you have a pointer to code? I did a search for
The
It's the call to
Ok, I think this question is more about how the Python games work than about R-NaD. The code for those functions is here: https://github.com/deepmind/open_spiel/blob/master/open_spiel/python/pybind11/python_games.cc Is it possible that there are two observers? Without seeing the code of your game or a stack trace, it is a bit difficult to help out. But I will tag our resident expert on Python games in case he has any ideas: @elkhrt
You are right, R-NaD was just a trigger for the issue. Thank you very much for the suggestions on where to look in the code.
Ok, perfect, thanks for the detail. I will check with @elkhrt to see if this was known. Would be nice if we could avoid this.
Thanks for flagging and the great investigation! This is indeed a bit unsatisfactory. Probably the simplest option is to add two fields to PyGame:
And then modify the methods to cache the data there:
That still results in one superfluous call per game object, but that's pretty cheap.
Sorry for the delayed answer. As you are more familiar with the code, you may want to make the change.
@elkhrt did we resolve this?
We didn't make a change to the core functionality; @too-far-away found a work-around for their case.
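The two fields and the modified method bodies were not captured in this copy of the thread. As a rough illustration only (a minimal pure-Python sketch; the class name `PyGame`, the field name, and the dummy shape are all hypothetical, not the actual C++ change to `python_games.cc`), the caching pattern could look like:

```python
class PyGame:
    """Hypothetical sketch of caching observation-tensor metadata.

    The real change would be to the C++ PyGame in python_games.cc; this
    only illustrates the pattern: build one throwaway state on the first
    query, then serve every later query from the cached value.
    """

    def __init__(self):
        self._observation_tensor_shape = None  # cache field (assumed name)
        self.states_created = 0                # instrumentation for the demo

    def new_initial_state(self):
        self.states_created += 1
        return object()  # stand-in for a real State

    def observation_tensor_shape(self):
        # First call pays the cost of one superfluous state; subsequent
        # calls hit the cache and create no new state at all.
        if self._observation_tensor_shape is None:
            _throwaway = self.new_initial_state()
            self._observation_tensor_shape = (3, 3)  # dummy value
        return self._observation_tensor_shape


game = PyGame()
game.observation_tensor_shape()
game.observation_tensor_shape()
assert game.states_created == 1  # the throwaway state is built only once
```

This matches the comment above: one superfluous call per game object remains, but it is paid once rather than on every `observation_tensor()` call.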
I've been playing with open_spiel's R-NaD algorithm implementation in Python and noticed some strange behavior: each time R-NaD calls
state.observation_tensor()
a new state is created, then there is a call to observation.set_from(new_state), and only then the original observation.set_from(state) is called. It also looks like the new state is not a clone of the original one. My game implementation is in Python. Here is an excerpt from its configuration:
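The configuration excerpt itself was lost from this copy of the issue. For orientation only, a typical Python game's `GameType` declaration in OpenSpiel looks roughly like the following (every value here is a placeholder, not the author's actual settings; see the example games under `open_spiel/python/games/` for real declarations):

```python
import pyspiel

# Placeholder configuration; not the author's actual game.
_GAME_TYPE = pyspiel.GameType(
    short_name="my_game",
    long_name="My Game",
    dynamics=pyspiel.GameType.Dynamics.SEQUENTIAL,
    chance_mode=pyspiel.GameType.ChanceMode.EXPLICIT_STOCHASTIC,
    information=pyspiel.GameType.Information.IMPERFECT_INFORMATION,
    utility=pyspiel.GameType.Utility.ZERO_SUM,
    reward_model=pyspiel.GameType.RewardModel.TERMINAL,
    max_num_players=2,
    min_num_players=2,
    provides_information_state_string=False,
    provides_information_state_tensor=False,
    provides_observation_string=True,
    provides_observation_tensor=True)
```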
There is a great chance I'm doing something wrong. On the other hand, I could not find any issue related to the described behavior in my code. I also tried to find the actual code of
State::ObservationTensor()
, but I guess the implementation of the virtual method is in pyspiel.State,
which, to my embarrassment, I was not able to find. Please advise.
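One way to confirm which state each `set_from` call actually receives is to wrap the observation's `set_from` and log state identities. This is a hedged debugging sketch: `trace_set_from` and `FakeObservation` are invented here for illustration and are not OpenSpiel API.

```python
def trace_set_from(observation, log):
    """Wrap observation.set_from so each call records the state's id()."""
    original = observation.set_from

    def wrapper(state, *args, **kwargs):
        log.append(id(state))  # record which state instance was passed
        return original(state, *args, **kwargs)

    observation.set_from = wrapper
    return observation


# Demo with a stand-in observation class (no open_spiel needed here);
# with a real observation object, two entries with different ids would
# reveal the fresh-state-then-original-state call order described above.
class FakeObservation:
    def set_from(self, state, player=0):
        pass


log = []
obs = trace_set_from(FakeObservation(), log)
s1, s2 = object(), object()
obs.set_from(s1)
obs.set_from(s2)
assert log == [id(s1), id(s2)]
```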