return poll status after first load finish #742
Conversation
Thanks for submitting your first pull request! You are awesome! 🤗
I think it's super tricky to follow how KubeSpawner works with this, so I'm struggling to review it. Did you run into issues with this using enable_user_namespaces set to True, @ivyxjc?
I ran into this issue and am encountering the following problem: when the hub starts, it deletes some users' servers from the database.
kubespawner/spawner.py
Outdated

```python
reflector = await self._start_watching_pods()
if not reflector.first_load_future.done():
    await reflector.first_load_future
```
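For context, here is a minimal sketch of the pattern the diff above relies on, using a toy `Reflector` class (not KubeSpawner's actual implementation): the reflector exposes a future that resolves once the first listing of cluster state has been loaded, and callers await it before trusting the cache.

```python
import asyncio


class Reflector:
    """Toy reflector: keeps a local cache of cluster state up to date."""

    def __init__(self):
        self.pods = {}  # cache is empty until the first LIST completes
        self.first_load_future = asyncio.Future()

    async def start(self):
        await asyncio.sleep(0.1)  # simulate the initial LIST API call
        self.pods = {"jupyter-alice": "Running"}
        if not self.first_load_future.done():
            self.first_load_future.set_result(None)


async def main():
    reflector = Reflector()
    task = asyncio.create_task(reflector.start())
    # Same guard as in the diff above: await the first load if it
    # hasn't finished yet, so the cache can be trusted afterwards.
    if not reflector.first_load_future.done():
        await reflector.first_load_future
    print(reflector.pods)  # {'jupyter-alice': 'Running'}
    await task


asyncio.run(main())
```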
@minrk I think this makes sense, but I would appreciate your review help. I'm especially wondering whether we should put this code in _start_watching_pods or _start_reflector.

Currently, self._start_watching_pods is awaited from _start, stop, and poll, where _start also awaits self._start_watching_events. Should we do the await on first_load_future, if needed, from _start_reflector?
I think awaiting it in _start_reflector makes sense. If I were to guess based on my foggy memory of the distant past, _start_reflector perhaps used to have a requirement to not be async, so it couldn't wait for things? That doesn't appear to be the case anymore.
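A rough sketch of what moving the await into _start_reflector could look like, with toy classes standing in for KubeSpawner's actual ones (only the method name and first_load_future come from the discussion above):

```python
import asyncio


class ToyReflector:
    def __init__(self):
        self.first_load_future = asyncio.Future()

    async def start(self):
        await asyncio.sleep(0)  # pretend to perform the initial LIST
        self.first_load_future.set_result(None)


class ToySpawner:
    async def _start_reflector(self):
        reflector = ToyReflector()
        asyncio.ensure_future(reflector.start())
        # Awaiting here means every caller (_start, stop, poll) gets a
        # reflector whose cache has been loaded at least once.
        if not reflector.first_load_future.done():
            await reflector.first_load_future
        return reflector


asyncio.run(ToySpawner()._start_reflector())
```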
We've implemented a copy of this fix. We did not test the implementation in the original project.
Thanks for the comment @danilopeixoto! I think that this bug may be the cause of jupyterhub/mybinder.org-deploy#2686 leaving orphan pods taking up space on mybinder.org. I moved the await of first_load to inside _start_reflector, so it's always awaited and hopefully less likely to get missed.
Currently, the spawner does not wait for the reflector's first load to finish, so it cannot detect the running pod and returns an incorrect status to the hub.
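To illustrate the failure mode, here is a simplified sketch (hypothetical names, not KubeSpawner's actual poll implementation): if poll consults the reflector's cache before the first load has completed, the cache is still empty, so a pod that is actually running looks stopped, and the hub then cleans up the user's server record.

```python
# Reflector cache: still empty because the first load hasn't finished yet.
pods = {}


def poll(pod_name):
    # JupyterHub convention: None means "running", an integer is an exit status.
    if pod_name not in pods:
        return 1  # "stopped" -- wrong if the pod exists but isn't cached yet
    return None


print(poll("jupyter-alice"))  # 1, even though the pod may well be running
```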
Update by Erik
This is a bugfix for a regression introduced with KubeSpawner 6.0.0 and Z2JH 3.0.0 (or the pre-release 3.0.0-alpha.1 or the development release 3.0.0-0.dev.git.6133.hbfc583f8). It is resolved in KubeSpawner 6.1.0 and Z2JH 3.1.0.

For more information and help cleaning up orphaned user pods, see https://discourse.jupyter.org/t/how-to-cleanup-orphaned-user-pods-after-bug-in-z2jh-3-0-and-kubespawner-6-0/21677