Test Discovery fails on large codebases #21922
Hi! Thank you for your detailed report. We are excited to hear your company might move to VS Code and would love to make that happen! One first step which would be helpful is if you could check whether you are on the new testing rewrite. I am curious if this helps with efficiency, as we have switched to invoking pytest directly. Thanks!
@eleanorjboyd I have attempted to do so in my local user settings, as well as in my project settings (which warned me to use the user settings instead). Neither of these made the test debugging any better: I am still facing the same errors as before. Since you are still running pytest collection in your rewrite, I am not surprised that things are not better. I would like to be clear: this is a blocker for us. If we can't get test discovery and running to work, then we (the devx team) can never recommend VS Code to the engineers and support it as a first-class citizen. For me there are 2 possible ways to proceed:
Please let me know which one is ideal, or if there is a third way I have not thought of. EDIT: To highlight the severity: my remote development VM just crashed from excessive memory use when I ran test discovery.
We're sorry to hear that, but we would rather not circumvent pytest for discovery, which the vast majority of our users rely on and expect us to use directly (due to pytest plug-ins, and because it is how pytest operates when run from the terminal). We purposefully try to integrate with tools by providing a VS Code UI on top of them, not by circumventing them. If you can create a pytest plug-in that does what you want, then that will work in the future: we are working toward allowing users to provide distinct settings for discovery, which would let you specify your plug-in only in the discovery phase. Otherwise I'm afraid you're looking at writing your own Python test extension.
So if I am understanding correctly, in the future I can make a plugin that handles just test discovery -- i.e., it passes some kind of list of objects specifying the tests that were found and can be run, and the official Python extension takes it from there (handles executing and displaying). If that is the case, then that is awesome and would totally work. What is the timeline for this?
@karthiknadig, could you confirm this from a technical perspective?
You make a pytest plug-in and if you can get pytest to do what you want then that will be doable as we are working on letting you specify the command we pass to pytest for discovery separate from execution. But I don't know if pytest specifically gives you that level of flexibility to short-circuit discovery and take it over completely. Our own code to find out what tests you have comes as a pytest plug-in itself where we just inspect the tests that happen to be found.
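To make that last point concrete: a plug-in can observe pytest's collection through its hook system roughly as follows. This is a minimal sketch using pytest's standard `pytest_collection_modifyitems` hook, not the extension's actual plug-in; the `CollectionRecorder` name is made up for illustration.

```python
# A pytest plug-in here is just an object whose method names match
# pytest's hook names. We record the node ID of every collected test.

class CollectionRecorder:
    """Register with `pytest.main([...], plugins=[recorder])`."""

    def __init__(self):
        self.test_ids = []

    def pytest_collection_modifyitems(self, items):
        # Called by pytest after collection finishes; `items` is the
        # list of discovered test items, each with a `nodeid` like
        # "tests/test_foo.py::test_bar".
        for item in items:
            self.test_ids.append(item.nodeid)
```

You would run it with something like `pytest.main(["--collect-only", "-q"], plugins=[CollectionRecorder()])` and read `test_ids` afterwards; this is the "inspect the tests that happen to be found" pattern described above.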
We are actively working on it, but otherwise we don't give out ETAs. I will also ask that you please be patient and give us a week to respond to your questions. We had a two-day team offsite and it was a holiday here in Canada on Monday.
Super, no worries! I hope you had a good offsite!
Hi @juandiegopalomino, our rewrite is now out via the experiment to 100% of stable users. We do not have a finalized timeline for when the rewrite will no longer be behind an experiment, but perhaps February next year, to give you an idea. This issue will track the progress of that rollout: #20086. I am going to close this issue, as that one will answer your question about when it is released. Thanks!
Type: Bug
Behaviour
NOTE: not sure whether this is a bug report or a feature request to solve a bug.
Hi! I am JD from the developer experience team at Rippling. We are exploring moving the company's official editor from PyCharm to VS Code, but we are running into some critical issues when trying to use the test runner/debugger feature.
For pytest at least, test discovery currently works by running pytest with the --collect-only flag to gather data about all current tests. For reasonably sized codebases this works quite well, is reasonably quick, and is a clever way of not re-inventing the wheel for test discovery.
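To illustrate what that collect-only pipeline looks like, here is a minimal sketch: run pytest's own collection in quiet mode and keep the test node IDs it prints. The `parse_collect_only` and `discover_tests` helpers are hypothetical names for illustration, not the extension's actual code.

```python
import subprocess

def parse_collect_only(output: str) -> list[str]:
    """Extract test node IDs from `pytest --collect-only -q` output.

    In quiet mode pytest prints one `path::test_name` line per
    collected test, followed by a summary line; only the node-ID
    lines contain `::`, so we keep just those.
    """
    return [line.strip() for line in output.splitlines() if "::" in line]

def discover_tests(path: str = ".") -> list[str]:
    # Delegate discovery entirely to pytest, as described above.
    result = subprocess.run(
        ["pytest", "--collect-only", "-q", path],
        capture_output=True, text=True,
    )
    return parse_collect_only(result.stdout)
```

The catch, as the rest of this report explains, is that the `subprocess.run` call here imports every test module and conftest in the tree, which is exactly what becomes slow and memory-hungry at scale.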
For large codebases with ~60K tests and ~300 dependencies, this becomes a problem. In our CI, on larger machines, pytest --collect-only takes a little over 3 minutes to run, and ~10 minutes on the developer VMs we tested this on. During this time, the editor's child process also consumes upwards of 16GB of RAM. Moreover, afterwards the test line placement was wrong and the editor began to show a myriad of UI errors, to the point that we just restarted it out of frustration.
It is not a big leap of imagination to consider that the issue comes from the fact that we're trying to collect an insane number of tests in one go. We understand why this default strategy was chosen, but are hard-pressed to find alternatives.
One solution which we've PoC'd, following this guide, was to not rely on pytest for test discovery, but rather to do a primitive syntax scan of the currently open Python files and dynamically identify the tests following pytest's simple naming conventions.
We found that this discovery strategy is quite snappy, takes up pretty much no resources, and works for almost all test cases (unless you're doing something really advanced with pytest/conftest).
Our ask is:
Expected vs. Actual
Expected: pytest discovery for the currently open test files completes quickly.
Actual: freezes for 10 minutes and then breaks.
Steps to reproduce:
Diagnostic data
python.languageServer setting: Default
Output for Python in the Output panel (View → Output, change the drop-down in the upper-right of the Output panel to Python)
User Settings
Extension version: 2023.14.0
VS Code version: Code 1.81.1 (Universal) (6c3e3dba23e8fadc360aed75ce363ba185c49794, 2023-08-09T22:20:33.924Z)
OS version: Darwin arm64 22.6.0
Modes:
System Info
canvas_oop_rasterization: disabled_off
direct_rendering_display_compositor: disabled_off_ok
gpu_compositing: enabled
metal: disabled_off
multiple_raster_threads: enabled_on
opengl: enabled_on
rasterization: enabled
raw_draw: disabled_off_ok
video_decode: enabled
video_encode: enabled
vulkan: disabled_off
webgl: enabled
webgl2: enabled
webgpu: enabled
A/B Experiments