Problem
I would like to contribute. I hoped that some tests would run on CI for #210. I see that there is no linting as per:
The package appears to have three suites of tests documented in packages/jupyter-ai/README.md:
1. Python tests
(Removed note about `pip install -e .[tests]` not working; it works if run from `packages/jupyter-ai`.) I see only one commented-out test file (a sketch of a possible starter test follows this list).
2. Frontend tests
There are only two tests: one skipped and one useless.
3. Integration tests
There is only a smoke test.
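For concreteness, here is a minimal sketch of what an initial committed Python test could look like. The file path, the importable module name `jupyter_ai`, and the presence of a `__version__` attribute are my assumptions (following common Jupyter packaging conventions), not details confirmed from the repository:

```python
# Hypothetical file: packages/jupyter-ai/jupyter_ai/tests/test_smoke.py
# A minimal starter test; the path and the assumption that the importable
# module is `jupyter_ai` with a conventional `__version__` attribute are
# assumptions, not confirmed from the repository.
import importlib


def test_package_imports():
    # After `pip install -e .[tests]` from packages/jupyter-ai,
    # the package should at least import cleanly.
    module = importlib.import_module("jupyter_ai")
    assert module is not None


def test_version_is_exposed():
    # Jupyter packages conventionally expose __version__; treat this
    # as an assumption rather than a documented guarantee.
    module = importlib.import_module("jupyter_ai")
    assert isinstance(getattr(module, "__version__", None), str)
```

Even a smoke test along these lines would let CI fail meaningfully instead of silently running nothing.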
Proposed Solution
If tests were written but not committed, commit them. If tests were not written, either add a note for contributors or plan to write them for newly added code. My issue is mostly about managing expectations: I do not think contributors should be expected to debug why tests (which are not there) do not run, and users should know that the package has no tests yet.
Additional context
Jupyter Criteria for official Subprojects include:

> Use solid software engineering with documentation and tests hosted with appropriate technologies (Read The Docs and Travis are examples of technologies that can be used).