
[FEA] Nightly build of latest stable #120

Open
lmeyerov opened this issue Aug 27, 2020 · 3 comments

@lmeyerov

Is your feature request related to a problem? Please describe.

Issue rapidsai/cudf#6096 caused an expensive fire drill involving core members across multiple projects

Assuming the current stable release breaks again in the future for one of many reasons, and given that stable releases are out for ~6 weeks, a useful thing may be something along the lines of daily build tests of the last 2 releases (~3 months of coverage).

Describe the solution you'd like

  • ~Daily builds of the last ~2 stable releases, giving coverage of releases for the last ~3 months
  • Tests standalone (per-repo) + combined
  • For case of individual repos fine but combined failing, treat as a cudf issue (?)
  • Visibility - regular user: Version status badge ("0.14 passing") on each repo (self + global)
  • Visibility - new user: rapids.ai site
  • Visibility - core team: ?

I'm less clear on the relative value of going across the full gamut of CUDA etc. versions. AFAICT, the 80% case is probably around conda: Python version x set of RAPIDS packages.
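To make the scope concrete, here is a minimal sketch of the coverage matrix implied above, assuming the last ~2 stable releases crossed with the Python versions they target; the version numbers and package names below are illustrative assumptions, not an official support list:

```python
# Hypothetical nightly coverage matrix: last ~2 stable RAPIDS releases
# crossed with supported Python versions, solved both per-repo
# ("standalone") and via the metapackage ("combined").
# All version numbers and package names here are illustrative.
RAPIDS_VERSIONS = ["0.15", "0.14"]          # latest stable + previous stable
PYTHON_VERSIONS = ["3.6", "3.7", "3.8"]
STANDALONE_PACKAGES = ["cudf", "cuml", "cugraph"]
METAPACKAGE = "rapids"

def matrix():
    """Yield (rapids_version, python_version, package_specs) combos to check."""
    for rv in RAPIDS_VERSIONS:
        for pv in PYTHON_VERSIONS:
            for pkg in STANDALONE_PACKAGES:          # per-repo solves
                yield rv, pv, [f"{pkg}={rv}"]
            yield rv, pv, [f"{METAPACKAGE}={rv}"]    # combined solve
```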

@kkraus14
Contributor

@lmeyerov This is a huge ask that would require a lot of resources, both in engineering hours and in compute, to run all of the builds and tests; it unfortunately isn't feasible for us currently.

I think one of the challenges here is that the number of libraries in RAPIDS is continuously growing, and as new libraries are added they're not necessarily as mature as, say, cuDF or cuML with respect to their dependency trees. Perhaps we should define some type of "graduation" process for being included in the rapids metapackage to make it a bit more of a "controlled" experience.

In the meantime, I think a middle ground here can be that we treat conda solving issues in the current stable release metapackage with more attention.

@lmeyerov
Author

lmeyerov commented Aug 27, 2020

Yeah, I get the concern about doing a full test suite run. Since a lot of it seems to be the mess that is pydata conda deps, maybe a nightly docker run of `conda install rapids=x, x-1` across py=3.6-3.8 can go far, with the solver timed out after, say, 5 min.
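A minimal sketch of what such a nightly check could look like, assuming a plain `conda create --dry-run` against the public rapidsai/nvidia/conda-forge channels is enough to exercise the solver; the version numbers, channel list, and 5-minute timeout are assumptions taken from the suggestion above, not an official RAPIDS script:

```python
# Hypothetical nightly conda-solve smoke test: try to solve the rapids
# metapackage for the last two stable releases across Python versions,
# treating a hung solver as a failure. Versions and channels are assumptions.
import itertools
import subprocess

RAPIDS_VERSIONS = ["0.15", "0.14"]
PYTHON_VERSIONS = ["3.6", "3.7", "3.8"]
SOLVER_TIMEOUT_S = 5 * 60  # "timeout the solver after say 5min"

def solve_ok(rapids_version, python_version):
    cmd = [
        "conda", "create", "--dry-run", "--name", "rapids-smoke",
        "-c", "rapidsai", "-c", "nvidia", "-c", "conda-forge",
        f"rapids={rapids_version}", f"python={python_version}",
    ]
    try:
        return subprocess.run(cmd, capture_output=True,
                              timeout=SOLVER_TIMEOUT_S).returncode == 0
    except subprocess.TimeoutExpired:
        return False  # solver hang counts as a failure

if __name__ == "__main__":
    failures = [(rv, pv)
                for rv, pv in itertools.product(RAPIDS_VERSIONS, PYTHON_VERSIONS)
                if not solve_ok(rv, pv)]
    for rv, pv in failures:
        print(f"FAIL: rapids={rv} python={pv} did not solve")
    raise SystemExit(1 if failures else 0)
```

Something like this run in a nightly docker job, with the exit status surfaced as a per-version badge, would seem to cover the visibility points above.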

It's problematic enough that we publish + use stable base docker RAPIDS builds on our side ( https://hub.docker.com/orgs/graphistry/repositories ) and release them every month or two, though that still leaves us in an occasionally sensitive spot in practice when there are CVEs, updating base deps, doing clean builds, etc. For folks less deep into the ecosystem, having the public stable / recommended release fail to conda install isn't good; that's normally expected only for nightlies.

EDIT: re "the number of libraries in RAPIDS is continuously growing and as new libraries are added ...": that sounds more like an alarm bell calling for more automation here to support the growth, as it'll happen more, not less, and hit more devs + users.

@kkraus14
Contributor

This isn't a cudf issue so moving to the integration repo as that's where the metapackage is produced.

@kkraus14 transferred this issue from rapidsai/cudf on Aug 27, 2020