Add the Chronicle Server chart #441
Conversation
@colearendt @shahmonanshi what do you two think of this chart in its current form? I've added you as a reviewer too @tnederlof, as I see you've been making some cross-cutting changes to this repo as well.
Just a few nits. Totally fine to update at a later date 😄 I try to avoid StatefulSets if possible, but if you have a good reason to use them (slower to roll over, etc.), then carry on!
@@ -0,0 +1,80 @@
---
apiVersion: apps/v1
kind: StatefulSet
Interesting!! Why is a StatefulSet important? Is that for a caching disk or some such?
It's mostly a convenience for PVC provisioning if you're running with local storage enabled; persistent storage via a PVC is a must-have for most use cases. It's also nice not to have to worry about duplicate ReplicaSets being created on Helm updates, and the pod replacement policy of StatefulSets is generally simple and unproblematic for Chronicle.
Theoretically we could run Chronicle on a Deployment/ReplicaSet model; there aren't too many technical blockers to think of. The StatefulSet model has worked well so far, though.
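To illustrate the provisioning convenience mentioned above, here's a minimal sketch of how a StatefulSet creates one PVC per replica via `volumeClaimTemplates`. This is not the chart's actual template; the name, image, and mount path are placeholders:

```yaml
# Minimal illustration only -- not the real chart template.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: chronicle-server            # placeholder name
spec:
  serviceName: chronicle-server
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: chronicle
  template:
    metadata:
      labels:
        app.kubernetes.io/name: chronicle
    spec:
      containers:
        - name: chronicle
          image: example.io/chronicle:0.1.0   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/chronicle   # assumed data path
  # One PVC per replica is created automatically and re-attached to the
  # replacement pod across restarts -- the convenience described above.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```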
That makes sense.
It's worth noting that StatefulSets do have some downsides: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#limitations
Usually services that use StatefulSets have a particular reason to do so. Otherwise, ReplicaSets/Deployments are the more standard choice.
If you use a PVC, does each Chronicle node store its own data? Or do they share data storage?
That's tough; we do have a choice to make for sure. As a developer I feel like it's nice to have those PVCs deleted by default, though the StatefulSet docs take the opposite stance:

> Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet. This is done to ensure data safety, which is generally more valuable than an automatic purge of all related StatefulSet resources.

On the other hand, we are trying to build a data archive too. If we scaled a Chronicle server from 1 to 5 replicas for a day and stored a bunch of data, it would be nice if, two weeks later when we tried scaling 1 to 5 again, we found our old data in those local storage systems and could add to it.
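For what it's worth, newer Kubernetes versions expose this exact trade-off as a StatefulSet field (the `StatefulSetAutoDeletePVC` feature, introduced as alpha in 1.23). A hedged sketch of what retaining data across scale-downs could look like:

```yaml
# Sketch only; requires a Kubernetes version where the
# StatefulSetAutoDeletePVC feature is available.
apiVersion: apps/v1
kind: StatefulSet
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # purge PVCs when the whole StatefulSet is deleted
    whenScaled: Retain    # keep old replicas' data across 1 -> 5 -> 1 scaling
```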
> If you use a PVC, does each Chronicle node store its own data? Or do they share data storage?

Yeah, each Chronicle server does need a dedicated PVC; that's a known technical limitation in the alpha right now. We need to rewrite some in-memory locks as file locks for our compactor engine, and then we can support NFS or shared storage. We'll definitely need to make some Helm changes around that too.
This conversation has made me think of an epic that the Chronicle team should probably undertake: https://github.com/rstudio/chronicle/issues/715.
I made a note there about looking into Deployments vs. StatefulSets and seeing what's more practical in our testing.
Looks good, just some comments about using the `latest` tag.
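For context, the gist of that feedback in a hypothetical values.yaml excerpt (the repository and keys are illustrative, not the chart's actual values):

```yaml
image:
  repository: ghcr.io/example/chronicle   # placeholder registry/repository
  tag: "0.1.0"            # pin an explicit version rather than "latest",
                          # so upgrades are deliberate and reproducible
  pullPolicy: IfNotPresent
```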
Nice work! I left a few nit-picky comments. Wherever we can drive consistency across the other products, I think it is very helpful for our end users!
P.S. I have yet to try deploying the Helm chart, but I plan to. I will update this PR once I'm done to report whether I run into any issues.
Thanks everyone for your feedback! I don't think I disagree with anything; I made about 80-85% of the suggested adjustments and noted the rest in GitHub issue tracking.
Background
Part of https://github.com/rstudio/chronicle/issues/683.
We are migrating this from a private Posit repo, where we've been hosting the chart previously.
Summary
charts/_templates.gotmpl
Notes
This PR is still an early draft, but I'd like to get some early feedback if possible, so I'm opening it now for discussion.
I've added a number of new features to the Chronicle chart, like pod tolerations, affinities, nodeSelectors, etc. (see the sketch below), though I didn't add every feature under the sun. I'm on the lookout for more basic features we think the Chronicle chart should include.
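As an illustration of those scheduling features, a hypothetical values.yaml excerpt (the exact keys depend on the chart's templates and may differ):

```yaml
nodeSelector:
  kubernetes.io/os: linux
tolerations:
  - key: dedicated
    operator: Equal
    value: chronicle
    effect: NoSchedule
affinity:
  podAntiAffinity:
    # Prefer spreading Chronicle pods across nodes when possible.
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: chronicle
```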