Remove expired snapshots #29
Conversation
Bump vault version to 1.16.3, the latest community release: https://github.com/hashicorp/vault/blob/main/CHANGELOG.md#1163
This adds the variable S3_EXPIRE_DAYS. The idea of this feature is to allow the script to prune expired snapshot files on the S3-compatible remote storage. Files are considered expired once they exceed the threshold defined by S3_EXPIRE_DAYS. This feature is useful for S3-compatible storage providers that offer no lifecycle rules to clean up expired or old files, such as:

* cloudscale object storage
* Exoscale simple object storage (SOS)

It is recommended to also configure a "Governance" lock on the files, to ensure no files are deleted by accident before the defined S3_EXPIRE_DAYS threshold.
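A minimal sketch of what such a pruning pass could look like. Only S3_EXPIRE_DAYS comes from this PR; the bucket/prefix, the assumption that `s3cmd ls` prints `DATE TIME SIZE URI` columns, and the busybox-compatible date parsing are illustrative assumptions, not the PR's actual implementation:

```sh
#!/bin/sh
# Hypothetical pruning sketch - bucket/prefix and listing-format assumptions are illustrative.
S3_EXPIRE_DAYS="${S3_EXPIRE_DAYS:-30}"
S3_URI="s3://example-bucket/vault-snapshots/"

# Cut-off as epoch seconds: "now" minus the retention window.
cutoff=$(( $(date +%s) - S3_EXPIRE_DAYS * 86400 ))

# `s3cmd ls` prints one object per line as: DATE TIME SIZE s3://bucket/key
s3cmd ls "$S3_URI" | while read -r day time _size uri; do
  # Parse the listing timestamp with an explicit format so busybox date accepts it.
  obj_epoch=$(date -u -D '%Y-%m-%d %H:%M' -d "$day $time" +%s) || continue
  if [ "$obj_epoch" -lt "$cutoff" ]; then
    echo "Deleting expired snapshot: $uri"
    s3cmd del "$uri"
  fi
done
```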
also tried with
Seems like this busybox date thingy has some different logic. This works, for example, to subtract 1 day:
The date manipulation did not work in my tests with busybox on OpenShift. This should work even in busybox environments: it simply subtracts seconds.
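A minimal sketch of that calculation, assuming only POSIX arithmetic and `date +%s` are available (i.e. no GNU extensions such as `date -d "7 days ago"`); S3_EXPIRE_DAYS is the variable introduced in this PR:

```sh
# Expiry cut-off computed purely with integer arithmetic:
# current epoch seconds minus the retention window in seconds.
S3_EXPIRE_DAYS="${S3_EXPIRE_DAYS:-30}"
cutoff=$(( $(date +%s) - S3_EXPIRE_DAYS * 86400 ))

# Any snapshot whose timestamp, converted to epoch seconds, is below $cutoff
# has exceeded the S3_EXPIRE_DAYS threshold and can be pruned.
```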
Ok, new logic seems to work fine. For example, it deletes all snapshots except for the ones from today with
However, when I do So, I think I'm back to my issue with "what API commands s3cmd is actually doing"..
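Not part of this PR, but one way to answer that question: s3cmd can print the HTTP requests it issues when run with its debug flag, which shows which S3 API operations a given command translates to (bucket/prefix below are illustrative):

```sh
# Show the raw HTTP requests/responses s3cmd sends, to see which S3 API
# operations back a given command (e.g. the listing behind `ls`, the delete behind `del`).
s3cmd --debug ls s3://example-bucket/vault-snapshots/ 2>&1 | grep -i 'DEBUG'
```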
Had a bug in my script. Was evident from the output.
@tongpu the scripting is on the next level now. It was actually tested on an OpenShift cluster. I still have some confusion regarding what a Governance lock does exactly (I could not notice any difference in the deletion behavior, the files are "archived", not deleted), but we can also take this discussion offline. Thank you for another round of review 🙏
I hope the auto-tagging workflow does not explode 🤞
In AWS S3 you can use lifecycle rules to remove expired objects.
However, I was wondering how this works on other S3-compatible storage services where lifecycle rules are not implemented, such as Exoscale (see limitations) or cloudscale?
I think there needs to be some process that regularly iterates all the objects and decides which ones to prune, no?
Let's discuss..
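For the AWS-style case, a hedged sketch of delegating expiry to the server: s3cmd exposes lifecycle configuration through its `expire` command (bucket name, prefix, and retention below are illustrative). On providers without lifecycle support, a client-side pruning loop like the S3_EXPIRE_DAYS one above is the fallback:

```sh
# On S3 servers that implement lifecycle configuration, expiry can be handled
# server-side instead of by a client-side pruning loop.
s3cmd expire s3://example-bucket --expiry-days=30 --expiry-prefix=vault-snapshots/

# Inspect the currently configured lifecycle policy.
s3cmd getlifecycle s3://example-bucket
```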