[POC] Provisioner for SBOM #13171
Conversation
Left a bunch of comments on the code. I think we can simplify the download so it happens only once: technically, once we've copied the file locally for Packer, we can copy it to the user-specified destination (if one was specified). Alternatively, we can factorise the code for downloading, since it is very similar in both paths.
I'll let you address those comments and do another pass of review after that.
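A minimal sketch of the suggested flow, assuming the SBOM has already been downloaded once into the Packer-managed local file; `copyFile` is a hypothetical helper, not the provisioner's actual code:

```go
package sbom

import (
	"io"
	"os"
)

// copyFile illustrates the suggestion above: once the SBOM has been
// downloaded to the Packer-managed local file, a plain copy covers the
// optional user-specified destination, so the download itself only
// ever happens once.
func copyFile(src, dst string) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	_, err = io.Copy(out, in)
	return err
}
```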
Force-pushed from 6153767 to 127d625
> `destination` option in the provisioner.
>
> Currently, we support `CycloneDX` and `SPDX` SBOM formats in `JSON`.
TBA: add more details about the maximum number of files allowed for download, and about whether we are going to add the file name field!
Force-pushed from 1e5fc02 to 4d00d8f
Some tests create files and directories as part of Packer's execution path, and we need a way to check this, so this commit adds a new file gadget to perform those checks after a command executes.
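As a rough illustration of what such a gadget might check (the package name, function name, and signature here are assumptions, not the committed API):

```go
package gadgets

import (
	"os"
	"testing"
)

// checkFile sketches the kind of post-run check the gadget performs:
// it fails the test if the expected file or directory was not created
// by the command under test.
func checkFile(t *testing.T, path string, wantDir bool) {
	t.Helper()

	info, err := os.Stat(path)
	if err != nil {
		t.Errorf("expected %q to exist: %s", path, err)
		return
	}
	if info.IsDir() != wantDir {
		t.Errorf("%q: is directory = %t, want %t", path, info.IsDir(), wantDir)
	}
}
```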
Since the function did very little and was only called once in the provisioner, we remove the function itself and move its code into the provisioner's Provision function. Also, since the output directory is prepared and its lifecycle is managed by Packer core, we should not try to create the missing directories here; instead we rely on Packer core's code doing so before the provisioner is called.
Using io.Readers over a file's contents is not inherently a bad idea, but since we are forced to reset the reader periodically, manipulation becomes more complex: extra function calls, extra error checking, and more usage errors. To simplify how we work with that data, we now pass around the raw file contents and encapsulate them in readers on demand, only when needed. This is an essentially costless operation, as a bytes.Reader reads directly from the slice and maintains its own offset, so resetting it is unnecessary: we can just as easily discard it without risking failure.
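To illustrate the pattern (not the provisioner's exact code): the raw bytes stay around, and a fresh `bytes.Reader` is built on demand whenever an `io.Reader` is required, so nothing ever needs resetting:

```go
package sbom

import (
	"bytes"
	"encoding/json"
)

// validateJSON is an illustrative consumer that needs an io.Reader.
// Because bytes.Reader keeps its own offset over the shared backing
// slice, each call simply wraps the raw contents in a fresh reader;
// no Seek back to the start is ever required.
func validateJSON(content []byte) error {
	var doc map[string]interface{}
	return json.NewDecoder(bytes.NewReader(content)).Decode(&doc)
}
```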
When outputting data from the provisioner so Packer can consume it, we write to a file. The only constraints on this file are that Packer manages its lifecycle, and that Packer core and the provisioner agree on the contents and structure of the file. Since we control both the file and its format, we can bundle the two together and export them under a structure that both the provisioner and Packer core have access to; we can then fill in the blanks and write a serialised version of this structure to disk.

This is the approach taken in this commit. First we clean up the existing code a bit, since some abstractions were hasty and did not necessarily make sense in all cases, so the code is consolidated under `Provision`; we then use that structure to fill in the relevant information before serialising it to JSON and writing it to the output file provided by Packer core.
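A sketch of the shared-structure idea, with assumed field names; the committed structure may differ:

```go
package sbom

import (
	"encoding/json"
	"os"
)

// output sketches the shared structure: the field names here are
// assumptions, but the point is that the provisioner and Packer core
// both use the same type, so the file's format is agreed upon by
// construction rather than by convention.
type output struct {
	Name   string `json:"name"`
	Format string `json:"format"` // e.g. "cyclonedx" or "spdx"
	SBOM   []byte `json:"sbom"`   // raw document; base64-encoded by encoding/json
}

// writeOutput serialises the filled-in structure to the file whose
// lifecycle Packer core manages.
func writeOutput(path string, out output) error {
	data, err := json.Marshal(out)
	if err != nil {
		return err
	}
	return os.WriteFile(path, data, 0600)
}
```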
Force-pushed from 4d00d8f to c3e67ec
Since we may upload multiple SBOMs for a single build, we need to generate names for them so that users are able to differentiate between those artifacts. This commit adds an optional `sbom_name` property that users can provide in the configuration for this provisioner; it is then used by Packer core when uploading the data to HCP Packer later on.
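In HCL2-enabled Packer plugins, optional settings are typically declared as mapstructure-tagged fields on the provisioner's `Config` struct; a sketch under that assumption (trimmed and illustrative, not the committed code):

```go
package sbom

// Config is a trimmed, illustrative view of the provisioner's
// configuration; the mapstructure tags follow Packer's usual plugin
// conventions but are assumptions here.
type Config struct {
	// Destination on the Packer host to copy the SBOM to (optional).
	Destination string `mapstructure:"destination"`
	// SbomName names the SBOM so that multiple SBOMs uploaded for a
	// single build can be told apart in HCP Packer (optional).
	SbomName string `mapstructure:"sbom_name"`
}
```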
Since the SBOM validation functions are now called only from the provisioner itself, they no longer need to be public, so we make them private to the package.
Since Packer now supports keeping track of SBOMs produced during a build, we add the code to integrate those changes into the internal/hcp package, so that they are uploaded on build completion.
Previously, when a build could not be completed without errors, the build state was left as running unless the build explicitly failed, which meant HCP Packer was responsible for changing the status after heartbeats for the build stopped being sent for two 5-minute periods. This commit changes this behaviour by explicitly marking the build as failed if something goes wrong while trying to complete a build on HCP Packer, even if the local Packer core build succeeded before that.
In the current state, a Packer build that succeeds but fails to push its metadata to HCP for reasons other than a missing artifact will always succeed from the perspective of a user invoking `packer build`. This can be misleading, as users may expect their artifacts to appear on HCP Packer if their build succeeded in Packer core. This commit changes this behaviour by reporting HCP errors as real errors when the build fails on the HCP side, so Packer returns a non-zero exit code if this happens.
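The described behaviour could look roughly like the following; the status values and function shape are a hedged sketch, not the actual diff:

```go
package hcp

import (
	"context"
	"fmt"
)

// buildStatus values are illustrative, not the HCP SDK's actual constants.
type buildStatus string

const (
	buildDone   buildStatus = "DONE"
	buildFailed buildStatus = "FAILED"
)

// completeBuild sketches the new behaviour: any error while completing
// the build on HCP Packer explicitly marks the build as failed (rather
// than leaving it running until heartbeats time out) and is propagated
// so that `packer build` exits with a non-zero code.
func completeBuild(
	ctx context.Context,
	setStatus func(context.Context, buildStatus) error,
	complete func(context.Context) error,
) error {
	if err := complete(ctx); err != nil {
		_ = setStatus(ctx, buildFailed)
		return fmt.Errorf("failed to complete build on HCP Packer: %w", err)
	}
	return setStatus(ctx, buildDone)
}
```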
Force-pushed from 181134a to 3f5f177
```diff
@@ -662,6 +661,23 @@ func (bucket *Bucket) completeBuild(
 		return packerSDKArtifacts, fmt.Errorf("build failed, not uploading artifacts")
 	}

+	artifacts, err := bucket.doCompleteBuild(ctx, buildName, packerSDKArtifacts, buildErr)
```
Nice!
You can consider this verbally approved; I will mark it as approved once we have tested it using the internal SDK instead of the locally generated one. Good work on this, Lucas and Dev!
Since this will currently fail locally without the generated SDK, I checked this branch out and ran `make test` and `make ci-lint`; both passed.
Example templates:
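Both templates below are illustrative sketches rather than the exact templates tested here: the provisioner type (`sbom`) and the `source` option are assumptions, while `destination` and `sbom_name` are the options discussed in this PR.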
JSON:
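```json
{
  "provisioners": [
    {
      "type": "sbom",
      "source": "/tmp/sbom.cdx.json",
      "destination": "./sbom.cdx.json",
      "sbom_name": "example-sbom"
    }
  ]
}
```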
HCL:
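```hcl
# Illustrative only: the builder source and file paths are placeholders.
build {
  sources = ["source.null.example"]

  provisioner "sbom" {
    source      = "/tmp/sbom.cdx.json"
    destination = "./sbom.cdx.json"
    sbom_name   = "example-sbom"
  }
}
```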