kots CLI: "kubectl kots pull" creates invalid k8s objects #856
@marccampbell @markpundsack getting some more questions on this one -- can I get an ack that y'all are aware of it?
Yeah, the admin console not respecting the namespace flag is a real pain and super confusing. Per Slack, it also seems that an environment variable of …
Yeah, unfortunately I think it was originally written to support a weird workflow which we no longer actually need. It attempts to "sideload" the application so that the admin console can install it. Since this was written, we've introduced automated installs and airgap installs to existing clusters. IMO, kots pull should be a way to get the Admin Console manifests (and application metadata, i.e. branding and the kots.io/v1beta1 Application needed for RBAC permissions) and generate the manifests for it. KOTS supports airgap installs and I think this will turn …
@marccampbell fwiw, I was directed to …
@genebean understood. There are definitely some bugs in this workflow we need to address before this is viable.
I think this is likely the same issue @dexhorthy mentioned, but when I try to apply the manifests generated by this I get the following error:
I've worked around this limitation by …
The large ConfigMap appears to be fixed.
## tl;dr

`kubectl kots pull` seems pretty broken right now. This issue documents the problems and some manual workarounds to get around them.

I tried a `kubectl kots pull` using the published sentry example license and was unable to apply the resulting yaml. I've tried this with a few apps and it should be pretty easy to reproduce. There are a few issues here:

- manifests in `upstream/admin-console` have a hardcoded `namespace: default`, regardless of what namespace is passed to `kubectl kots pull`
- `kots pull` generates a `kotsadm-bundle-0` ConfigMap that is too large to apply
- There's one thing that I think is maybe an enhancement opportunity rather than a bug: at the end of the deploy, kotsadm still wants you to upload a license, config, preflight checks, etc.
## Repro steps

But running that gives an error of:
The ConfigMap "kotsadm-bundle-0" is invalid: metadata.annotations: Too long: must have at most 262144 characters
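The original repro commands didn't survive in this copy of the issue; a minimal sketch of the flow, assuming the Sentry example app slug, a saved license file, and a target namespace (all three names are assumptions here, not from the issue), would be:

```shell
# Pull the app and admin-console manifests using the example license.
# App slug, license filename, and namespace are illustrative.
kubectl kots pull sentry-enterprise \
  --license-file license.yaml \
  --namespace sentry

# Apply the generated base with kustomize; this is where the
# oversized kotsadm-bundle-0 ConfigMap error above shows up.
kubectl apply -k ./sentry-enterprise/base
```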
## Workaround step 1: removing the config map

It seems this can be worked around by commenting the config map out of `base/kustomization.yaml`, but I am unclear as to whether this will break anything.

## Workaround step 2: overriding the default namespace in kustomize
After removing the config map and doing another apply, we get a whole bunch of issues with hardcoded namespaces:
So let's update `base/kustomization.yaml` with our namespace to see if that fixes it:
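Assuming a layout like the one `kots pull` generates, the edited `base/kustomization.yaml` might look roughly like this (the resource file names and `sentry` namespace are illustrative, not taken from the issue):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Override the hardcoded `namespace: default` in the generated manifests.
namespace: sentry
resources:
  - kotsadm-deployment.yaml
  - kotsadm-api-deployment.yaml
  # ...other admin-console resources...
  # Workaround step 1: the oversized bundle ConfigMap stays commented out.
  # - kotsadm-bundle-0.yaml
```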
At the end of this, the apply works. I could probably have also done this namespace tweak in a downstream, so you could argue this falls on the end user, but I'd say it's better for things to work out of the box, which I think we could do by following our own advice and omitting `namespace` on all the `admin-console` resources.

Unfortunately this still leaves kotsadm-api in a crash loop:
## Workaround step 3: adding a cluster token via downstream
Let's make a downstream that patches in a cluster token:
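One way to sketch that downstream is with a strategic-merge patch; the directory layout, secret name, and token value below are assumptions for illustration, not taken from the issue:

```yaml
# overlays/local/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - cluster-token-secret.yaml

# overlays/local/cluster-token-secret.yaml
# Patches a token value into the (assumed) kotsadm-cluster-token Secret.
apiVersion: v1
kind: Secret
metadata:
  name: kotsadm-cluster-token
stringData:
  kotsadm-cluster-token: "<generated-token>"
```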
Now we should have something like this to apply:
We can verify this works with a `kustomize build`:
Let's delete the previous secret so we can overwrite the value (no `replace -k` yet):
Let's verify really quick that we have some data in there now:
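The delete-and-verify dance above, sketched as kubectl commands (the namespace, overlay path, and secret name are hypothetical):

```shell
# Delete the previous secret so apply can recreate it with the new
# value (there's no `kubectl replace -k` yet).
kubectl -n sentry delete secret kotsadm-cluster-token

# Re-apply the downstream overlay containing the patched token.
kubectl apply -k ./overlays/local

# Verify the secret has data in it now.
kubectl -n sentry get secret kotsadm-cluster-token -o jsonpath='{.data}'
```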
And it looks like now our kotsadm-api pod is running okay. Hopefully this will also fix the crash loop in `kotsadm` as it waits for the bucket to be created in minio.

Success!
It looks like now kotsadm is up and running, as well as our Sentry app pods:
From here we can launch the admin console. We still have to go through and upload the license etc, but once we've gone through the UI setup, things seem to be humming along nicely and we can launch the Sentry app on `localhost:9000`.