
Question: Is it possible to create a cluster-api cluster in a different namespace than the tinkerbell stack #385

Open
willemm opened this issue Jul 18, 2024 · 0 comments

willemm commented Jul 18, 2024

I've been trying to get capt to work without much success, until I came across the following line in the documentation:

Generating the cluster configuration
For the purpose of this tutorial, we'll name our cluster capi-quickstart. The --target-namespace needs to be the namespace where the Tink stack is deployed. Otherwise you will see an error.

After this, I moved the tinkerbell stack to the namespace of the capi-cluster, and it suddenly started working.

Can I conclude from this that it is impossible to run the Tinkerbell stack in its own namespace?
Or is it somehow possible to configure it to watch multiple namespaces?

In the first case, it would be nice if this were spelled out much more clearly in the documentation.
In the second case, how do I configure it to work across namespaces?
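For reference, the generation step from the tutorial looks roughly like this (cluster name, version, and namespace below are placeholders, not values from the tutorial):

```sh
# Hypothetical invocation of clusterctl's "generate cluster" command.
# With the behavior described in this issue, --target-namespace had
# to match the namespace of the Tink stack (e.g. tink-system).
clusterctl generate cluster capi-quickstart \
  --infrastructure tinkerbell \
  --kubernetes-version v1.29.0 \
  --control-plane-machine-count 1 \
  --worker-machine-count 1 \
  --target-namespace tink-system > capi-quickstart.yaml
```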

mergify bot added a commit to tinkerbell/tink that referenced this issue Aug 13, 2024
## Description

Removes the default behavior where, if kubernetes-namespace is not specified, the controller falls back to whatever namespace it is running in, and changes the workflow ID to namespace/name so workflows can be located across namespaces.

## Why is this needed

tinkerbell/cluster-api-provider-tinkerbell#385

With this change, you can create hardware and workflow resources in different namespaces.

Fixes: #

## How Has This Been Tested?
We have a cluster-api setup in which we're adding some bare metal nodes to a cluster.
With this change, workflows that previously only worked from the tink-system namespace now also work from a different namespace. I also retested the old setup, and it still works.
The change is minimal, so it shouldn't have much impact.
I haven't tested whether the --kube-namespace setting would restrict it to a single namespace again.


## How are existing users impacted? What migration steps/scripts do we need?

No migration steps are needed, unless users run multiple instances of tink-server in different namespaces, or have some other reason to specifically not want resources in a different namespace to be picked up.

This could probably be avoided by having the Helm chart add the kube-namespace argument to the deployment and pull its value from the Downward API, but it seems to me that defaulting to watching all namespaces would be preferable for most users.
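The Downward API approach mentioned above could look roughly like this in the deployment template (a sketch, not the actual chart contents; the env var name is illustrative):

```yaml
# Hypothetical deployment fragment: pin tink-server back to a single
# namespace by injecting the pod's own namespace via the Downward API
# and expanding it into the flag value.
containers:
  - name: tink-server
    image: quay.io/tinkerbell/tink-server
    env:
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
    args:
      - --kube-namespace=$(POD_NAMESPACE)
```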

## Checklist:

I have:

- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade
mergify bot added a commit to tinkerbell/hegel that referenced this issue Aug 13, 2024
## Description

Removes the default behavior where, if kubernetes-namespace is not specified, the controller falls back to whatever namespace it is running in.

## Why is this needed

tinkerbell/cluster-api-provider-tinkerbell#385

With this change, you can create hardware resources in different namespaces.

Fixes: #

## How Has This Been Tested?
We have a cluster-api setup in which we're adding some bare metal nodes to a cluster.
With this change, hardware resources that previously only worked from the tink-system namespace now also work from a different namespace. I also retested the old setup, and it still works.
The change is minimal, so it shouldn't have much impact.
I haven't tested whether the --kubernetes-namespace setting would restrict it to a single namespace again.


## How are existing users impacted? What migration steps/scripts do we need?

The Role and RoleBinding resources in the Helm chart (or whichever other deployment method is used) need to be changed to ClusterRole and ClusterRoleBinding; otherwise hegel will not be able to read from the other namespaces.
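A sketch of what that RBAC change might look like (resource names, verbs, and the tink-system namespace are illustrative, not taken from the chart):

```yaml
# Hypothetical ClusterRole/ClusterRoleBinding replacing the
# namespace-scoped Role/RoleBinding, so hegel can read Hardware
# resources in every namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hegel
rules:
  - apiGroups: ["tinkerbell.org"]
    resources: ["hardware"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hegel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hegel
subjects:
  - kind: ServiceAccount
    name: hegel
    namespace: tink-system
```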

Otherwise no migration steps are needed, unless users run multiple instances of hegel in different namespaces, or have some other reason to specifically not want resources in a different namespace to be picked up.

This could probably be avoided by having the Helm chart add the kubernetes-namespace argument to the deployment and pull its value from the Downward API, but it seems to me that defaulting to watching all namespaces would be preferable for most users.

## Checklist:

I have:

- [ ] updated the documentation and/or roadmap (if required)
- [ ] added unit or e2e tests
- [ ] provided instructions on how to upgrade