CSI Driver fails to mount in pods #377
The error you're seeing is because the worker node is unable to find the block device on the host. Before anything else, is this iSCSI or FC? If iSCSI:
If FC:
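Since the checklists above did not survive the paste, here is a hedged set of in-guest checks for the iSCSI case that I would run on a worker node (this assumes open-iscsi and multipath-tools are installed; the portal address is illustrative, not from this thread):

```shell
# Is there an active iSCSI session from this node to the array?
sudo iscsiadm -m session

# Can the node reach the iSCSI data network at all?
# 192.168.221.x is the subnet mentioned in this thread; the exact
# portal address below is a hypothetical placeholder.
ping -c 3 192.168.221.10

# After a successful login, did the block device appear on the host?
lsblk
sudo multipath -ll
```

If `iscsiadm -m session` returns nothing, the node never logged in to the target, which matches the "unable to find the block device" symptom.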
We are using iSCSI. The VMs for my cluster are on subnet 192.168.70.x (VLAN 70); our iSCSI connections are on 192.168.221.x and 192.168.222.x (VLANs 221/222). The ESXi hosts have a direct connection and everything works fine there. The virtual machines that run our Kubernetes cluster do seem to be creating new volumes on the Nimbles, and even attaching to them (I think), as the volumes show as online on the Nimbles.
The VMs need in-guest network interfaces on those iSCSI VLANs. The initial creation of the volume is a control-plane-only operation, which completes successfully over your VLAN 70.
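Once the secondary NICs exist on the VMs, a netplan fragment along these lines would place them on the iSCSI VLANs. This is a sketch only: the interface names (`ens224`, `ens256`) and addresses are assumptions, not taken from this thread.

```yaml
# /etc/netplan/60-iscsi.yaml -- hypothetical example for Ubuntu 22.04
network:
  version: 2
  ethernets:
    ens224:                        # NIC attached to the VLAN 221 port group
      addresses: [192.168.221.50/24]
    ens256:                        # NIC attached to the VLAN 222 port group
      addresses: [192.168.222.50/24]
```

Apply with `sudo netplan apply`. Note there is deliberately no gateway on the storage interfaces, which is common practice for dedicated iSCSI networks.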
OOOOOOOOOOOO, so I need to add secondary NICs to our VMs that place them on the iSCSI VLANs then, eh? Now I just feel dumb :)
So I am playing with the HPE CSI Driver on our Nimbles. I thought I had everything configured right: it creates the volumes on the Nimbles, and the PVCs bind in Kubernetes, but the volumes fail to mount into pods.
We are already using the vSphere CSI driver and an SMB CSI driver without issue, so I'm not entirely sure what I'm doing wrong here:
Storage Class YAML:
Any ideas?
The end goal is to build a storage class for both of our Nimbles, and once things are working to play with the NFS Provisioner so we can make use of some RWX Volumes.
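The original StorageClass YAML did not survive the paste, so for reference, here is a minimal HPE CSI StorageClass sketch based on the driver's documented layout. The secret name/namespace and the fstype are assumptions; check them against your actual deployment.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nimble-iscsi
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  # "hpe-backend" / "hpe-storage" are assumed names; use whatever
  # secret your HPE CSI deployment actually created.
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  accessProtocol: iscsi
reclaimPolicy: Delete
allowVolumeExpansion: true
```

For the RWX goal, my reading of HPE's docs is that the NFS server provisioner is toggled per StorageClass with an `nfsResources: "true"` parameter, but verify that against the documentation for your driver version.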
DETAILS:
Nodes: Ubuntu Server 22.04.3 LTS
RKE2 1.27
Provisioned by Rancher
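One thing worth double-checking on Ubuntu 22.04 RKE2 nodes: the HPE CSI driver expects the in-guest iSCSI and multipath tooling to be present and running on every worker. A provisioning sketch (Ubuntu package names):

```shell
# Install the iSCSI initiator and multipath tooling
sudo apt-get update
sudo apt-get install -y open-iscsi multipath-tools

# Make sure the daemons are enabled and running
sudo systemctl enable --now iscsid multipathd

# Each node needs its own initiator name; confirm one exists
cat /etc/iscsi/initiatorname.iscsi
```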