What happened:
I am trying to deploy a Portworx cluster with vSphere cloud storage and the setting maxStorageNodesPerZone: 2 for some tests. I have an RKE2 Kubernetes cluster with 9 workers, split across 3 zones and 2 regions:
region-a
  zone-1
  zone-2
region-b
  zone-3
Instead of a Portworx cluster with 9 nodes (3 compute-only nodes and 6 storage nodes, with 2 storage nodes in each zone), I get a storage cluster with 7 compute-only nodes and only 2 storage nodes.
pxctl status on one of the storage nodes shows the storage pool on that node, and the pool has the correct zone and region. The nodes also get the correct topology.portworx.io labels (see the check commands after the status output below).
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: e9fa0d76-63dd-46c0-8f40-917fd183515f
IP: 10.10.220.198
Local Storage Pool: 1 pool
POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
0 HIGH raid0 150 GiB 9.5 GiB Online dc3 duesseldorf
Local Storage Devices: 1 device
Device Path Media Type Size Last-Scan
0:1 /dev/sdb STORAGE_MEDIUM_MAGNETIC 150 GiB 04 Nov 22 14:14 UTC
total - 150 GiB
Cache Devices:
* No cache devices
Kvdb Device:
Device Path Size
/dev/sdc 32 GiB
* Internal kvdb on this node is using this dedicated kvdb device to store its data.
Cluster Summary
Cluster ID: px-test
Cluster UUID: 40db7504-8e09-4d42-9ba0-f265d942d362
Scheduler: kubernetes
Nodes: 2 node(s) with storage (2 online), 7 node(s) without storage (7 online)
IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS
10.10.220.198 e9fa0d76-63dd-46c0-8f40-917fd183515f tm-rke2-pool4-9f251d72-8f6s5 Disabled Yes 9.5 GiB 150 GiB Online Up (This node) 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
10.10.220.197 d052a82a-2be4-47c2-889a-aa1a986c1551 tm-rke2-pool3-81fad623-bxghr Disabled Yes 9.5 GiB 150 GiB Online Up 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
10.10.220.193 f510b9a2-62bd-4c23-a97a-cc557780d6b6 tm-rke2-pool4-9f251d72-6fn96 Disabled No 0 B 0 B Online No Storage 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
10.10.220.143 c8c5cd9a-0224-4c84-9c0f-dd1ad46d03ff tm-rke2-pool2-c2693cf3-549sk Disabled No 0 B 0 B Online No Storage 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
10.10.220.196 c563d40b-9fd8-48b2-874a-44762369b9e4 tm-rke2-pool3-81fad623-prz8b Disabled No 0 B 0 B Online No Storage 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
10.10.220.195 a787fa15-c59f-4cf6-9758-a7c4a9365c9e tm-rke2-pool2-c2693cf3-297ct Disabled No 0 B 0 B Online No Storage 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
10.10.220.200 6be720d4-ad73-4b25-a97b-a81a5c8b5a5e tm-rke2-pool4-9f251d72-8pc95 Disabled No 0 B 0 B Online No Storage 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
10.10.220.134 17961994-c523-4e7e-82b2-67fde1cf08ad tm-rke2-pool2-c2693cf3-cjrp7 Disabled No 0 B 0 B Online No Storage 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
10.10.220.194 15848061-71e7-47ef-82fc-645b2a005e36 tm-rke2-pool3-81fad623-xsbg9 Disabled No 0 B 0 B Online No Storage 2.12.0-02bd5b0 5.4.0-131-generic Ubuntu 20.04.5 LTS
Global Storage Pool
Total Used : 19 GiB
Total Capacity : 300 GiB
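To cross-check the distribution, the topology labels and the pool placement per zone can be listed directly; a minimal sketch, assuming the default topology.portworx.io label keys and that pxctl is run from inside one of the Portworx pods:

# show the region/zone labels on each node
kubectl get nodes -L topology.portworx.io/region,topology.portworx.io/zone

# show storage pool placement per node, including zone and region
pxctl cluster provision-status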
What you expected to happen:
A Portworx cluster with 9 nodes, 6 of them storage nodes, with 2 storage nodes in each of the 3 zones.
How to reproduce it (as minimally and precisely as possible):
I use the Portworx Operator 1.10.0 with the spec below:
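For reference only, a minimal sketch of where maxStorageNodesPerZone sits relative to the vSphere cloud storage settings in a StorageCluster spec; the namespace, image tag, device specs, and vCenter values are placeholders (the sizes are taken from the pxctl output above), not the actual spec that was used:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-test                       # matches the Cluster ID in the output above
  namespace: kube-system              # placeholder namespace
spec:
  image: portworx/oci-monitor:2.12.0
  cloudStorage:
    maxStorageNodesPerZone: 2         # expect at most 2 storage nodes per zone
    deviceSpecs:
    - type=thin,size=150              # 150 GiB data drive, as in the pool output
    kvdbDeviceSpec: type=thin,size=32 # 32 GiB dedicated kvdb drive
  env:
  - name: VSPHERE_VCENTER
    value: vcenter.example.local      # placeholder; vCenter credentials (secret) omitted
  - name: VSPHERE_DATASTORE_PREFIX
    value: datastore                  # placeholder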
Anything else we need to know?:
Environment:
Kernel (e.g. uname -a): 5.4.0-131-generic