
Add scheduler predictor #64

Merged: 4 commits into k8ssandra:main on Dec 10, 2024

Conversation

@burmanm (Collaborator) commented on Nov 10, 2024

Fixes #62

…ld be schedulable in the cluster.

Limit schedulability to one pod per node. This does not accurately take inter-pod anti-affinities into account, but it also does not leak information from other namespaces.

Add some smoke tests to verify that we actually cover our basic assumptions, but do not replicate the actual kube-scheduler tests for every verification.
… pods and how much memory and cpu each pod needs
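For illustration only, here is a minimal sketch of the one-pod-per-node check described above, written against client-go. It is not the code in this PR: it compares the request only against each node's allocatable capacity, and the function and variable names are invented.

package estimate

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// countSchedulableNodes counts how many pods of the given size could be placed
// when at most one pod may land on each node and only the node's allocatable
// CPU and memory are considered (no anti-affinity, no existing pod usage).
func countSchedulableNodes(ctx context.Context, client kubernetes.Interface, cpu, mem resource.Quantity) (int, error) {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return 0, err
	}

	schedulable := 0
	for _, node := range nodes.Items {
		if node.Spec.Unschedulable {
			continue // skip cordoned nodes
		}
		allocCPU := node.Status.Allocatable[corev1.ResourceCPU]
		allocMem := node.Status.Allocatable[corev1.ResourceMemory]
		if allocCPU.Cmp(cpu) >= 0 && allocMem.Cmp(mem) >= 0 {
			schedulable++ // one candidate pod per node
		}
	}
	return schedulable, nil
}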
@adejanovski (Contributor) left a comment

Looks like something that will come in handy!
I have a few comments around the CLI experience, which could benefit from some refinement.

cmd/kubectl-k8ssandra/tools/estimate.go (outdated)
	return err
}
if err := o.Run(); err != nil {
	return err
@adejanovski (Contributor) commented:

Issue: doing things this way always spits out the help text for the command in the console, along with the actual error:

k8ssandra-client % ./kubectl-k8ssandra tools estimate --count 25 --memory 2Gi --cpu 2000m 
Error: Unable to schedule the pods: unable to schedule all the pods, requested: 25, schedulable: 5
Usage:
  k8ssandra tools estimate [flags]

Examples:

        # Estimate if pods will fit the cluster
        kubectl k8ssandra tools estimate estimate [<args>]

        # Estimate if 4 pods each with 2Gi of memory and 2 vCPUs will be able to run on the cluster.
        # All CPU values and memory use Kubernetes notation
        kubectl k8ssandra tools estimate estimate --count 4 --memory 2Gi --cpu 2000m
        

Flags:
      --as string                      Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --as-uid string                  UID to impersonate for the operation.
      --cache-dir string               Default cache directory (default "/Users/adejanovski/.kube/cache")
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
      --count int                      new nodes to create
...

It makes it a little challenging to figure out what's going on.
Also, we don't get a message when the pods can be scheduled successfully.
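
For reference, the usual way to keep cobra from printing the full usage text on runtime errors is the command's SilenceUsage field. The sketch below is hypothetical (the option struct and flag wiring are stand-ins), not necessarily how this PR resolved it.

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// estimateOptions stands in for the command's real option struct.
type estimateOptions struct {
	count int
}

func (o *estimateOptions) Run() error {
	// Placeholder for the real schedulability check.
	return fmt.Errorf("unable to schedule all the pods, requested: %d, schedulable: %d", o.count, 5)
}

func main() {
	o := &estimateOptions{}
	cmd := &cobra.Command{
		Use:          "estimate [flags]",
		SilenceUsage: true, // a runtime error no longer triggers the full help text
		RunE: func(cmd *cobra.Command, args []string) error {
			return o.Run()
		},
	}
	cmd.Flags().IntVar(&o.count, "count", 1, "number of new pods to check")
	if err := cmd.Execute(); err != nil {
		// cobra has already printed "Error: ...", so just set the exit code.
		os.Exit(1)
	}
}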

@burmanm (Collaborator, Author) replied:

Changed to this:

k8ssandra-client git:(add_scheduler_predictor) ✗ ./kubectl-k8ssandra tools estimate --count 6 --memory 2Gi --cpu 2000m
2024/12/03 13:24:32 Pods can be scheduled to current cluster kind-kind
k8ssandra-client git:(add_scheduler_predictor) ✗ ./kubectl-k8ssandra tools estimate --count 16 --memory 2Gi --cpu 2000m
2024/12/03 13:24:36 Error: Unable to schedule the pods to current cluster kind-kind: unable to schedule all the pods, requested: 16, schedulable: 6
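
A rough sketch of what could produce those two messages is shown below. The helper and variable names (reportEstimate, contextName) are invented; only the message wording mirrors the quoted log lines.

package main

import (
	"fmt"
	"log"
)

// reportEstimate logs a success line when all requested pods fit, and returns
// an error carrying the requested/schedulable counts otherwise.
func reportEstimate(contextName string, requested, schedulable int) error {
	if schedulable < requested {
		return fmt.Errorf("unable to schedule all the pods, requested: %d, schedulable: %d", requested, schedulable)
	}
	log.Printf("Pods can be scheduled to current cluster %s", contextName)
	return nil
}

func main() {
	if err := reportEstimate("kind-kind", 16, 6); err != nil {
		log.Printf("Error: Unable to schedule the pods to current cluster %s: %v", "kind-kind", err)
	}
}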

@adejanovski merged commit 82ff69d into k8ssandra:main on Dec 10, 2024
2 checks passed
Successfully merging this pull request may close these issues.

Ability to predict if more pods can be scheduled to the cluster (#62)