[ws-scheduler] Take pod resource into account #4700
In core-dev we often see:

```
OutOfpods: Pod Node didn't have enough resource: pods, requested: 1, used: 110, capacity: 110
```

ws-scheduler should take the `pods` resource into account.
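For illustration, here is a minimal sketch of what such a fit check could look like, using the upstream Kubernetes API types; the helper and its wiring are hypothetical, not ws-scheduler's actual code:

```go
package scheduler

import (
	corev1 "k8s.io/api/core/v1"
)

// nodeHasPodSlot reports whether the node can accept one more pod.
// It compares the node's allocatable "pods" resource (e.g. 110 on
// core-dev) against the pods already bound to the node.
// Hypothetical helper, not ws-scheduler's actual implementation.
func nodeHasPodSlot(node *corev1.Node, podsOnNode []*corev1.Pod) bool {
	allocatable, ok := node.Status.Allocatable[corev1.ResourcePods]
	if !ok {
		// Node reports no "pods" resource; don't block scheduling.
		return true
	}
	return int64(len(podsOnNode)) < allocatable.Value()
}
```

Upstream kube-scheduler already counts pods this way in its resource-fit check; the ask here is for ws-scheduler to do the same alongside CPU and memory.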
Note, however, the issue with
Good point. For core-dev we could change the CIDR range as you suggested.
I looked into increasing the max pod number by using larger CIDR ranges, but no luck: the cluster still reports the 110-pod cap per node, and the docs confirm that limit.
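For context, a quick sketch of why a bigger range alone doesn't help. The doubling rule (per-node pod range should hold at least twice as many addresses as pods) and the 110 default are general Kubernetes sizing guidance, assumed here rather than quoted from the docs above:

```go
package main

import "fmt"

// addressesInRange returns the number of IPv4 addresses in a CIDR
// block with the given prefix length, e.g. /24 -> 256.
func addressesInRange(prefixLen int) int {
	return 1 << uint(32-prefixLen)
}

func main() {
	const kubeletDefaultMaxPods = 110
	for _, prefix := range []int{25, 24, 23} {
		addrs := addressesInRange(prefix)
		// The range bounds pods at addrs/2, but the kubelet's
		// default cap of 110 applies independently -- so a larger
		// per-node range alone doesn't raise the effective limit.
		bound := addrs / 2
		if bound > kubeletDefaultMaxPods {
			bound = kubeletDefaultMaxPods
		}
		fmt.Printf("/%d: %3d addresses -> effective pod cap %d\n",
			prefix, addrs, bound)
	}
}
```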
Let's do the math on how many preview envs we can have once #4744 has been merged: on the core-dev clusters there are 110 pods max per node; minus 10 system pods (kube-system, docker, monitoring), that leaves 100 for preview envs. While 100 slots should in theory be enough room for 33 preview envs (roughly 3 pods per env), in practice there will probably be trouble earlier, because already-running pods won't relocate to other nodes to make room for DaemonSets.
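Spelling that arithmetic out as a sketch (the 3-pods-per-env figure is inferred from the 33 above, and the DaemonSet term is my reading of the later comments):

```go
package main

import "fmt"

func main() {
	const (
		maxPodsPerNode = 110 // node pod capacity on core-dev
		systemPods     = 10  // kube-system, docker, monitoring
		podsPerEnv     = 3   // inferred from "100 slots / 33 envs"
	)
	slots := maxPodsPerNode - systemPods // 100 free slots per node
	fmt.Printf("%d free slots -> at most %d preview envs\n",
		slots, slots/podsPerEnv)
	// If every preview env also ships a DaemonSet, each env adds a
	// pod on every node, so DaemonSet pods alone grow as envs*nodes
	// -- the quadratic trajectory mentioned further down.
}
```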
That is a pity - thank you for checking this one though :) Looks like sooner or later we'll have to move to more isolated preview environments.
Maybe pod (anti-)affinities can be used to ensure every preview env (kinda) gets its own node:
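A sketch of what that could look like with the upstream API types; the `preview-env` label key and its value are hypothetical. The idea is that a pod refuses to share a node with pods from a *different* preview env:

```go
package affinity

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// previewEnvAntiAffinity keeps a preview env's pods off nodes that
// already host pods belonging to another preview env. The label key
// "preview-env" is made up for this example.
func previewEnvAntiAffinity(env string) *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{
						// Pods that carry the preview-env label ...
						{Key: "preview-env", Operator: metav1.LabelSelectorOpExists},
						// ... with any value other than our own env.
						{Key: "preview-env", Operator: metav1.LabelSelectorOpNotIn, Values: []string{env}},
					},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}
```

The required rule is a hard constraint; the `Preferred...` variant would soften this to a scheduling bias if hard per-env exclusivity wastes too much capacity.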
True. We can minimize the chance for this by:
Side note: DaemonSets can have node selectors, which could be used to restrict them to a subset of nodes (for example, only 50% of the nodes, or one node per deployment) and thus avoid the quadratic growth trajectory.
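As a sketch of that restriction (the `daemon-pool` label, names, and image are hypothetical), it is just a node selector on the DaemonSet's pod template:

```go
package daemonset

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restrictedDaemonSet returns a DaemonSet that only schedules onto
// nodes labeled daemon-pool=<pool>, instead of onto every node.
func restrictedDaemonSet(name, pool string) *appsv1.DaemonSet {
	labels := map[string]string{"app": name}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Only nodes carrying this label run the pod, so
					// the DaemonSet no longer lands on every node.
					NodeSelector: map[string]string{"daemon-pool": pool},
					Containers: []corev1.Container{{
						Name:  name,
						Image: "example/agent:latest", // placeholder
					}},
				},
			},
		},
	}
}
```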