CNTRLPLANE-1: Update to Kubernetes v1.32.0 #2147
base: master
Conversation
…ard_reset client-go port forward reset, error handling and tests
Add statusz endpoint for apiserver
Add warnings for cases where one of the projected volume types gets overwritten by the service account token
…nplace-resize-delay [FG:InPlacePodVerticalScaling] bug(quota): handle resources changed on resource quota filter
[FG:InPlacePodVerticalScaling] kubelet: Propagate error in doPodResizeAction() to the caller
KEP-3926: unsafe deletion of corrupt objects
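For context, a minimal client-go sketch of requesting such an unsafe delete, assuming the v1.32 `DeleteOptions` field from KEP-3926, a server running with the `AllowUnsafeMalformedObjectDeletion` feature gate, and an already-initialized `client` and `ctx`:

```go
// Minimal sketch, assuming the KEP-3926 DeleteOptions field in v1.32.
ignore := true
err := client.CoreV1().Secrets("ns").Delete(ctx, "corrupt-secret",
	metav1.DeleteOptions{
		// Bypasses the normal read/transform path on delete so a corrupt
		// object can still be removed from etcd; last-resort only.
		IgnoreStoreReadErrorWithClusterBreakingPotential: &ignore,
	})
if err != nil {
	log.Fatal(err)
}
```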
…-place-pod-vertical-scaling-version-skew Updated version skew strategy for InPlacePodVerticalScaling
DRA: Implementation of ResourceClaim.Status.Devices (KEP-4817)
kubelet/kuberuntime: switch to runc/libct
Fix error check
Refactor: Move IsRestartableInitContainer to common utility package
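For reference, a minimal sketch of what such a shared helper can look like; the package path is illustrative, not necessarily the one chosen by the refactor:

```go
// Package podutil is an illustrative location for the shared helper.
package podutil

import v1 "k8s.io/api/core/v1"

// IsRestartableInitContainer reports whether an init container is
// "restartable" (a sidecar), i.e. its RestartPolicy is Always.
func IsRestartableInitContainer(c *v1.Container) bool {
	if c == nil || c.RestartPolicy == nil {
		return false
	}
	return *c.RestartPolicy == v1.ContainerRestartPolicyAlways
}
```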
Signed-off-by: Bo Wang <[email protected]>
1. Add Resources struct to PodSpec struct in both external and internal API packages
2. Add feature gate and logic for dropping disabled fields for Pod Level Resources
KEP: enhancements/keps/sig-node/2837-pod-level-resource-spec
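A minimal sketch of the usual "drop disabled fields" pattern for a gated API field; the helper name is illustrative, not the exact upstream function:

```go
package pod

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	api "k8s.io/kubernetes/pkg/apis/core"
	"k8s.io/kubernetes/pkg/features"
)

// dropDisabledPodLevelResources clears the new field when the gate is off.
func dropDisabledPodLevelResources(podSpec, oldPodSpec *api.PodSpec) {
	// Keep the field when the gate is on, or when the old object already
	// used it (updates must not silently wipe existing data).
	if utilfeature.DefaultFeatureGate.Enabled(features.PodLevelResources) ||
		(oldPodSpec != nil && oldPodSpec.Resources != nil) {
		return
	}
	podSpec.Resources = nil
}
```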
1. Add support for pod level resources in kubectl
2. Reuse the existing method to describe container resources and generalize it to describe both pod and container level resources
1. If pod-level limit is set, pod-level request is unset and container-level request is set: derive pod-level request from container-level requests
2. If pod-level limit is set, pod-level request is unset and container-level request is unset: set pod-level request equal to pod-level limit
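A minimal Go sketch of these two defaulting rules, with simplified inputs (the aggregated container requests are precomputed by the caller) rather than the exact upstream defaulting code:

```go
package main

import v1 "k8s.io/api/core/v1"

// defaultPodRequests fills in pod-level requests per the two rules above.
func defaultPodRequests(podLimits, podRequests, containerRequests v1.ResourceList) v1.ResourceList {
	out := podRequests.DeepCopy()
	if out == nil {
		out = v1.ResourceList{}
	}
	for name, limit := range podLimits {
		if _, ok := out[name]; ok {
			continue // an explicit pod-level request wins
		}
		if req, ok := containerRequests[name]; ok {
			out[name] = req // rule 1: derive from container-level requests
		} else {
			out[name] = limit // rule 2: default the request to the pod-level limit
		}
	}
	return out
}
```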
1. The effective container requests cannot be greater than pod-level requests
2. Individual container limits cannot be greater than pod-level limits
3. Only CPU & Memory are supported at pod-level
4. In-place container resources updates are not supported if pod-level resources are set
Note: "effective container requests cannot be greater than pod-level limits" follows by transitivity: effective container requests <= pod-level requests and pod-level requests <= pod-level limits, therefore effective container requests <= pod-level limits.
Signed-off-by: ndixita <[email protected]>
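A sketch of rules 1-3 with simplified inputs; upstream validation operates on field paths and returns aggregated errors, so treat this as illustrative only:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// validatePodLevelResources checks the pod-level resource constraints above.
func validatePodLevelResources(podRequests, podLimits, effectiveContainerRequests v1.ResourceList, containerLimits []v1.ResourceList) error {
	for name := range podLimits {
		if name != v1.ResourceCPU && name != v1.ResourceMemory {
			return fmt.Errorf("%s is not supported at pod level", name) // rule 3
		}
	}
	for name, req := range effectiveContainerRequests {
		if podReq, ok := podRequests[name]; ok && req.Cmp(podReq) > 0 {
			return fmt.Errorf("effective container %s requests exceed pod-level requests", name) // rule 1
		}
	}
	for _, limits := range containerLimits {
		for name, lim := range limits {
			if podLim, ok := podLimits[name]; ok && lim.Cmp(podLim) > 0 {
				return fmt.Errorf("container %s limit exceeds pod-level limit", name) // rule 2
			}
		}
	}
	return nil
}
```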
1. Use pod-level resources when the feature is enabled and resources are set at pod-level
2. Edge case handling: when a pod defines only CPU or memory limits at pod-level (but not both), and container-level requests/limits are unset, the pod-level requests stay empty for the resource without a pod-limit. The container's request for that resource is then set to the default request value from schedutil.
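A sketch of the lookup order this implies, including the edge case; the real logic lives in the scheduler's resource helpers and covers more cases:

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// podRequestFor resolves one resource: pod-level request first, then the
// aggregated container requests, then the caller-supplied default.
func podRequestFor(pod *v1.Pod, name v1.ResourceName, defaultReq resource.Quantity, gateEnabled bool) resource.Quantity {
	if gateEnabled && pod.Spec.Resources != nil {
		if r, ok := pod.Spec.Resources.Requests[name]; ok {
			return r // pod-level request takes precedence when set
		}
	}
	var total resource.Quantity
	found := false
	for _, c := range pod.Spec.Containers {
		if r, ok := c.Resources.Requests[name]; ok {
			total.Add(r)
			found = true
		}
	}
	if !found {
		return defaultReq // edge case: neither pod- nor container-level value set
	}
	return total
}
```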
1. Pod cgroup configured to use resources from pod spec if feature is enabled and resources are set at pod-level
2. Container cgroup limits defaulted to pod-level limits if container limits are not set
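A minimal sketch of the container-limit defaulting, shown for memory only; the kubelet's cgroup manager handles this more generally:

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// containerMemoryLimit returns the container's memory limit, falling back
// to the pod-level limit when the feature is enabled and no container limit
// is set; nil means no limit at either level.
func containerMemoryLimit(pod *v1.Pod, c *v1.Container, gateEnabled bool) *resource.Quantity {
	if lim, ok := c.Resources.Limits[v1.ResourceMemory]; ok {
		return &lim
	}
	if gateEnabled && pod.Spec.Resources != nil {
		if lim, ok := pod.Spec.Resources.Limits[v1.ResourceMemory]; ok {
			return &lim // default the container cgroup limit to the pod-level limit
		}
	}
	return nil
}
```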
…urces into account Signed-off-by: ndixita <[email protected]>
Signed-off-by: ndixita <[email protected]>
…force 2nd labeling to make tests work
Adding a new mutation plugin that handles the following:
1. In case of a `workload.openshift.io/enable-shared-cpus` request, it adds an annotation to hint the runtime about the request. The runtime is not aware of extended resources, hence we need the annotation.
2. It validates the pod's QoS class and returns an error if it's not a guaranteed QoS class.
3. It validates that no more than a single resource is being requested.
4. It validates that the pod is deployed in a namespace that has the mixedcpus-workloads-allowed annotation.
For more information see openshift/enhancements#1396
Signed-off-by: Talor Itzhak <[email protected]>

UPSTREAM: <carry>: Update management webhook pod admission logic
Updating the logic for pod admission to allow pod creation with workload partitioning annotations in a namespace that has no workload allow annotations. The pod will be stripped of its workload annotations and treated as if it were normal; a warning annotation will be placed on the pod to note the behavior.
Signed-off-by: ehila <[email protected]>

UPSTREAM: <carry>: add support for cpu limits into management workloads
Added support to allow workload partitioning to use the CPU limits for a container. To allow the runtime to make better decisions around workload cpu quotas, we pass down the cpu limit as part of the cpulimit value in the annotation. CRI-O will take that information and calculate the quota per node. This should support situations where workloads might have different cpu period overrides assigned. Updated kubelet for static pods and the admission webhook for regular pods to support cpu limits. Updated unit tests to reflect changes.
Signed-off-by: ehila <[email protected]>
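A sketch of the four admission checks described above; the annotation key and helper names here are illustrative, not the exact carry-patch identifiers:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	qos "k8s.io/kubernetes/pkg/apis/core/v1/helper/qos"
)

const sharedCPUsResource = "workload.openshift.io/enable-shared-cpus"

// admitSharedCPUs validates a shared-CPUs request and mutates the pod with
// a runtime hint annotation, mirroring the plugin's described behavior.
func admitSharedCPUs(pod *v1.Pod, nsAllowsMixedCPUs bool) error {
	var requested int64
	for _, c := range pod.Spec.Containers {
		if q, ok := c.Resources.Requests[v1.ResourceName(sharedCPUsResource)]; ok {
			requested += q.Value()
		}
	}
	if requested == 0 {
		return nil
	}
	if qos.GetPodQOS(pod) != v1.PodQOSGuaranteed {
		return fmt.Errorf("shared CPUs require a Guaranteed QoS pod")
	}
	if requested > 1 {
		return fmt.Errorf("no more than a single shared-CPUs resource may be requested")
	}
	if !nsAllowsMixedCPUs {
		return fmt.Errorf("namespace does not allow mixed-cpus workloads")
	}
	// Hint the runtime via an annotation: runtimes don't see extended resources.
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations["cpu-shared.crio.io/enabled"] = "true" // illustrative key
	return nil
}
```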
…ject openshift feature gates into pkg/features Signed-off-by: Swarup Ghosh <[email protected]>
This is a short-term fix; once we improve the cert rotation logic in library-go so it no longer depends on this hack, we can remove this carry patch. Squash with the previous PR during the rebase: openshift#1924. Squash with the previous PRs during the rebase: openshift#1924 openshift#1929
…phase and graceful termination phase This reverts commit 85f0f2c.
…navailable errors for the etcd health checker client
UPSTREAM: <carry>: replace newETCD3ProberMonitor with etcd3RetryingProberMonitor
This commit fixes bug 1919737 (https://bugzilla.redhat.com/show_bug.cgi?id=1919737).
* pkg/proxy/iptables/proxier.go (syncProxyRules): Prefer a local endpoint for the cluster DNS service.
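A sketch of the selection change, assuming a simplified `Endpoint` type; the real change is expressed in the iptables rule generation, not a standalone helper:

```go
package main

// Endpoint is a simplified stand-in for the proxier's endpoint info type.
type Endpoint struct {
	IP      string
	IsLocal bool
}

// preferLocalForDNS returns node-local endpoints for the cluster DNS
// service when any exist, falling back to the full set otherwise.
func preferLocalForDNS(endpoints []Endpoint, isClusterDNS bool) []Endpoint {
	if !isClusterDNS {
		return endpoints
	}
	var local []Endpoint
	for _, ep := range endpoints {
		if ep.IsLocal {
			local = append(local, ep)
		}
	}
	if len(local) > 0 {
		return local // avoid cross-node hops for DNS when a local backend exists
	}
	return endpoints
}
```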
…admission Signed-off-by: chiragkyal <[email protected]>
There are cases during kubelet startup where networking or other components can prevent the kubelet from posting a status update containing the bootId. The failed status update causes the kubelet to queue the NodeRebooted warning, which can sometimes create many events. This fix wraps the recordEventFunc to emit only one message per kubelet instantiation.
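A minimal sketch of the wrapping pattern this describes, using `sync.Once`; the function name is illustrative:

```go
package main

import "sync"

// onceRecorder wraps an event-recording callback so the NodeRebooted
// warning fires at most once per kubelet instantiation.
func onceRecorder(record func(reason, message string)) func(reason, message string) {
	var once sync.Once
	return func(reason, message string) {
		once.Do(func() { record(reason, message) })
	}
}
```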
Similarly to what we do for the managed CPU (aka workload partitioning) feature, introduce a master configuration file `/etc/kubernetes/openshift-llc-alignment` which needs to be present for the LLC alignment feature to be activated, in addition to the policy option being required. Note that this replaces the standard upstream feature gate check. This can be dropped when the feature per KEP kubernetes/enhancements#4800 goes beta.
Signed-off-by: Francesco Romani <[email protected]>
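A minimal sketch of gating a feature on the presence of that marker file; the path comes from the commit message, the function name is illustrative:

```go
package main

import "os"

const llcAlignmentConfigFile = "/etc/kubernetes/openshift-llc-alignment"

// llcAlignmentEnabled reports whether the LLC alignment feature should be
// activated, based on the marker file's presence on the node.
func llcAlignmentEnabled() bool {
	_, err := os.Stat(llcAlignmentConfigFile)
	return err == nil
}
```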
Compare fd58f56 to e45a568
/retest
/payload-job-with-prs periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6 openshift/origin#29362
@bertinatto: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/42815de0-b89f-11ef-836e-acdfd76d5666-0
/payload 4.19 nightly blocking
@bertinatto: trigger 13 job(s) of type blocking for the nightly release of OCP 4.19
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/230f2670-b937-11ef-874f-97fb83bae72b-0
trigger 67 job(s) of type informing for the nightly release of OCP 4.19
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/230f2670-b937-11ef-874f-97fb83bae72b-1
@bertinatto: This pull request references CNTRLPLANE-1 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the epic to target the "4.19.0" version, but no target version was set.
In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@bertinatto: This PR was included in a payload test run from openshift/origin#29376
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/c3bbbbd0-b999-11ef-8485-4df525008c08-0
@bertinatto: This PR was included in a payload test run from openshift/origin#29376
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/487cdc20-b9bb-11ef-8f3b-2dffa7abea46-0