
CFE-1134: Watch infrastructure and update AWS tags #1148

Merged: 1 commit merged into openshift:master on Nov 1, 2024

Conversation

chiragkyal
Member

@chiragkyal chiragkyal commented Sep 23, 2024

The PR introduces the following changes:

  • The ingress controller now watches for changes to the Infrastructure object. This ensures that any modification to user-defined tags (platform.AWS.ResourceTags) triggers an update of the load balancer service.

  • The awsLBAdditionalResourceTags annotation is now treated as a managed annotation. Any change to user-defined tags in the Infrastructure object is reflected in this annotation, prompting an update of the load balancer service.

Implements: CFE-1134
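The annotation value the controller writes is a comma-separated list of key=value pairs (as seen later in the service manifest, e.g. new-key=new-value). A minimal sketch of that serialization, using a local stand-in type for configv1.AWSResourceTag and a hypothetical helper name:

```go
package main

import (
	"fmt"
	"strings"
)

// AWSResourceTag is a local stand-in for configv1.AWSResourceTag
// (one key/value pair from platform.AWS.ResourceTags).
type AWSResourceTag struct {
	Key   string
	Value string
}

// buildTagsAnnotationValue serializes user-defined tags into the
// "k1=v1,k2=v2" format used by the
// service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags
// annotation. The helper name is illustrative, not the operator's.
func buildTagsAnnotationValue(tags []AWSResourceTag) string {
	pairs := make([]string, 0, len(tags))
	for _, tag := range tags {
		pairs = append(pairs, fmt.Sprintf("%s=%s", tag.Key, tag.Value))
	}
	return strings.Join(pairs, ",")
}

func main() {
	tags := []AWSResourceTag{{Key: "Owner", Value: "QE"}, {Key: "CaseID", Value: "OCP-76984"}}
	fmt.Println(buildTagsAnnotationValue(tags)) // Owner=QE,CaseID=OCP-76984
}
```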

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label Sep 23, 2024
@openshift-ci-robot
Contributor

openshift-ci-robot commented Sep 23, 2024

@chiragkyal: This pull request references CFE-1134, which is a valid Jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@chiragkyal
Member Author

/retest

@chiragkyal chiragkyal force-pushed the aws-tags branch 2 times, most recently from ea36409 to f2e5cf8 on October 3, 2024 07:05
@openshift-ci-robot
Contributor

openshift-ci-robot commented Oct 3, 2024

@chiragkyal: This pull request references CFE-1134, which is a valid Jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

The PR introduces the following changes:

  • The ingress controller now watches for the Infrastructure object changes. This ensures that any modifications to user-defined tags (platform.AWS.ResourceTags) trigger an update of the load balancer service.

  • The logic for determining load balancer service updates now considers the awsLBAdditionalResourceTags annotation as a managed annotation. Any changes to user-defined tags in the Infrastructure object will be reflected in this annotation, prompting an update to the load balancer service.

    • Changing awsLBAdditionalResourceTags annotation won't mark the IngressController and the operator as Upgradable=False.

Implements: CFE-1134


@chiragkyal chiragkyal changed the title CFE-1134: [WIP] Watch infrastructure and update AWS tags CFE-1134: Watch infrastructure and update AWS tags Oct 3, 2024
@openshift-ci-robot
Contributor

openshift-ci-robot commented Oct 7, 2024

@chiragkyal: This pull request references CFE-1134, which is a valid Jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

The PR introduces the following changes:

  • The ingress controller now watches for the Infrastructure object changes. This ensures that any modifications to user-defined tags (platform.AWS.ResourceTags) trigger an update of the load balancer service.

  • Consider the awsLBAdditionalResourceTags annotation as a managed annotation. Any changes to user-defined tags in the Infrastructure object will be reflected in this annotation, prompting an update to the load balancer service.

    • Updating awsLBAdditionalResourceTags annotation won't mark the IngressController and the operator as Upgradable=False.

Implements: CFE-1134


@chiragkyal
Member Author

/assign @Miciah

@candita
Contributor

candita commented Oct 9, 2024

/assign

Contributor

@Miciah Miciah left a comment

Could you add an E2E test? (I don't know whether an E2E test can update the ResourceTags in the infrastructure config status.)

pkg/operator/controller/ingress/controller.go
@@ -134,6 +136,12 @@ func New(mgr manager.Manager, config Config) (controller.Controller, error) {
if err := c.Watch(source.Kind[client.Object](operatorCache, &configv1.Proxy{}, handler.EnqueueRequestsFromMapFunc(reconciler.ingressConfigToIngressController))); err != nil {
return nil, err
}
// Watch for changes to infrastructure config to update user defined tags
if err := c.Watch(source.Kind[client.Object](operatorCache, &configv1.Infrastructure{}, handler.EnqueueRequestsFromMapFunc(reconciler.ingressConfigToIngressController),
predicate.NewPredicateFuncs(hasName(clusterInfrastructureName)),
Contributor

The other watches technically should have this predicate too, and ingressConfigToIngressController should be renamed. However, adding the predicate to the other watches and renaming the map function should be addressed in a follow-up.
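For context, the hasName predicate referenced above filters watch events down to the single cluster Infrastructure object. A simplified, dependency-free sketch of the idea (the real helper returns a func(client.Object) bool for controller-runtime's predicate.NewPredicateFuncs):

```go
package main

import "fmt"

// metaObject is a minimal stand-in for the parts of client.Object the
// predicate needs: only the object's name.
type metaObject interface {
	GetName() string
}

// hasName mirrors the idea of the operator's hasName helper: it returns
// a predicate function that admits only events for the object with the
// given name (here, the "cluster" Infrastructure singleton).
func hasName(name string) func(metaObject) bool {
	return func(o metaObject) bool {
		return o.GetName() == name
	}
}

// fakeObject is a test double for metaObject.
type fakeObject struct{ name string }

func (f fakeObject) GetName() string { return f.name }

func main() {
	filter := hasName("cluster")
	fmt.Println(filter(fakeObject{name: "cluster"}))   // true
	fmt.Println(filter(fakeObject{name: "something"})) // false
}
```

With such a predicate on the watch, edits to unrelated Infrastructure objects (if any existed) would never enqueue a reconcile request.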

pkg/operator/controller/ingress/load_balancer_service.go
Comment on lines 756 to 759
ignoredAnnotations := managedLoadBalancerServiceAnnotations.Union(sets.NewString(awsLBAdditionalResourceTags))
ignoredAnnotations := managedLoadBalancerServiceAnnotations.Clone()
ignoredAnnotations.Delete(awsLBAdditionalResourceTags)
Contributor

Can't we just use managedLoadBalancerServiceAnnotations now?

Suggested change
ignoredAnnotations := managedLoadBalancerServiceAnnotations.Union(sets.NewString(awsLBAdditionalResourceTags))
ignoredAnnotations := managedLoadBalancerServiceAnnotations.Clone()
ignoredAnnotations.Delete(awsLBAdditionalResourceTags)
return loadBalancerServiceAnnotationsChanged(current, expected, managedLoadBalancerServiceAnnotations)

To elaborate on that question, there are two general rules at play here:

  • First, the status logic sets Upgradeable=False if, and only if, it observes a discrepancy between the "managed" annotations' expected values and the actual values.
  • Second, by the time the status logic runs, there will not be any discrepancy between the expected (desired) annotation values and the actual annotation values.

And these general rules have exceptions:

  • As an exception to the first rule, before this PR, awsLBAdditionalResourceTags wasn't "managed", but even so, we set Upgradeable=False if it had been modified. (This is the logic that you are modifying here.)
  • As an exception to the second rule, if shouldRecreateLoadBalancer indicates that changing an annotation value requires recreating the service, then the desired and actual values can differ when the status logic observes them.

So now that you are making the awsLBAdditionalResourceTags annotation a managed annotation, don't we still want to set Upgradeable=False if the annotation value doesn't match the expected value?

Member Author

Thank you for the detailed explanation on how the status logic works and how it sets Upgradeable=False, as well as the exception that existed with awsLBAdditionalResourceTags before this PR. Earlier, I was under the impression that the status logic would still set Upgradeable=False even if awsLBAdditionalResourceTags was updated by the controller.

So now that you are making the awsLBAdditionalResourceTags annotation a managed annotation, don't we still want to set Upgradeable=False if the annotation value doesn't match the expected value?

Since awsLBAdditionalResourceTags will now be managed by the controller, and we still want to set Upgradeable=False if it’s updated by something other than the ingress controller, it does indeed make sense to use managedLoadBalancerServiceAnnotations directly in this logic. This way, the status logic will behave consistently for managed annotations when any discrepancy is observed.

I've removed the loadBalancerServiceTagsModified() function and used loadBalancerServiceAnnotationsChanged() directly inside loadBalancerServiceIsUpgradeable() and also added some comments for clearer understanding of the flow.
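The status logic described above boils down to comparing the managed annotation keys between the current and expected services. A simplified sketch of that comparison (the real loadBalancerServiceAnnotationsChanged uses the operator's annotation sets; the names here are illustrative):

```go
package main

import "fmt"

// annotationsChanged reports whether any of the managed annotation keys
// differ between the current and expected service annotations. A key
// that is present on one side but absent on the other also counts as a
// discrepancy. Unmanaged keys are ignored.
func annotationsChanged(current, expected map[string]string, managedKeys []string) bool {
	for _, key := range managedKeys {
		cur, curOK := current[key]
		exp, expOK := expected[key]
		if curOK != expOK || cur != exp {
			return true
		}
	}
	return false
}

func main() {
	managed := []string{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags"}
	current := map[string]string{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags": "Owner=QE"}
	expected := map[string]string{"service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags": "Owner=None"}
	fmt.Println(annotationsChanged(current, expected, managed)) // true
}
```

Because awsLBAdditionalResourceTags is now in the managed set, an out-of-band edit to it shows up as a discrepancy here, and the status logic can set Upgradeable=False.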

@openshift-ci-robot
Contributor

openshift-ci-robot commented Oct 15, 2024

@chiragkyal: This pull request references CFE-1134, which is a valid Jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.18.0" version, but no target version was set.

In response to this:

The PR introduces the following changes:

  • The ingress controller now watches for the Infrastructure object changes. This ensures that any modifications to user-defined tags (platform.AWS.ResourceTags) trigger an update of the load balancer service.

  • Consider the awsLBAdditionalResourceTags annotation as a managed annotation. Any changes to user-defined tags in the Infrastructure object will be reflected in this annotation, prompting an update to the load balancer service.

Implements: CFE-1134


@chiragkyal
Member Author

Could you add an E2E test? (I don't know whether an E2E test can update the ResourceTags in the infrastructure config status.)

I need to try it to see whether updating the infrastructure config status is possible through E2E. I used the kubectl edit-status infrastructure cluster command to update the status manually while testing; I need to check whether something similar can be done through E2E.

Having said that, do you think we should get QE sign-off for this PR?

@Miciah
Contributor

Miciah commented Oct 16, 2024

I need to try it to see whether updating the infrastructure config status is possible through E2E. I used the kubectl edit-status infrastructure cluster command to update the status manually while testing; I need to check whether something similar can be done through E2E.

The tests in test/e2e/configurable_route_test.go update the ingress cluster config status; maybe that's useful as a reference or precedent for adding a similar E2E test to this PR.

What I'm wondering is whether the test can update the infrastructure config status without some other controller stomping the changes, and whether there could be other reasons specific to the infrastructures resource or resourceTags API field why an E2E test should not or cannot update it.

Having said that, do you think we should get QE sign-off for this PR?

As a general matter, we should have QE sign-off for this PR. QE might prefer to do pre-merge testing as well.

Is day2 tags support being handled by a specific group of QA engineers, or are the QA engineers for each affected component responsible for testing the feature? Cc: @lihongan.

@chiragkyal
Member Author

The tests in test/e2e/configurable_route_test.go update the ingress cluster config status; maybe that's useful as a reference or precedent for adding a similar E2E test to this PR.
What I'm wondering is whether the test can update the infrastructure config status without some other controller stomping the changes, and whether there could be other reasons specific to the infrastructures resource or resourceTags API field why an E2E test should not or cannot update it.

I just pushed a commit to add an E2E test. It works fine locally; hopefully it works on CI as well.

Is day2 tags support being handled by a specific group of QA engineers, or are the QA engineers for each affected component responsible for testing the feature?

The QA engineers for each affected component are testing this feature.
/cc @lihongan

@openshift-ci openshift-ci bot requested a review from lihongan October 16, 2024 20:14
@chiragkyal
Member Author

/retest-required

@chiragkyal chiragkyal requested a review from Miciah October 17, 2024 07:25
@lihongan
Contributor

I did a pre-merge test on a standalone OCP cluster: it can add new tag key/value pairs and update existing tag values, but it cannot delete user-added tags. See:

$ oc get clusterversion
NAME      VERSION                                                AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.18.0-0.test-2024-10-18-013151-ci-ln-4jn9mbt-latest   True        False         59m     Cluster version is 4.18.0-0.test-2024-10-18-013151-ci-ln-4jn9mbt-latest

$ kubectl edit infrastructure cluster --subresource='status'

$ oc get infrastructures.config.openshift.io cluster -oyaml | yq .status.platformStatus
aws:
  region: us-east-2
  resourceTags:
    - key: Owner
      value: QE
    - key: CaseID
      value: OCP-76984
type: AWS

$ aws elb describe-tags --load-balancer-name a8a32335a6697415e9d55bafce2e6060 --output yaml
TagDescriptions:
- LoadBalancerName: a8a32335a6697415e9d55bafce2e6060
  Tags:
  - Key: kubernetes.io/service-name
    Value: openshift-ingress/router-default
  - Key: Owner
    Value: QE
  - Key: CaseID
    Value: OCP-76984
  - Key: kubernetes.io/cluster/hongli-tags-z7s5b
    Value: owned

# edit the status: remove one tag pair and update one tag's value
$ oc get infrastructures.config.openshift.io cluster -oyaml | yq .status.platformStatus
aws:
  region: us-east-2
  resourceTags:
    - key: Owner
      value: None
type: AWS

$ aws elb describe-tags --load-balancer-name a8a32335a6697415e9d55bafce2e6060 --output yaml
TagDescriptions:
- LoadBalancerName: a8a32335a6697415e9d55bafce2e6060
  Tags:
  - Key: kubernetes.io/service-name
    Value: openshift-ingress/router-default
  - Key: Owner
    Value: None
  - Key: CaseID
    Value: OCP-76984
  - Key: kubernetes.io/cluster/hongli-tags-z7s5b
    Value: owned

@chiragkyal please confirm if that's expected.
And will try to check HCP later.

@lihongan
Contributor

lihongan commented Oct 18, 2024

Tags can also be added to a newly created NLB custom ingresscontroller, but when the tags in the Infrastructure object are updated afterwards, the NLB is not updated accordingly.

$ oc get infrastructures.config.openshift.io cluster -oyaml | yq .status.platformStatus
aws:
  region: us-east-2
  resourceTags:
    - key: caseid
      value: ocp-88888
type: AWS

$ aws elbv2 describe-tags --resource-arns arn:aws:elasticloadbalancing:us-east-2:******:loadbalancer/net/a84a56f6f236b4f77b0032aaa037882d/9b00573907e8e952 --output yaml
TagDescriptions:
- ResourceArn: arn:aws:elasticloadbalancing:us-east-2:******:loadbalancer/net/a84a56f6f236b4f77b0032aaa037882d/9b00573907e8e952
  Tags:
  - Key: kubernetes.io/service-name
    Value: openshift-ingress/router-nlb
  - Key: caseid
    Value: ocp-88888
  - Key: kubernetes.io/cluster/hongli-tags-z7s5b
    Value: owned

# update with new tags, but the NLB tags are not changed
$ oc get infrastructures.config.openshift.io cluster -oyaml | yq .status.platformStatus
aws:
  region: us-east-2
  resourceTags:
    - key: new-key
      value: new-value
type: AWS

$ aws elbv2 describe-tags --resource-arns arn:aws:elasticloadbalancing:us-east-2:******:loadbalancer/net/a84a56f6f236b4f77b0032aaa037882d/9b00573907e8e952 --output yaml
TagDescriptions:
- ResourceArn: arn:aws:elasticloadbalancing:us-east-2:******:loadbalancer/net/a84a56f6f236b4f77b0032aaa037882d/9b00573907e8e952
  Tags:
  - Key: kubernetes.io/service-name
    Value: openshift-ingress/router-nlb
  - Key: caseid
    Value: ocp-88888
  - Key: kubernetes.io/cluster/hongli-tags-z7s5b
    Value: owned

It looks like a bug with NLB? Checking the NLB service, we can find the annotation with the updated tags; see:

$ oc -n openshift-ingress get svc router-nlb -oyaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: new-key=new-value
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "4"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    traffic-policy.network.alpha.openshift.io/local-with-fallback: ""

@lihongan
Contributor

It looks like kubernetes/kubernetes#96939 was intended to fix this for NLB, but it was closed.

@chiragkyal
Member Author

chiragkyal commented Oct 18, 2024

@chiragkyal please confirm if that's expected.

It looks like the tags are getting merged with the existing AWS resource tags. I think we can only control the aws-load-balancer-additional-resource-tags annotation values; what happens next depends on how that annotation is read by its consumer.
As per https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/add-remove-tags.html, AWS has a remove-tags API to remove tags, but I believe upstream only uses the add-tags API to add tags, so the tags are not getting removed.

@Miciah is there a way we can control this behavior?

@openshift-ci openshift-ci bot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. lgtm Indicates that a PR is ready to be merged. labels Oct 28, 2024
Contributor

openshift-ci bot commented Oct 28, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Miciah

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 28, 2024
}

t.Log("Updating AWS ResourceTags in the cluster infrastructure config")
retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
Contributor

Why not use updateInfrastructureConfigSpecWithRetryOnConflict here?

Member Author

Because we want to update the status of the Infrastructure config, instead of spec. I've moved the status update logic to a new function updateInfrastructureConfigStatusWithRetryOnConflict for better understanding.

test/e2e/operator_test.go
@@ -1291,6 +1300,79 @@ func TestInternalLoadBalancerGlobalAccessGCP(t *testing.T) {
}
}

// TestAWSResourceTagsChanged tests the functionality of updating AWS resource tags
Contributor

One of the acceptance criteria is:
"any modifications to user-defined tags (platform.AWS.ResourceTags) trigger an update of the load balancer service" - shouldn't we have a couple more tests, like deleting a user-defined tag and adding a user-defined tag?

Member Author

like deleting a user-defined tag and adding a user-defined tag?

The test already covers adding a user-defined tag.

However, updating the infra status again to remove a certain tag is possible, which updates the annotation as well; the tag won't be removed from the AWS resource itself, and this is expected behaviour for cloud-provider-aws. See #1148 (comment) for more details.

I've extended the test to cover this scenario of tag removal and annotation update in the latest changes. Hope it covers the acceptance criteria.

@openshift-ci openshift-ci bot removed the lgtm Indicates that a PR is ready to be merged. label Oct 29, 2024
@chiragkyal
Member Author

/retest

@chiragkyal chiragkyal requested a review from candita October 30, 2024 07:44
Comment on lines 1324 to 1325
// Revert to original status
originalInfraStatus := infraConfig.Status
Contributor

Suggested change
// Revert to original status
originalInfraStatus := infraConfig.Status
// Save a copy of the original infraConfig.Status, to revert changes before exiting.
originalInfraStatus := infraConfig.Status.DeepCopy()

Member Author

Thanks, addressed the suggestion.

// Revert to original status
originalInfraStatus := infraConfig.Status
t.Cleanup(func() {
updateInfrastructureConfigStatusWithRetryOnConflict(configClient, func(infra *configv1.Infrastructure) *configv1.Infrastructure {
Contributor

Suggested change
updateInfrastructureConfigStatusWithRetryOnConflict(configClient, func(infra *configv1.Infrastructure) *configv1.Infrastructure {
err := updateInfrastructureConfigStatusWithRetryOnConflict(configClient, func(infra *configv1.Infrastructure) *configv1.Infrastructure {

Member Author

Thanks, updated

infra.Status = originalInfraStatus
return infra
})
})
Contributor

@candita candita Oct 31, 2024

Something like this:

Suggested change
})
}
if err != nil {
	t.Logf("Unable to remove changes to the infraConfig, possible corruption of test environment: %v", err)
}
})

Member Author

Sure, updated.

// assertLoadBalancerServiceAnnotationWithPollImmediate checks if the specified annotation on the
// LoadBalancer Service of the given IngressController matches the expected value.
func assertLoadBalancerServiceAnnotationWithPollImmediate(t *testing.T, kclient client.Client, ic *operatorv1.IngressController, annotationKey, expectedValue string) {
err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
Contributor

@candita candita Oct 31, 2024

Sorry I didn't notice this earlier, but since I requested other changes, I'm adding this too.

We have started replacing the use of the deprecated wait.PollImmediate with wait.PollUntilContextTimeout(context.Background(), ...), as used in https://github.com/openshift/cluster-ingress-operator/blob/master/test/e2e/operator_test.go#L3189. Please use the updated function here, and in any new code that requires polled waiting.

Member Author

No problem and thanks for the suggestion. I have replaced wait.PollImmediate with wait.PollUntilContextTimeout to be consistent.

err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
service := &corev1.Service{}
if err := kclient.Get(context.Background(), controller.LoadBalancerServiceName(ic), service); err != nil {
t.Logf("failed to get service %s: %v", controller.LoadBalancerServiceName(ic), err)
Contributor

Suggested change
t.Logf("failed to get service %s: %v", controller.LoadBalancerServiceName(ic), err)
t.Logf("failed to get service %s: %v, retrying...", controller.LoadBalancerServiceName(ic), err)

Member Author

Done

return false, nil
}
if actualValue, ok := service.Annotations[annotationKey]; !ok {
t.Logf("load balancer has no %q annotation: %v", annotationKey, service.Annotations)
Contributor

Suggested change
t.Logf("load balancer has no %q annotation: %v", annotationKey, service.Annotations)
t.Logf("load balancer has no %q annotation yet: %v, retrying...", annotationKey, service.Annotations)

Member Author

Updated

return false, nil
} else if actualValue != expectedValue {
t.Logf("expected %s, found %s", expectedValue, actualValue)
return false, nil
Contributor

I expect that we don't want to keep trying after we find an unexpected value. Or would we expect it to change after this?

Suggested change
return false, nil
return false, fmt.Errorf("expected %s, found %s", expectedValue, actualValue)

Member Author

I think we need to keep trying here because the annotation value might not get updated immediately after the infra status is updated.

// updateInfrastructureConfigStatusWithRetryOnConflict updates the Infrastructure status by applying
// the given update function to the current Infrastructure object.
func updateInfrastructureConfigStatusWithRetryOnConflict(configClient *configclientset.Clientset, updateFunc func(*configv1.Infrastructure) *configv1.Infrastructure) error {
retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
Contributor

The retry.DefaultRetry backoff is only about 10ms (https://pkg.go.dev/k8s.io/client-go/util/retry#pkg-variables). Why not use wait.PollUntilContextTimeout(context.Background(), ...) and allow it to loop on conflict for a configured amount of time?

Member Author

Sure, we can use wait.PollUntilContextTimeout(context.Background(), ...) here as well. I've updated the logic in the latest changes. Thanks!

- Ingress controller now monitors changes to the Infrastructure object,
ensuring that modifications to user-defined AWS ResourceTags (platform.AWS.ResourceTags) trigger updates to the load balancer service.
- Consider awsLBAdditionalResourceTags annotation as a managed annotation.

Signed-off-by: chiragkyal <[email protected]>
Contributor

openshift-ci bot commented Nov 1, 2024

@chiragkyal: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test: ci/prow/e2e-aws-ovn-single-node — Commit: 19eced8 — Required: false — Rerun command: /test e2e-aws-ovn-single-node


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@chiragkyal chiragkyal requested a review from candita November 1, 2024 14:58
@candita
Contributor

candita commented Nov 1, 2024

/lgtm

@candita
Contributor

candita commented Nov 1, 2024

/unhold

@openshift-ci openshift-ci bot added lgtm Indicates that a PR is ready to be merged. and removed do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Nov 1, 2024
@openshift-merge-bot openshift-merge-bot bot merged commit 1ee3995 into openshift:master Nov 1, 2024
16 of 17 checks passed
@openshift-bot
Contributor

[ART PR BUILD NOTIFIER]

Distgit: ose-cluster-ingress-operator
This PR has been included in build ose-cluster-ingress-operator-container-v4.18.0-202411011908.p0.g1ee3995.assembly.stream.el9.
All builds following this will include this PR.

@chiragkyal chiragkyal deleted the aws-tags branch November 3, 2024 07:07