
failed to list *v1.ConfigMap: configmaps is forbidden #5383

Closed
noamgreen opened this issue Dec 23, 2023 · 8 comments
Labels: lifecycle/closed, lifecycle/stale, question (Further information is requested)

Comments

@noamgreen

Description


After upgrading from v0.32 to v0.33.1 I see an error that I think is related to the role:
"failed to list *v1.ConfigMap: configmaps is forbidden"

I tried moving to the kube-system namespace and started getting this error (logs and my Helm values below):

karpenter-5f56b5cbdb-kmw9n controller W1223 13:13:45.929442       1 reflector.go:535] k8s.io/[email protected]/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:karpenter:karpenter-manager" cannot list resource "configmaps" in API group "" in the namespace "karpenter"
karpenter-5f56b5cbdb-kmw9n controller E1223 13:13:45.929491       1 reflector.go:147] k8s.io/[email protected]/tools/cache/reflector.go:229: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps is forbidden: User "system:serviceaccount:karpenter:karpenter-manager" cannot list resource "configmaps" in API group "" in the namespace "karpenter"

karpenter-5f56b5cbdb-kmw9n controller {"level":"INFO","time":"2023-12-23T13:11:40.929Z","logger":"controller","message":"k8s.io/[email protected]/tools/cache/reflector.go:229: failed to list *v1.ConfigMap: configmaps is forbidden: User \"system:serviceaccount:karpenter:karpenter-manager\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"karpenter\"","commit":"56aaec1"}
        replicas: 1
        logLevel: info
        logConfig:
          enabled:
            logLevel:
              # -- Global log level, defaults to 'info'
              global: info
              # -- Controller log level, defaults to 'info'
              controller: info
              # -- Error log level, defaults to 'error'
              webhook: error
        serviceAccount:
          create: true
          name: "karpenter-manager"
          annotations:
            eks.amazonaws.com/role-arn: "arn:aws:iam::ZZZZZZZZZZZ:role/karpenter-manager"
        controller:
          env:
            - name: AWS_REGION
              value: us-east-1
        serviceMonitor:
          enabled: true
          additionalLabels:
            release: prometheus-operator
        settings:
          clusterEndpoint: https://.gr7.us-east-1.eks.amazonaws.com
          clusterName: eks-dev
          interruptionQueue: sqs-k
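
To double-check that this really is an RBAC denial, the permission can be queried directly; a quick sketch (the service account and namespace names below are copied from the error message and may differ in other setups):

  # Ask the API server whether the Karpenter service account may list
  # ConfigMaps in the namespace named in the error; "no" matches the
  # forbidden error above.
  kubectl auth can-i list configmaps \
    --as=system:serviceaccount:karpenter:karpenter-manager \
    -n karpenter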

Versions:

  • Chart Version: 0.32.3
  • Kubernetes Version (kubectl version): v1.26.11-eks-8cb36c9

Side note: I am not sure why you did not push the Helm chart version; the code is at 0.33 but the chart is at 0.32.

In any case, I am trying to go back to 0.32.

noamgreen added the bug (Something isn't working) and needs-triage (Issues that need to be triaged) labels on Dec 23, 2023
@jonathan-innis
Contributor

If I understand your question correctly, you are saying that you are failing on upgrade to v0.33.x due to the lack of ConfigMap permissions? You shouldn't need ConfigMap permissions starting in v0.33.x since the log config should get mounted as a volume into the container.

Can you share the deployment that is being deployed to your cluster so we can confirm how the Deployment is grabbing the logging configuration that you specified?
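
If it helps, something like the following should pull out the relevant part (a sketch; the namespace and deployment name assume the chart's defaults and may differ in your install):

  # Dump the volumes and volumeMounts of the Karpenter deployment; on
  # v0.33.x the logging configuration should show up here as a mounted
  # volume rather than being read from a ConfigMap through the API.
  kubectl -n karpenter get deployment karpenter -o yaml \
    | grep -n -E -A 15 'volumes:|volumeMounts:'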

jonathan-innis added the question (Further information is requested) label and removed the bug (Something isn't working) and needs-triage (Issues that need to be triaged) labels on Dec 23, 2023
@yevgeniyo-ps

Same for me: upgrading from 0.29 to 0.33.1, deployed via Helm.

E1226 10:42:36.676382       1 reflector.go:140] k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Unauthorized
W1226 10:42:49.421259       1 reflector.go:424] k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Unauthorized
E1226 10:42:49.421291       1 reflector.go:140] k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Unauthorized
W1226 10:43:11.376656       1 reflector.go:424] k8s.io/[email protected]/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Unauthorized
E1226 10:43:11.376686       1 reflector.go:140] k8s.io/[email protected]/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Unauthorized
panic: error waiting for ConfigMap informer to sync

goroutine 1 [running]:
github.com/samber/lo.must({0x2471dc0, 0xc000719350}, {0x0, 0x0, 0x0})
	github.com/samber/[email protected]/errors.go:53 +0x1f4
github.com/samber/lo.Must0(...)
	github.com/samber/[email protected]/errors.go:72

@jonathan-innis
Contributor

Same for me: upgrading from 0.29 to 0.33.1, deployed via Helm.

@yevgeniyo Just looking over the logs that you shared, those look like errors from startup of the v0.29 version of Karpenter rather than the v0.33.1 version, based on the version of the library that is throwing the error. Do you know if these errors are being thrown by the newer version?
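
One way to confirm which binary is actually producing these logs is to check the image on the running pods, for example (a sketch; the namespace and label selector assume the chart's default installation):

  # Print each running Karpenter pod together with its container image,
  # to see whether the errors come from the old v0.29 image or v0.33.1.
  kubectl -n karpenter get pods -l app.kubernetes.io/name=karpenter \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'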

@orilani

orilani commented Jan 2, 2024

@noamgreen @yevgeniyo It seems like there's a mismatch in the image tag in the Helm chart:
chart version 0.33.1 points to tag v0.32.3 instead of v0.33.1, and the digest points to afa0d0fd5ac375859dc3d239ec992f197cdf01f6c8e3413e3845a43c2434621e instead of 7f484951baf70d1574d6408be3947a3ca5f54463c2d1f29993b492e7e916ef11. I did some digging in the Karpenter public ECR and found this out.

@jonathan-innis these values probably need to be updated?
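
For anyone who wants to verify this themselves, the default image reference shipped with a chart version can be inspected without installing anything, for example (a sketch; the OCI path is the public Karpenter registry, and the exact version string, with or without a leading v, should match what the registry lists):

  # Show the controller image repository, tag and digest that the chart
  # defaults to, so they can be compared against the expected v0.33.1 release.
  helm show values oci://public.ecr.aws/karpenter/karpenter --version 0.33.1 \
    | grep -A 3 'image:'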

@jonathan-innis
Contributor

How are you downloading the chart? Are you pulling it from the OCI artifact store or are you using the code source? There's a similar comment here: #5410 (comment). This is something that we are definitely aware of and need to fix; for now, our recommendation is to use the OCI store as the source-of-truth for pulling the chart, not the git repo.
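
For example, installing straight from the OCI registry rather than from a git checkout would look roughly like this (a sketch; the version string and the my-values.yaml file are illustrative placeholders):

  # Pull the chart from the public OCI registry (the source of truth)
  # instead of the in-repo chart, and install/upgrade with your own values.
  helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
    --version 0.33.1 \
    --namespace karpenter --create-namespace \
    -f my-values.yaml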

@jonathan-innis
Contributor

This issue is now being tracked here: #5415


This issue has been inactive for 14 days. StaleBot will close this stale issue after 14 more days of inactivity.

@noamgreen
Author

#5939

Same issue ...
