This repository has been archived by the owner on Dec 31, 2022. It is now read-only.
If we configure this authenticator as an initContainer instead of a sidecar, then whenever there is a DR switch on the Vault side, the token needs to be renewed. What actually happens is that the main container exits with a non-zero code because the Vault token is invalid (the Vault DR activity shifted to a new datacenter and hence a new Vault instance). Kubernetes then keeps restarting that container again and again, and it re-uses the stale token written by the initContainer. As far as I can tell, there is no way to recreate the pod automatically so that the initContainer runs again and fetches a new token from the newly active datacenter. Is a sidecar a better approach in this case, or is there a way to make the initContainer run again to fetch a new token?
Init containers are designed to run once and exit. You can use a sidecar, but that introduces other problems (like signalling to child processes that there is a new token).
This is actually quite a tricky nut to crack. I've been running into the same issue: if the main container crashes and is restarted by Kubernetes, it reads a .vault-token that may have expired hours ago, and AFAIK there is no way to signal Kubernetes to consider the whole pod failed and retry from the start.
Just came across this issue as my k8s pod had 2500 restarts because of the token.
Now the obvious simple solution would be to keep renewing the token every few minutes so it stays valid across a restart - but the underlying issue remains that if the token manages to expire even once (for whatever reason, e.g. connectivity issues with Vault), the k8s deployment will go into a crash loop and never recover unless someone manually kills the pod 🤔
I tried a sidecar container next, but that has the obvious disadvantage of having to manage lifecycle, signalling, etc.
Is there a new "default" way to do this that I am not aware of? If not, the obvious solution would be to make my application not just Vault-aware but also k8s-aware and have it request the token itself (also not the greatest solution).