We have a use case which we are not quite sure how best to support. For context, we host a series of applications in Kubernetes which require secrets such as API keys or other client secrets when communicating (both internally and with external systems). The secrets are stored in Azure Key Vault and synced to Kubernetes Secrets in order to expose them as environment variables in our application pods.

Our API keys are usually created with a set of permissions granted on the target APIs. As new application versions are released, the applications using these keys could require additional or even fewer permissions, in which case we need to issue new API keys. To manage all of these API keys and keep them up to date, each application is required to provide a small CLI app in the form of a Docker image that can be used to declaratively and idempotently keep the secrets up to date for that application. The gist is: if nothing has changed since the last version, leave the existing secrets as is; otherwise, issue a new secret and store it in Azure Key Vault.

When we roll out an update to one of these applications, we first invoke its CLI container in a Kubernetes Job to ensure that all secrets are up to date. Then we trigger a rolling upgrade of the application itself.

The problem occurs when the application is rolled out too soon and the CSI driver has not yet synced the new secrets stored in Azure Key Vault. We require the new secrets to be backwards compatible, such that the old app version can continue to work should the rollout fail. But the new app version could use additional API endpoints that rely on the permissions granted to the new API key. This causes the new application pods to fail until the secret is synced via the CSI driver and a tool such as Reloader can replace our pods once more to mount the new API key.

We would like to hear about best practices for ensuring that our secrets are synced in time before we roll out new app versions.
Solutions we currently consider are:
Looking forward to hearing your thoughts on this!
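For reference, the "sync first, roll out second" sequence we describe could be sketched roughly as the following Job manifest. This is only an illustration of our setup, not our actual manifests; the Job name, image, and CLI arguments are all hypothetical:

```yaml
# Step 1: run the application's secret-sync CLI as a Job and wait for it to
# complete (e.g. kubectl wait --for=condition=complete job/my-app-secret-sync)
# before Step 2: triggering the rolling upgrade of the application itself.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-app-secret-sync            # hypothetical name
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: sync
          image: registry.example.com/my-app-secret-cli:2.0.0  # the app's CLI image (assumed tag)
          args: ["sync", "--app-version", "2.0.0"]             # hypothetical CLI interface
```

Waiting for the Job to complete only guarantees that the key exists in Azure Key Vault, not that the CSI driver has synced it into the cluster, which is exactly the gap described above.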
For anyone else who happens to come across this looking for an answer like we did, here is the solution we ended up implementing.

We include our application version in the names of our SecretProviderClasses as well as our Secret resources:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-app-secret-1.0.0
spec:
  secretObjects:
    - secretName: my-app-secret-1.0.0
      type: Opaque
      data:
        - ...
```

During an application upgrade, let's say from v1 to v2, the SecretProviderClass named my-app-secret-1.0.0 is deleted and a new SecretProviderClass named my-app-secret-2.0.0 is created in its place. However, the old Secret my-app-secret-1.0.0 is … Old v1 pods continue to operate with the old secrets, and since the SecretProviderClass …
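To make the versioned wiring concrete, here is a sketch of how a v2 pod would reference only the v2 resources. The Deployment name, image, and mount path are hypothetical; only the versioned resource names follow the naming scheme described above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                        # hypothetical name
spec:
  template:
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:2.0.0   # assumed image
          envFrom:
            - secretRef:
                name: my-app-secret-2.0.0   # versioned Secret synced by the CSI driver
          volumeMounts:
            - name: secrets-store           # mounting the CSI volume is what triggers the sync
              mountPath: /mnt/secrets-store
              readOnly: true
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: my-app-secret-2.0.0  # versioned SecretProviderClass
```

Because the v2 pods reference the v2 Secret by name, they can never come up against the stale v1 secret: a v2 pod only starts once the CSI driver has mounted the v2 SecretProviderClass and synced the matching Secret.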