The CSI snapshotter is part of the Kubernetes implementation of the Container Storage Interface (CSI).
The volume snapshot feature supports CSI v1.0 and higher. It was introduced as an Alpha feature in Kubernetes v1.12 and was promoted to a Beta feature in Kubernetes 1.17.
⚠️ WARNING: There is a new validating webhook server which provides tightened validation on snapshot objects. This SHOULD be installed by all users of this feature. More details below.
With the promotion of Volume Snapshot to beta, the feature is now enabled by default on standard Kubernetes deployments instead of being opt-in.
The move of the Kubernetes Volume Snapshot feature to beta also means:
- A revamp of volume snapshot APIs.
- The CSI external-snapshotter sidecar is split into two controllers, a snapshot controller and a CSI external-snapshotter sidecar.
The snapshot controller is deployed by Kubernetes distributions; it watches VolumeSnapshot CRD objects and manages the creation and deletion lifecycle of snapshots.
The CSI external-snapshotter sidecar watches Kubernetes VolumeSnapshotContent CRD objects and triggers CreateSnapshot/DeleteSnapshot against a CSI endpoint.
A blog post about the beta feature can be found here.
This information reflects the head of this branch.
| Compatible with CSI Version | Container Image | Min K8s Version | Snapshot CRD version |
| --- | --- | --- | --- |
| CSI Spec v1.2.0 | k8s.gcr.io/sig-storage/csi-snapshotter | 1.17 | v1beta1 |
| CSI Spec v1.2.0 | k8s.gcr.io/sig-storage/snapshot-controller | 1.17 | v1beta1 |
| CSI Spec v1.2.0 | k8s.gcr.io/sig-storage/snapshot-validation-webhook | 1.17 | v1beta1 |
The VolumeSnapshotDataSource feature gate was introduced in Kubernetes 1.12 and is enabled by default as of Kubernetes 1.17, when the volume snapshot feature was promoted to beta.
Both the snapshot controller and the CSI external-snapshotter sidecar follow the controller pattern and use informers to watch for events. The snapshot controller watches for VolumeSnapshot and VolumeSnapshotContent create/update/delete events.
The CSI external-snapshotter sidecar watches only for VolumeSnapshotContent create/update/delete events. It filters for objects whose associated VolumeSnapshotClass specifies `Driver==<CSI driver name>` and then processes these events in workqueues with exponential backoff.
The CSI external-snapshotter sidecar talks to the CSI driver over a socket (`/run/csi/socket` by default, configurable by `-csi-address`).
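As a sketch, the sidecar is typically deployed in the same pod as the CSI driver container, sharing the socket directory through an `emptyDir` volume. The container names, image tag, and volume name below are illustrative, not taken from this document:

```yaml
# Illustrative pod-spec fragment: the external-snapshotter sidecar and the
# CSI driver container share the socket directory via an emptyDir volume.
containers:
  - name: csi-snapshotter
    image: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.0
    args:
      - "--csi-address=/run/csi/socket"   # the default path, shown explicitly
    volumeMounts:
      - name: socket-dir
        mountPath: /run/csi
  # ...the CSI driver container mounts the same socket-dir volume...
volumes:
  - name: socket-dir
    emptyDir: {}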
- DeletionPolicy is a required field in both VolumeSnapshotClass and VolumeSnapshotContent. This way the user has to explicitly specify it, leaving no room for confusion.
- VolumeSnapshotSpec has a required Source field. Source may be either a PersistentVolumeClaimName (if dynamically provisioning a snapshot) or VolumeSnapshotContentName (if pre-provisioning a snapshot).
- VolumeSnapshotContentSpec has a required Source field. This Source may be either a VolumeHandle (if dynamically provisioning a snapshot) or a SnapshotHandle (if pre-provisioning volume snapshots).
- VolumeSnapshot contains a Status to indicate the current state of the volume snapshot. It has a field BoundVolumeSnapshotContentName to indicate the VolumeSnapshot object is bound to a VolumeSnapshotContent.
- VolumeSnapshotContent contains a Status to indicate the current state of the volume snapshot content. It has a field SnapshotHandle to indicate that the VolumeSnapshotContent represents a snapshot on the storage system.
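The required fields described above can be illustrated with a minimal dynamically-provisioned example; the class, snapshot, driver, and PVC names here are placeholders:

```yaml
# Hypothetical names throughout; deletionPolicy and source are the
# required fields described above.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-hostpath-snapclass
driver: hostpath.csi.k8s.io
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    # Dynamic provisioning; a pre-provisioned snapshot would instead set
    # volumeSnapshotContentName here.
    persistentVolumeClaimName: my-pvc
```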
The Volume Snapshot feature now depends on a new volume snapshot controller in addition to the volume snapshot CRDs. Both the volume snapshot controller and the CRDs are independent of any CSI driver. Regardless of the number of CSI drivers deployed on the cluster, there must be exactly one instance of the volume snapshot controller running and one set of volume snapshot CRDs installed per cluster.
Therefore, it is strongly recommended that Kubernetes distributors bundle and deploy the controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver).
If your Kubernetes distribution does not bundle the snapshot controller, you may manually install these components by executing the following steps. Note that the snapshot controller YAML files in the git repository deploy into the default namespace for system testing purposes. For general use, update the snapshot controller YAMLs with an appropriate namespace prior to installing. For example, on a Vanilla Kubernetes cluster update the namespace from 'default' to 'kube-system' prior to issuing the kubectl create command.
There is a new validating webhook server which provides tightened validation on snapshot objects. The cluster admin or Kubernetes distribution admin should install the webhook alongside the snapshot controllers and CRDs. More details below.
Install Snapshot Beta CRDs:
- kubectl create -f client/config/crd
- https://github.com/kubernetes-csi/external-snapshotter/tree/master/client/config/crd
- Do this once per cluster
Install Common Snapshot Controller:
- Update the namespace to an appropriate value for your environment (e.g. kube-system)
- kubectl create -f deploy/kubernetes/snapshot-controller
- https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/snapshot-controller
- Do this once per cluster
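The namespace update mentioned above can be scripted. The sketch below uses a stand-in manifest, since the exact file names under deploy/kubernetes/snapshot-controller may change; on a real cluster the rewritten output would be piped into `kubectl create -f -`:

```shell
# Stand-in for the real snapshot-controller YAMLs (illustrative content).
cat > /tmp/snapshot-controller.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: snapshot-controller
  namespace: default
EOF

# Rewrite the hard-coded 'default' namespace to kube-system.
sed 's/namespace: default/namespace: kube-system/' /tmp/snapshot-controller.yaml
```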
Install CSI Driver:
- Follow instructions provided by your CSI Driver vendor.
- Here is an example of installing the sample hostpath CSI driver:
- kubectl create -f deploy/kubernetes/csi-snapshotter
- https://github.com/kubernetes-csi/external-snapshotter/tree/master/deploy/kubernetes/csi-snapshotter
The snapshot validating webhook is an HTTP callback which responds to admission requests. It is part of a larger plan to tighten validation for volume snapshot objects. This webhook introduces the ratcheting validation mechanism targeting the tighter validation. The cluster admin or Kubernetes distribution admin should install the webhook alongside the snapshot controllers and CRDs.
⚠️ WARNING: Cluster admins choosing not to install the webhook server and participate in the phased release process can cause future problems when upgrading from the `v1beta1` to the `v1` VolumeSnapshot API, if there are currently persisted objects which fail the new stricter validation. Potential impacts include being unable to delete invalid snapshot objects.
Read more about how to install the example webhook here.
- `--tls-cert-file`: File containing the x509 certificate for HTTPS (CA cert, if any, concatenated after server cert). Required.
- `--tls-private-key-file`: File containing the x509 private key matching `--tls-cert-file`. Required.
- `--port`: Secure port that the webhook listens on (default 443).
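Put together, these flags typically appear in the webhook container spec along the following lines. The certificate paths, secret name, and image tag are assumptions for illustration, not taken from this repo:

```yaml
# Illustrative container fragment for the validation webhook; cert paths
# and the secret name are placeholders.
- name: snapshot-validation-webhook
  image: k8s.gcr.io/sig-storage/snapshot-validation-webhook:v3.0.0
  args:
    - "--tls-cert-file=/etc/snapshot-validation-webhook/certs/cert.pem"
    - "--tls-private-key-file=/etc/snapshot-validation-webhook/certs/key.pem"
    - "--port=443"
  volumeMounts:
    - name: snapshot-validation-webhook-certs
      mountPath: /etc/snapshot-validation-webhook/certs
      readOnly: true
```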
- `--leader-election`: Enables leader election. This is useful when there are multiple replicas of the same snapshot controller running for the same Kubernetes deployment. Only one of them may be active (the leader). A new leader is elected when the current leader dies or becomes unresponsive for ~15 seconds.
- `--leader-election-namespace <namespace>`: The namespace where the leader election resource exists. Defaults to the pod namespace if not set.
- `--http-endpoint`: The TCP network address where the HTTP server for diagnostics, including metrics and the leader election health check, will listen (example: `:8080`, which corresponds to port 8080 on localhost). The default is an empty string, which means the server is disabled.
- `--metrics-address`: (deprecated) The TCP network address where the Prometheus metrics endpoint will run (example: `:8080`, which corresponds to port 8080 on localhost). The default is an empty string, which means the metrics endpoint is disabled.
- `--metrics-path`: The HTTP path where Prometheus metrics will be exposed. Default is `/metrics`.
- `--worker-threads`: Number of worker threads. Default value is 10.
- `--kubeconfig <path>`: Path to the Kubernetes client configuration that the snapshot controller uses to connect to the Kubernetes API server. When omitted, the default token provided by Kubernetes is used. This option is useful only when the snapshot controller does not run as a Kubernetes pod, e.g. for debugging.
- `--resync-period <duration>`: Internal resync interval when the snapshot controller re-evaluates all existing `VolumeSnapshot` instances and tries to fulfill them, i.e. create/delete corresponding snapshots. It does not affect retries of failed calls! It should be used only when there is a bug in the Kubernetes watch logic. Default is 15 minutes.
- `--version`: Prints the current snapshot controller version and exits.
- All glog/klog arguments are supported, such as `-v <log level>` or `-alsologtostderr`.
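A typical snapshot-controller Deployment wires several of these flags together. The image tag, namespace, and replica count below are illustrative only:

```yaml
# Illustrative: two replicas with leader election so only one is active
# at a time; the other is a hot standby.
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: snapshot-controller
          image: k8s.gcr.io/sig-storage/snapshot-controller:v3.0.0
          args:
            - "--v=5"                                  # klog verbosity
            - "--leader-election=true"
            - "--leader-election-namespace=kube-system"
```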
- `--csi-address <path to CSI socket>`: Path to the CSI driver socket inside the pod that the external-snapshotter container uses to issue CSI operations (`/run/csi/socket` is used by default).
- `--leader-election`: Enables leader election. This is useful when there are multiple replicas of the same external-snapshotter running for one CSI driver. Only one of them may be active (the leader). A new leader is elected when the current leader dies or becomes unresponsive for ~15 seconds.
- `--leader-election-namespace <namespace>`: The namespace where the leader election resource exists. Defaults to the pod namespace if not set.
- `--timeout <duration>`: Timeout of all calls to the CSI driver. It should be set to a value that accommodates the majority of `CreateSnapshot`, `DeleteSnapshot`, and `ListSnapshots` calls. 1 minute is used by default.
- `--snapshot-name-prefix`: Prefix to apply to the name of a created snapshot. Default is `snapshot`.
- `--snapshot-name-uuid-length`: Length in characters for the generated UUID of a created snapshot. The default behavior is to NOT truncate.
- `--worker-threads`: Number of worker threads for running create snapshot and delete snapshot operations. Default value is 10.
- `--kubeconfig <path>`: Path to the Kubernetes client configuration that the CSI external-snapshotter uses to connect to the Kubernetes API server. When omitted, the default token provided by Kubernetes is used. This option is useful only when the external-snapshotter does not run as a Kubernetes pod, e.g. for debugging.
- `--resync-period <duration>`: Internal resync interval when the CSI external-snapshotter re-evaluates all existing `VolumeSnapshotContent` instances and tries to fulfill them, i.e. update/delete corresponding snapshots. It does not affect retries of failed CSI calls! It should be used only when there is a bug in the Kubernetes watch logic. Default is 15 minutes.
- `--version`: Prints the current CSI external-snapshotter version and exits.
- All glog/klog arguments are supported, such as `-v <log level>` or `-alsologtostderr`.
The external-snapshotter optionally exposes an HTTP endpoint at the address:port specified by the `--http-endpoint` argument. When set, these two paths are exposed:
- Metrics path, as set by the `--metrics-path` argument (default is `/metrics`).
- Leader election health check at `/healthz/leader-election`. It is recommended to run a liveness probe against this endpoint when leader election is used, to kill an external-snapshotter leader that fails to connect to the API server to renew its leadership. See kubernetes-csi/csi-lib-utils#66 for details.
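That liveness-probe recommendation can be sketched as a container fragment like the one below; the container name, image tag, port, and probe timings are assumptions, and the probe only makes sense when the sidecar runs with `--leader-election` and `--http-endpoint=:8080`:

```yaml
# Illustrative sidecar fragment: probe the leader election health check.
- name: csi-snapshotter
  image: k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.0
  args:
    - "--csi-address=/run/csi/socket"
    - "--leader-election"
    - "--http-endpoint=:8080"
  livenessProbe:
    httpGet:
      path: /healthz/leader-election
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 20
```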
The change from v1alpha1 to v1beta1 snapshot APIs is not backward compatible.
If you have already deployed v1alpha1 snapshot APIs and external-snapshotter sidecar controller and want to upgrade to v1beta1, you need to do the following:
- Note: The underlying snapshots on the storage system will be deleted in the upgrade process!!!
- Delete volume snapshots created using v1alpha1 snapshot CRDs and external-snapshotter sidecar controller.
- Uninstall v1alpha1 snapshot CRDs, external-snapshotter sidecar controller, and CSI driver.
- Install v1beta1 snapshot CRDs, snapshot controller, CSI external-snapshotter sidecar and CSI driver.
Running Unit Tests:

```
go test -timeout 30s github.com/kubernetes-csi/external-snapshotter/pkg/common-controller
go test -timeout 30s github.com/kubernetes-csi/external-snapshotter/pkg/sidecar-controller
```
Volume snapshot APIs and the client library are now in a separate sub-module: `github.com/kubernetes-csi/external-snapshotter/client/v3`.
Use the command `go get -u github.com/kubernetes-csi/external-snapshotter/client/[email protected]` to get the client library.
`ResourceQuotas` are namespaced objects that can be used to set limits on objects of a particular `Group.Version.Kind`. Before setting a resource quota, make sure that the snapshot CRDs are installed in the cluster. If not, please follow this guide.
`kubectl get crds | grep snapshot`
Now create a `ResourceQuota` object which sets a limit on the number of volumesnapshots that can be created:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: snapshot-quota
spec:
  hard:
    count/volumesnapshots.snapshot.storage.k8s.io: "10"
```
If you try to create more snapshots than allowed, you will see an error like the following:

```
Error from server (Forbidden): error when creating "csi-snapshot.yaml": volumesnapshots.snapshot.storage.k8s.io "new-snapshot-demo" is forbidden: exceeded quota: snapshot-quota, requested: count/volumesnapshots.snapshot.storage.k8s.io=1, used: count/volumesnapshots.snapshot.storage.k8s.io=10, limited: count/volumesnapshots.snapshot.storage.k8s.io=10
```
external-snapshotter uses go modules.
Learn how to engage with the Kubernetes community on the community page.
You can reach the maintainers of this project at:
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.