From 9c6415cb7747e668cf7c3dd4dd28c1065da20ab2 Mon Sep 17 00:00:00 2001 From: pgasca <87044997+pgasca@users.noreply.github.com> Date: Wed, 14 Jun 2023 14:55:33 -0500 Subject: [PATCH] Combining useful information from EKS User Guide. Moved considerations, prereqs, and IAM steps from EKS User Guide. Globally replaced "EFS" and "AWS EFS" with "Amazon EFS". "Amazon EKS" branding Globally replaced "EKS" with "Amazon EKS". Replacement correction. Global replacement fix. Updated links to work with move Merged deploy/install info from EKS User Guide. Moved IAM policy steps to separate file. Removed steps that were moved to separate file. Added new link. Moved over "Create an Amazon EFS file system" from user guide Update and rename efs-create-filesystem to efs-create-filesystem.md Update iam-policy-create.md Give extra context. Adding references to other sections, other tweaks small edits Update iam-policy-create.md Update README.md Merged extra details from EKS User Guide. Update README.md Formatting fixes Update README.md Cleaned up descriptions of settings. Update README.md Update README.md Merged extra details from EKS User Guide. Update README.md Addressed Ashley's comment regarding numbering starting over. Also updated replacing release-X.X with more clear instructions on how to look up latest active branch. Corrected Branches links. Removed mention that master isn't recommended. I originally added these statements based on my understanding from a different storage driver (https://github.com/kubernetes-sigs/aws-fsx-csi-driver/blob/master/docs/install.md), but this same recommendation may not apply to EFS CSI. 
--- docs/README.md | 251 +++++++++++++----- docs/efs-create-filesystem.md | 114 ++++++++ docs/iam-policy-create.md | 115 ++++++++ .../kubernetes/dynamic_provisioning/README.md | 204 ++++++++++---- examples/kubernetes/multiple_pods/README.md | 218 +++++++++++---- 5 files changed, 737 insertions(+), 165 deletions(-) create mode 100644 docs/efs-create-filesystem.md create mode 100644 docs/iam-policy-create.md diff --git a/docs/README.md b/docs/README.md index 69259c48c..65db4807e 100644 --- a/docs/README.md +++ b/docs/README.md @@ -7,7 +7,7 @@ The [Amazon Elastic File System](https://aws.amazon.com/efs/) Container Storage Interface (CSI) Driver implements the [CSI](https://github.com/container-storage-interface/spec/blob/master/spec.md) specification for container orchestrators to manage the lifecycle of Amazon EFS file systems. ### CSI Specification Compatibility Matrix -| AWS EFS CSI Driver \ CSI Spec Version | v0.3.0| v1.1.0 | v1.2.0 | +| Amazon EFS CSI Driver \ CSI Spec Version | v0.3.0| v1.1.0 | v1.2.0 | |----------------------------------------|-------|--------|--------| | master branch | no | no | yes | | v1.x.x | no | no | yes | @@ -16,9 +16,9 @@ The [Amazon Elastic File System](https://aws.amazon.com/efs/) Container Storage | v0.1.0 | yes | no | no | ## Features -EFS CSI driver supports dynamic provisioning and static provisioning. -Currently Dynamic Provisioning creates an access point for each PV. This mean an AWS EFS file system has to be created manually on AWS first and should be provided as an input to the storage class parameter. -For static provisioning, AWS EFS file system needs to be created manually on AWS first. After that it can be mounted inside a container as a volume using the driver. +Amazon EFS CSI driver supports dynamic provisioning and static provisioning. +Currently, Dynamic Provisioning creates an access point for each PV. 
This means an Amazon EFS file system has to be created manually on AWS first and should be provided as an input to the storage class parameter. +For static provisioning, the Amazon EFS file system needs to be created manually on AWS first. After that, it can be mounted inside a container as a volume using the driver. The following CSI interfaces are implemented: * Controller Service: CreateVolume, DeleteVolume, ControllerGetCapabilities, ValidateVolumeCapabilities @@ -38,7 +38,7 @@ The following CSI interfaces are implemented: | basePath | | | true | Path under which access points for dynamic provisioning are created. If this parameter is not specified, access points are created under the root directory of the file system | | az | | "" | true | Used for cross-account mount. `az` under storage class parameter is optional. If specified, the mount target associated with the az will be used for cross-account mount. If not specified, a random mount target will be picked for cross-account mount | -**Notes**: +**Note** * Custom Posix group Id range for Access Point root directory must include both `gidRangeStart` and `gidRangeEnd` parameters. These parameters are optional only if both are omitted. If you specify one, the other becomes mandatory. * When using a custom Posix group ID range, there is a possibility for the driver to run out of available POSIX group Ids. We suggest ensuring the custom group ID range is large enough, or creating a new storage class with a new file system to provision additional volumes. * `az` under storage class parameter is not to be confused with the efs-utils mount option `az`. The `az` mount option is used for cross-az mount or EFS One Zone file system mount within the same AWS account as the cluster. 
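The parameters above can be combined in a storage class manifest along these lines (a sketch, not a drop-in file: the file system ID and the GID range are placeholder values, and `provisioningMode`, `fileSystemId`, and `directoryPerms` follow the dynamic provisioning example):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap            # dynamic provisioning creates one access point per PV
  fileSystemId: fs-0123456789abcdef0  # placeholder: an existing Amazon EFS file system ID
  directoryPerms: "700"
  gidRangeStart: "1000"               # optional, but if set, gidRangeEnd becomes mandatory
  gidRangeEnd: "2000"
  basePath: "/dynamic_provisioning"   # optional: access points are created under this path
```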
@@ -47,27 +47,23 @@ The following CSI interfaces are implemented: * The uid/gid configured on the access point is either the uid/gid specified in the storage class, a value in the gidRangeStart-gidRangeEnd (used as both uid/gid) specified in the storage class, or is a value selected by the driver is no uid/gid or gidRange is specified. * We suggest using [static provisioning](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/examples/kubernetes/static_provisioning/README.md) if you do not wish to use user identity enforcement. -**Note** - -If you want to pass any other mountOptions to EFS CSI driver while mounting, they can be passed in through the Persistent Volume or the Storage Class objects, depending on whether static or dynamic provisioning is used. -Examples of some mountOptions that can be passed: - -**lookupcache**: Specifies how the kernel manages its cache of directory entries for a given mount point. Mode can be one of all, none, pos, or positive. Each mode has different functions and for more information you can refer to this [link](https://linux.die.net/man/5/nfs). - -**iam**: Use the CSI Node Pod's IAM identity to authenticate with EFS. +If you want to pass any other mountOptions to Amazon EFS CSI driver while mounting, they can be passed in through the Persistent Volume or the Storage Class objects, depending on whether static or dynamic provisioning is used. The following are examples of some mountOptions that can be passed: +* **lookupcache**: Specifies how the kernel manages its cache of directory entries for a given mount point. Mode can be one of all, none, pos, or positive. Each mode has different functions and for more information you can refer to this [link](https://linux.die.net/man/5/nfs). +* **iam**: Use the CSI Node Pod's IAM identity to authenticate with Amazon EFS. 
### Encryption In Transit -One of the advantages of using EFS is that it provides [encryption in transit](https://aws.amazon.com/blogs/aws/new-encryption-of-data-in-transit-for-amazon-efs/) support using TLS. Using encryption in transit, data will be encrypted during its transition over the network to the EFS service. This provides an extra layer of defence-in-depth for applications that requires strict security compliance. +One of the advantages of using Amazon EFS is that it provides [encryption in transit](https://aws.amazon.com/blogs/aws/new-encryption-of-data-in-transit-for-amazon-efs/) support using TLS. Using encryption in transit, data will be encrypted during its transition over the network to the Amazon EFS service. This provides an extra layer of defence-in-depth for applications that require strict security compliance. -Encryption in transit is enabled by default in the master branch version of the driver. To disable it and mount volumes using plain NFSv4, set `volumeAttributes` field `encryptInTransit` to `"false"` in your persistent volume manifest. For an example manifest, see [Encryption in Transit Example](../examples/kubernetes/encryption_in_transit/specs/pv.yaml). +Encryption in transit is enabled by default in the master branch version of the driver. To disable it and mount volumes using plain NFSv4, set the `volumeAttributes` field `encryptInTransit` to `"false"` in your persistent volume manifest. For an example manifest, see the [encryption in transit example](../examples/kubernetes/encryption_in_transit/specs/pv.yaml). -**Note** Kubernetes version 1.13+ is required if you are using this feature in Kubernetes. +**Note** +Kubernetes version 1.13 or later is required if you are using this feature in Kubernetes. -## EFS CSI Driver on Kubernetes -The following sections are Kubernetes specific. If you are a Kubernetes user, use this for driver features, installation steps and examples. 
+## Amazon EFS CSI Driver on Kubernetes +The following sections are Kubernetes specific. If you are a Kubernetes user, use this for driver features, installation steps, and examples. ### Kubernetes Version Compatibility Matrix -| AWS EFS CSI Driver \ Kubernetes Version | maturity | v1.11 | v1.12 | v1.13 | v1.14 | v1.15 | v1.16 | v1.17+ | +| Amazon EFS CSI Driver \ Kubernetes Version | maturity | v1.11 | v1.12 | v1.13 | v1.14 | v1.15 | v1.16 | v1.17+ | |-----------------------------------------|----------|-------|-------|-------|-------|-------|-------|-------| | master branch | GA | no | no | no | no | no | no | yes | | v1.5.x | GA | no | no | no | no | no | no | yes | | | | | | | | | | @@ -81,7 +77,7 @@ The following sections are Kubernetes specific. If you are a Kubernetes user, us | v0.1.0 | alpha | yes | yes | yes | no | no | no | no | ### Container Images -| EFS CSI Driver Version | Image | +| Amazon EFS CSI Driver Version | Image | |------------------------|----------------------------------| | master branch | amazon/aws-efs-csi-driver:master | | v1.5.8 | amazon/aws-efs-csi-driver:v1.5.8 | @@ -126,50 +122,183 @@ The following sections are Kubernetes specific. If you are a Kubernetes user, us |----------------|-------------------------------------------------------------------------------| | v1.5.0 | public.ecr.aws/efs-csi-driver/amazon/aws-efs-csi-driver:v1.5.0 | -#### Note : You can find previous efs-csi-driver versions' images from [here](https://gallery.ecr.aws/efs-csi-driver/amazon/aws-efs-csi-driver) +**Note** +You can find images for previous efs-csi-driver versions [here](https://gallery.ecr.aws/efs-csi-driver/amazon/aws-efs-csi-driver). ### Features -* Static provisioning - EFS file system needs to be created manually first, then it could be mounted inside container as a persistent volume (PV) using the driver. -* Dynamic provisioning - Uses a persistent volume claim (PVC) to dynamically provision a persistent volume (PV). 
On Creating a PVC, kuberenetes requests EFS to create an Access Point in a file system which will be used to mount the PV. +* Static provisioning - The Amazon EFS file system needs to be created manually first, then it can be mounted inside a container as a persistent volume (PV) using the driver. +* Dynamic provisioning - Uses a persistent volume claim (PVC) to dynamically provision a persistent volume (PV). On creating a PVC, Kubernetes requests Amazon EFS to create an access point in a file system, which will be used to mount the PV. * Mount Options - Mount options can be specified in the persistent volume (PV) or storage class for dynamic provisioning to define how the volume should be mounted. -* Encryption of data in transit - EFS file systems are mounted with encryption in transit enabled by default in the master branch version of the driver. -* Cross account mount - EFS file systems from different aws accounts can be mounted from an EKS cluster. -* Multiarch - EFS CSI driver image is now multiarch on ECR +* Encryption of data in transit - Amazon EFS file systems are mounted with encryption in transit enabled by default in the master branch version of the driver. +* Cross account mount - Amazon EFS file systems from different AWS accounts can be mounted from an Amazon EKS cluster. +* Multiarch - The Amazon EFS CSI driver image is now multiarch on ECR. -**Notes**: -* Since EFS is an elastic file system it doesn't really enforce any file system capacity. The actual storage capacity value in persistent volume and persistent volume claim is not used when creating the file system. However, since the storage capacity is a required field by Kubernetes, you must specify the value and you can use any valid value for the capacity. +**Note** +Since Amazon EFS is an elastic file system, it doesn't really enforce any file system capacity. The actual storage capacity value in persistent volume and persistent volume claim is not used when creating the file system. 
However, since the storage capacity is a required field by Kubernetes, you must specify the value and you can use any valid value for the capacity. ### Installation -#### Set up driver permission: + +**Considerations** ++ The Amazon EFS CSI Driver isn't compatible with Windows\-based container images. ++ You can't use dynamic persistent volume provisioning with Fargate nodes, but you can use static provisioning. ++ Dynamic provisioning requires `1.2` or later of the driver. You can statically provision persistent volumes using version `1.1` of the driver on any [supported Amazon EKS cluster version](https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html). ++ Version `1.3.2` or later of this driver supports the Arm64 architecture, including Amazon EC2 Graviton\-based instances. ++ Version `1.4.2` or later of this driver supports using FIPS for mounting file systems. For more information on how to enable FIPS, see [Helm](#-helm-). ++ Take note of the resource quotas for Amazon EFS. For example, there's a quota of 1000 access points that can be created for each Amazon EFS file system. For more information, see [https://docs.aws.amazon.com/efs/latest/ug/limits.html#limits-efs-resources-per-account-per-region](https://docs.aws.amazon.com/efs/latest/ug/limits.html#limits-efs-resources-per-account-per-region). + +**Prerequisites** ++ An existing AWS Identity and Access Management \(IAM\) OpenID Connect \(OIDC\) provider for your cluster. To determine whether you already have one, or to create one, see [Creating an IAM OIDC provider for your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html). ++ The AWS CLI installed and configured on your device or AWS CloudShell. 
To install the latest version, see [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with `aws configure`](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the AWS Command Line Interface User Guide. The AWS CLI version installed in the AWS CloudShell may also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the AWS CloudShell User Guide. ++ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. To install or upgrade `kubectl`, see [Installing or updating `kubectl`](https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html). + +**Note** +A Pod running on AWS Fargate automatically mounts an Amazon EFS file system, without needing the manual driver installation steps described on this page. + +#### Set up driver permission The driver requires IAM permission to talk to Amazon EFS to manage the volume on the user's behalf. There are several methods to grant the driver IAM permission: -* Using IAM Role for Service Account (Recommended if you're using EKS): create an [IAM Role for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) with the [required permissions](./iam-policy-example.json). Uncomment annotations and put the IAM role ARN in [service-account manifest](../deploy/kubernetes/base/controller-serviceaccount.yaml) -* Using IAM [instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) - grant all the worker nodes with [required permissions](./iam-policy-example.json) by attaching policy to the instance profile of the worker. 
+* Using IAM role for service account (recommended if you're using Amazon EKS) – Create an [IAM Role for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) with the required permissions in [iam-policy-example.json](./iam-policy-example.json). Uncomment annotations and put the IAM role ARN in the [service-account manifest](../deploy/kubernetes/base/controller-serviceaccount.yaml). For example steps, see [Create an IAM policy and role for Amazon EKS](./iam-policy-create.md). +* Using IAM [instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) – Grant all the worker nodes the [required permissions](./iam-policy-example.json) by attaching the policy to the instance profile of the worker nodes. -#### Deploy the driver: +------ -If you want to deploy the stable driver: -```sh -kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.5" -``` +#### Deploy the driver -If you want to deploy the development driver: -```sh -kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/dev/?ref=master" -``` +There are several options for deploying the driver. The following are some examples. -Alternatively, you could also install the driver using helm: -```sh -helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/ -helm repo update -helm upgrade --install aws-efs-csi-driver --namespace kube-system aws-efs-csi-driver/aws-efs-csi-driver -``` +------ +##### [ Helm ] -To force the efs-csi-driver to use FIPS, you can add an argument to the helm upgrade command: -``` -helm upgrade --install aws-efs-csi-driver --namespace kube-system aws-efs-csi-driver/aws-efs-csi-driver --set useFips=true -``` -**Notes**: -* `hostNetwork: true` (should be added under spec/deployment on kubernetes installations where AWS metadata is not reachable from pod network. 
To fix the following error `NoCredentialProviders: no valid providers in chain` this parameter should be added.) +This procedure requires Helm V3 or later. To install or upgrade Helm, see [Using Helm with Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/helm.html). + +**To install the driver using Helm** + +1. Add the Helm repo. + + ```sh + helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/ + ``` + +1. Update the repo. + + ```sh + helm repo update aws-efs-csi-driver + ``` + +1. Install a release of the driver using the Helm chart. + + ```sh + helm upgrade --install aws-efs-csi-driver --namespace kube-system aws-efs-csi-driver/aws-efs-csi-driver + ``` + + To specify an image repository, add the following argument. Replace the repository address with the cluster's [container image address](https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html). + ```sh + --set image.repository=602401143452.dkr.ecr.region-code.amazonaws.com/eks/aws-efs-csi-driver + ``` + + If you already created a service account by following [Create an IAM policy and role for Amazon EKS](./iam-policy-create.md), then add the following arguments. + ```sh + --set controller.serviceAccount.create=false \ + --set controller.serviceAccount.name=efs-csi-controller-sa + ``` + + If you don't have outbound access to the Internet, add the following arguments. + ```sh + --set sidecars.livenessProbe.image.repository=602401143452.dkr.ecr.region-code.amazonaws.com/eks/livenessprobe \ + --set sidecars.node-driver-registrar.image.repository=602401143452.dkr.ecr.region-code.amazonaws.com/eks/csi-node-driver-registrar \ + --set sidecars.csiProvisioner.image.repository=602401143452.dkr.ecr.region-code.amazonaws.com/eks/csi-provisioner + ``` + + To force the Amazon EFS CSI driver to use FIPS for mounting the file system, add the following argument. 
+ ```sh + --set useFips=true + ``` +**Note** +Add `hostNetwork: true` under the deployment spec on Kubernetes installations where the AWS instance metadata service is not reachable from the Pod network. Adding this parameter fixes the error `NoCredentialProviders: no valid providers in chain`. + +------ +##### [ Manifest \(private registry\) ] + +If you want to download the image with a manifest, we recommend first trying these steps to pull secured images from the private Amazon ECR registry. + +**To install the driver using images stored in the private Amazon ECR registry** + +1. Download the manifest. Replace `release-1.X` with your desired branch. We recommend using the latest released version. For a list of active branches, see [Branches](../../../branches/active). + + ```sh + kubectl kustomize \ + "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.X" > private-ecr-driver.yaml + ``` + **Note** + If you encounter an issue that you aren't able to resolve by adding IAM permissions, try the [Manifest \(public registry\)](#-manifest-public-registry-) steps instead. + +1. In the following command, replace `region-code` with the AWS Region that your cluster is in. Then run the modified command to replace `us-west-2` in the file with your AWS Region. + + ```sh + sed -i.bak -e 's|us-west-2|region-code|' private-ecr-driver.yaml + ``` + +1. Replace `account` in the following command with the account from [Amazon container image registries](https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html) for the AWS Region that your cluster is in and then run the modified command to replace `602401143452` in the file. + + ```sh + sed -i.bak -e 's|602401143452|account|' private-ecr-driver.yaml + ``` + +1. If you already created a service account by following [Create an IAM policy and role for Amazon EKS](./iam-policy-create.md), then edit the `private-ecr-driver.yaml` file. Remove the following lines that create a Kubernetes service account. 
+ + ``` + apiVersion: v1 + kind: ServiceAccount + metadata: + labels: + app.kubernetes.io/name: aws-efs-csi-driver + name: efs-csi-controller-sa + namespace: kube-system + --- + ``` + +1. Apply the manifest. + + ```sh + kubectl apply -f private-ecr-driver.yaml + ``` + +------ +##### [ Manifest \(public registry\) ] + +For some situations, you may not be able to add the necessary IAM permissions to pull from the private Amazon ECR registry. One example of this scenario is if your [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html) isn't allowed to authenticate with someone else's account. When this is true, you can use the public Amazon ECR registry. + +**To install the driver using images stored in the public Amazon ECR registry** + +1. Download the manifest. Replace `release-1.X` with your desired branch. We recommend using the latest released version. For a list of active branches, see [Branches](../../../branches/active). + + ```sh + kubectl kustomize \ + "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.X" > public-ecr-driver.yaml + ``` + +1. If you already created a service account by following [Create an IAM policy and role](./iam-policy-create.md), then edit the `public-ecr-driver.yaml` file. Remove the following lines that create a Kubernetes service account. + + ``` + apiVersion: v1 + kind: ServiceAccount + metadata: + labels: + app.kubernetes.io/name: aws-efs-csi-driver + name: efs-csi-controller-sa + namespace: kube-system + --- + ``` + +1. Apply the manifest. 
+ + ```sh + kubectl apply -f public-ecr-driver.yaml + ``` +------ + +After deploying the driver, you can continue to these sections: +* [Create an Amazon EFS file system for Amazon EKS](./efs-create-filesystem.md) +* [Examples](#examples) ### Container Arguments for efs-plugin of efs-csi-node daemonset | Parameters | Values | Default | Optional | Description | @@ -177,13 +306,13 @@ helm upgrade --install aws-efs-csi-driver --namespace kube-system aws-efs-csi-dr | vol-metrics-opt-in | | false | true | Opt in to emit volume metrics. | | vol-metrics-refresh-period | | 240 | true | Refresh period for volume metrics in minutes. | | vol-metrics-fs-rate-limit | | 5 | true | Volume metrics routines rate limiter per file system. | -| tags | | | true | Space separated key:value pairs which will be added as tags for EFS resources. For example, '--tags=name:efs-tag-test date:Jan24' | +| tags | | | true | Space separated key:value pairs which will be added as tags for Amazon EFS resources. For example, '--tags=name:efs-tag-test date:Jan24' | ### Container Arguments for deployment(controller) | Parameters | Values | Default | Optional | Description | |-----------------------------|--------|---------|----------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | delete-access-point-root-dir| | false | true | Opt in to delete access point root directory by DeleteVolume. By default, DeleteVolume will delete the access point behind Persistent Volume and deleting access point will not delete the access point root directory or its contents. 
| -### Upgrading the EFS CSI Driver +### Upgrading the Amazon EFS CSI Driver #### Upgrade to the latest version: @@ -204,24 +333,24 @@ kubectl apply -f driver.yaml ``` ### Examples -Before the example, you need to: -* Get yourself familiar with how to setup Kubernetes on AWS and how to [create EFS file system](https://docs.aws.amazon.com/efs/latest/ug/getting-started.html). -* When creating EFS file system, make sure it is accessible from Kubernetes cluster. This can be achieved by creating the file system inside the same VPC as Kubernetes cluster or using VPC peering. -* Install EFS CSI driver following the [Installation](README.md#Installation) steps. +Before following the examples, you need to: +* Familiarize yourself with how to set up Kubernetes on AWS and how to [create an Amazon EFS file system](https://docs.aws.amazon.com/efs/latest/ug/getting-started.html). +* When creating an Amazon EFS file system, make sure it is accessible from the Kubernetes cluster. This can be achieved by creating the file system inside the same VPC as the Kubernetes cluster or using VPC peering. +* Install the Amazon EFS CSI driver following the [Installation](README.md#Installation) steps. 
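Once the driver is installed, a minimal claim like the following can serve as a smoke test before working through the linked examples (a sketch that assumes a storage class named `efs-sc` has already been created, as in the dynamic provisioning example):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc  # assumed to exist already
  resources:
    requests:
      storage: 5Gi          # required field; Amazon EFS doesn't enforce the value
```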
#### Example links * [Static provisioning](../examples/kubernetes/static_provisioning/README.md) * [Dynamic provisioning](../examples/kubernetes/dynamic_provisioning/README.md) * [Encryption in transit](../examples/kubernetes/encryption_in_transit/README.md) * [Accessing the file system from multiple pods](../examples/kubernetes/multiple_pods/README.md) -* [Consume EFS in StatefulSets](../examples/kubernetes/statefulset/README.md) +* [Consume Amazon EFS in StatefulSets](../examples/kubernetes/statefulset/README.md) * [Mount subpath](../examples/kubernetes/volume_path/README.md) * [Use Access Points](../examples/kubernetes/access_points/README.md) ## Using botocore to retrieve the mount target IP address when the DNS name cannot be resolved -* EFS CSI driver supports using botocore to retrieve mount target ip address when dns name cannot be resolved, e.g., when user is mounting a file system in another VPC, botocore comes preinstalled on efs-csi-driver which can solve this DNS issue. +* The Amazon EFS CSI driver supports using botocore to retrieve the mount target IP address when the DNS name cannot be resolved, for example, when a user is mounting a file system in another VPC. Botocore comes preinstalled on the efs-csi-driver image, which can solve this DNS issue. * IAM policy prerequisites to use this feature: - Allow ```elasticfilesystem:DescribeMountTargets``` and ```ec2:DescribeAvailabilityZones``` actions in your policy attached to the EKS service account role, refer to example policy [here](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/iam-policy-example.json#L9-L10). + Allow ```elasticfilesystem:DescribeMountTargets``` and ```ec2:DescribeAvailabilityZones``` actions in your policy attached to the Amazon EKS service account role; refer to the example policy [here](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/docs/iam-policy-example.json#L9-L10). 
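Concretely, a policy statement granting those two actions might look like the following fragment, modeled on the linked example policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticfilesystem:DescribeMountTargets",
        "ec2:DescribeAvailabilityZones"
      ],
      "Resource": "*"
    }
  ]
}
```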
## Development * Please go through [CSI Spec](https://github.com/container-storage-interface/spec/blob/master/spec.md) and [Kubernetes CSI Developer Documentation](https://kubernetes-csi.github.io/docs) to get some basic understanding of CSI driver before you start. diff --git a/docs/efs-create-filesystem.md b/docs/efs-create-filesystem.md new file mode 100644 index 000000000..b307fbf31 --- /dev/null +++ b/docs/efs-create-filesystem.md @@ -0,0 +1,114 @@ + +## Create an Amazon EFS file system for Amazon EKS + +This topic gives example steps for creating an Amazon EFS file system for Amazon EKS. You can also refer to [Getting started with Amazon Elastic File System](https://docs.aws.amazon.com/efs/latest/ug/getting-started.html). + +The Amazon EFS CSI driver supports [Amazon EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html), which are application\-specific entry points into an Amazon EFS file system that make it easier to share a file system between multiple Pods. Access points can enforce a user identity for all file system requests that are made through the access point, and enforce a root directory for each Pod. For more information, see [Amazon EFS access points](../examples/kubernetes/access_points/README.md). + +**Important** +You must complete the following steps in the same terminal because variables are set and used across the steps. + +**To create an Amazon EFS file system for your Amazon EKS cluster** + +1. Retrieve the VPC ID that your cluster is in and store it in a variable for use in a later step. Replace `my-cluster` with your cluster name. + + ``` + vpc_id=$(aws eks describe-cluster \ + --name my-cluster \ + --query "cluster.resourcesVpcConfig.vpcId" \ + --output text) + ``` + +1. Retrieve the CIDR range for your cluster's VPC and store it in a variable for use in a later step. Replace `region-code` with the AWS Region that your cluster is in. 
+ + ``` + cidr_range=$(aws ec2 describe-vpcs \ + --vpc-ids $vpc_id \ + --query "Vpcs[].CidrBlock" \ + --output text \ + --region region-code) + ``` + +1. Create a security group with an inbound rule that allows inbound NFS traffic for your Amazon EFS mount points. + + 1. Create a security group. Replace the *`example values`* with your own. + + ``` + security_group_id=$(aws ec2 create-security-group \ + --group-name MyEfsSecurityGroup \ + --description "My EFS security group" \ + --vpc-id $vpc_id \ + --output text) + ``` + + 1. Create an inbound rule that allows inbound NFS traffic from the CIDR for your cluster's VPC. + + ``` + aws ec2 authorize-security-group-ingress \ + --group-id $security_group_id \ + --protocol tcp \ + --port 2049 \ + --cidr $cidr_range + ``` +**Important** +To further restrict access to your file system, you can use the CIDR for your subnet instead of the VPC. + +1. Create an Amazon EFS file system for your Amazon EKS cluster. + + 1. Create a file system. Replace `region-code` with the AWS Region that your cluster is in. + + ``` + file_system_id=$(aws efs create-file-system \ + --region region-code \ + --performance-mode generalPurpose \ + --query 'FileSystemId' \ + --output text) + ``` + + 1. Create mount targets. + + 1. Determine the IP address of your cluster nodes. + + ``` + kubectl get nodes + ``` + + The example output is as follows. + + ``` + NAME STATUS ROLES AGE VERSION + ip-192-168-56-0.region-code.compute.internal Ready 19m v1.XX.X-eks-49a6c0 + ``` + + 1. Determine the IDs of the subnets in your VPC and which Availability Zone the subnet is in. + + ``` + aws ec2 describe-subnets \ + --filters "Name=vpc-id,Values=$vpc_id" \ + --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \ + --output table + ``` + + The example output is as follows. 
+ + ``` + | DescribeSubnets | + +------------------+--------------------+----------------------------+ + | AvailabilityZone | CidrBlock | SubnetId | + +------------------+--------------------+----------------------------+ + | region-codec | 192.168.128.0/19 | subnet-EXAMPLE6e421a0e97 | + | region-codeb | 192.168.96.0/19 | subnet-EXAMPLEd0503db0ec | + | region-codec | 192.168.32.0/19 | subnet-EXAMPLEe2ba886490 | + | region-codeb | 192.168.0.0/19 | subnet-EXAMPLE123c7c5182 | + | region-codea | 192.168.160.0/19 | subnet-EXAMPLE0416ce588p | + +------------------+--------------------+----------------------------+ + ``` + + 1. Add mount targets for the subnets that your nodes are in. From the output in the previous two steps, the cluster has one node with an IP address of `192.168.56.0`. That IP address is within the `CidrBlock` of the subnet with the ID `subnet-EXAMPLEe2ba886490`. As a result, the following command creates a mount target for the subnet the node is in. If there were more nodes in the cluster, you'd run the command once for a subnet in each Availability Zone that you had a node in, replacing `subnet-EXAMPLEe2ba886490` with the appropriate subnet ID. + + ``` + aws efs create-mount-target \ + --file-system-id $file_system_id \ + --subnet-id subnet-EXAMPLEe2ba886490 \ + --security-groups $security_group_id + ``` diff --git a/docs/iam-policy-create.md b/docs/iam-policy-create.md new file mode 100644 index 000000000..00b790793 --- /dev/null +++ b/docs/iam-policy-create.md @@ -0,0 +1,115 @@ +## Create an IAM policy and role for Amazon EKS + +The following steps give an example of using an IAM role for service accounts (IRSA) so that the driver can talk to Amazon EFS. + +1. Create an IAM policy that allows the CSI driver's service account to make calls to AWS APIs on your behalf. + + 1. Download the IAM policy document. + + ```sh + curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/docs/iam-policy-example.json + ``` + + 1. Create the policy.
You can change `EKS_EFS_CSI_Driver_Policy` to a different name, but if you do, make sure to change it in later steps too. + + ```sh + aws iam create-policy \ + --policy-name EKS_EFS_CSI_Driver_Policy \ + --policy-document file://iam-policy-example.json + ``` + +1. Create an IAM role and attach the IAM policy to it. Annotate the Kubernetes service account with the IAM role ARN and the IAM role with the Kubernetes service account name. You can create the role using `eksctl` or the AWS CLI. + +------ +#### [ eksctl ] + + Run the following command to create the IAM role and Kubernetes service account. It also attaches the policy to the role, annotates the Kubernetes service account with the IAM role ARN, and adds the Kubernetes service account name to the trust policy for the IAM role. Replace `my-cluster` with your cluster name and `111122223333` with your account ID. Replace `region-code` with the AWS Region that your cluster is in. If your cluster is in the AWS GovCloud \(US\-East\) or AWS GovCloud \(US\-West\) AWS Regions, then replace `arn:aws:` with `arn:aws-us-gov:`. + + ```sh + eksctl create iamserviceaccount \ + --cluster my-cluster \ + --namespace kube-system \ + --name efs-csi-controller-sa \ + --attach-policy-arn arn:aws:iam::111122223333:policy/EKS_EFS_CSI_Driver_Policy \ + --approve \ + --region region-code + ``` + +------ +#### [ AWS CLI ] + + 1. Determine your cluster's OIDC provider URL. Replace `my-cluster` with your cluster name. If the output from the command is `None`, review the **Prerequisites**. + + ```sh + aws eks describe-cluster --name my-cluster --query "cluster.identity.oidc.issuer" --output text + ``` + + The example output is as follows. + + ``` + https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE + ``` + + 1. Create the IAM role, granting the Kubernetes service account the `AssumeRoleWithWebIdentity` action. + + 1. Copy the following contents to a file named `trust-policy.json`. 
Replace `111122223333` with your account ID. Replace `EXAMPLED539D4633E53DE1B71EXAMPLE` and `region-code` with the values returned in the previous step. If your cluster is in the AWS GovCloud \(US\-East\) or AWS GovCloud \(US\-West\) AWS Regions, then replace `arn:aws:` with `arn:aws-us-gov:`. + + ``` + { + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE" + }, + "Action": "sts:AssumeRoleWithWebIdentity", + "Condition": { + "StringEquals": { + "oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa" + } + } + } + ] + } + ``` + + 1. Create the role. You can change `EKS_EFS_CSI_DriverRole` to a different name, but if you do, make sure to change it in later steps too. + + ```sh + aws iam create-role \ + --role-name EKS_EFS_CSI_DriverRole \ + --assume-role-policy-document file://"trust-policy.json" + ``` + + 1. Attach the IAM policy to the role with the following command. Replace `111122223333` with your account ID. If your cluster is in the AWS GovCloud \(US\-East\) or AWS GovCloud \(US\-West\) AWS Regions, then replace `arn:aws:` with `arn:aws-us-gov:`. + + ```sh + aws iam attach-role-policy \ + --policy-arn arn:aws:iam::111122223333:policy/EKS_EFS_CSI_Driver_Policy \ + --role-name EKS_EFS_CSI_DriverRole + ``` + + 1. Create a Kubernetes service account that's annotated with the ARN of the IAM role that you created. + + 1. Save the following contents to a file named `efs-service-account.yaml`. Replace `111122223333` with your account ID. If your cluster is in the AWS GovCloud \(US\-East\) or AWS GovCloud \(US\-West\) AWS Regions, then replace `arn:aws:` with `arn:aws-us-gov:`. 
+ + ``` + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + labels: + app.kubernetes.io/name: aws-efs-csi-driver + name: efs-csi-controller-sa + namespace: kube-system + annotations: + eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/EKS_EFS_CSI_DriverRole + ``` + + 1. Create the Kubernetes service account on your cluster. The Kubernetes service account named `efs-csi-controller-sa` is annotated with the IAM role that you created named `EKS_EFS_CSI_DriverRole`. + + ```sh + kubectl apply -f efs-service-account.yaml + ``` +------ diff --git a/examples/kubernetes/dynamic_provisioning/README.md b/examples/kubernetes/dynamic_provisioning/README.md index 35ec5e483..c76b8092b 100644 --- a/examples/kubernetes/dynamic_provisioning/README.md +++ b/examples/kubernetes/dynamic_provisioning/README.md @@ -1,51 +1,155 @@ ## Dynamic Provisioning -This example shows how to create a dynamically provisioned volume created through [EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) and Persistent Volume Claim (PVC) and consume it from a pod. - -**Note**: this example requires Kubernetes v1.17+ and driver version >= 1.2.0. - -### Edit [StorageClass](./specs/storageclass.yaml) - -``` -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: efs-sc -provisioner: efs.csi.aws.com -mountOptions: - - tls -parameters: - provisioningMode: efs-ap - fileSystemId: fs-92107410 - directoryPerms: "700" - gidRangeStart: "1000" - gidRangeEnd: "2000" - basePath: "/dynamic_provisioning" -``` -* provisioningMode - The type of volume to be provisioned by efs. Currently, only access point based provisioning is supported `efs-ap`. -* fileSystemId - The file system under which Access Point is created. -* directoryPerms - Directory Permissions of the root directory created by Access Point. -* gidRangeStart (Optional) - Starting range of Posix Group ID to be applied onto the root directory of the access point. Default value is 50000. 
-* gidRangeEnd (Optional) - Ending range of Posix Group ID. Default value is 7000000. -* basePath (Optional) - Path on the file system under which access point root directory is created. If path is not provided, access points root directory are created under the root of the file system. - -### Deploy the Example -Create storage class, persistent volume claim (PVC) and the pod which consumes PV: -```sh ->> kubectl apply -f examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml ->> kubectl apply -f examples/kubernetes/dynamic_provisioning/specs/pod.yaml -``` - -### Check EFS filesystem is used -After the objects are created, verify that pod is running: - -```sh ->> kubectl get pods -``` - -Also you can verify that data is written onto EFS filesystem: - -```sh ->> kubectl exec -ti efs-app -- tail -f /data/out -``` -### Note: -When you want to delete an access point in a file system when deleting PVC, you should specify `elasticfilesystem:ClientRootAccess` to the file system access policy to provide the root permissions. \ No newline at end of file +**Important** +You can't use dynamic provisioning with Fargate nodes. + +This example shows how to create a dynamically provisioned volume created through [Amazon EFS access points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) and a persistent volume claim (PVC) that's consumed by a Pod. + +**Prerequisite** +This example requires Kubernetes 1.17 or later and a driver version of 1.2.0 or later. + +1. Create a storage class for Amazon EFS. + + 1. Retrieve your Amazon EFS file system ID. You can find this in the Amazon EFS console, or use the following AWS CLI command. + + ```sh + aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text + ``` + + The example output is as follows. + + ``` + fs-582a03f3 + ``` + + 1. Download a `StorageClass` manifest for Amazon EFS. 
+ + ```sh + curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/storageclass.yaml + ``` + + 1. Edit [the file](./specs/storageclass.yaml). Find the following line, and replace the value for `fileSystemId` with your file system ID. + + ``` + fileSystemId: fs-582a03f3 + ``` + Modify the other values as needed: + * `provisioningMode` - The type of volume to be provisioned by Amazon EFS. Currently, only access point-based provisioning (`efs-ap`) is supported. + * `fileSystemId` - The file system under which the access point is created. + * `directoryPerms` - The directory permissions of the root directory created by the access point. + * `gidRangeStart` (Optional) - The starting range of the POSIX group ID to be applied onto the root directory of the access point. The default value is `50000`. + * `gidRangeEnd` (Optional) - The ending range of the POSIX group ID. The default value is `7000000`. + * `basePath` (Optional) - The path on the file system under which the access point root directory is created. If the path isn't provided, the access point's root directory is created under the root of the file system. + + 1. Deploy the storage class. + + ```sh + kubectl apply -f storageclass.yaml + ``` + +1. Test automatic provisioning by deploying a Pod that makes use of the PVC: + + 1. Download a manifest that deploys a Pod and a PVC. + + ```sh + curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-efs-csi-driver/master/examples/kubernetes/dynamic_provisioning/specs/pod.yaml + ``` + + 1. Deploy the Pod with a sample app and the PVC used by the Pod. + + ```sh + kubectl apply -f pod.yaml + ``` + +1. Determine the names of the Pods running the controller. + + ```sh + kubectl get pods -n kube-system | grep efs-csi-controller + ``` + + The example output is as follows.
+ + ``` + efs-csi-controller-74ccf9f566-q5989 3/3 Running 0 40m + efs-csi-controller-74ccf9f566-wswg9 3/3 Running 0 40m + ``` + +1. After a few seconds, you can observe the controller picking up the change \(edited for readability\). Replace `74ccf9f566-q5989` with a value from one of the Pods in your output from the previous command. + + ```sh + kubectl logs efs-csi-controller-74ccf9f566-q5989 \ + -n kube-system \ + -c csi-provisioner \ + --tail 10 + ``` + + The example output is as follows. + + ``` + [...] + 1 controller.go:737] successfully created PV pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca for PVC efs-claim and csi volume name fs-95bcec92::fsap-02a88145b865d3a87 + ``` + + If you don't see the previous output, run the previous command using one of the other controller Pods. + +1. Confirm that a persistent volume was created with a status of `Bound` to a `PersistentVolumeClaim`: + + ```sh + kubectl get pv + ``` + + The example output is as follows. + + ``` + NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE + pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca 20Gi RWX Delete Bound default/efs-claim efs-sc 7m57s + ``` + +1. View details about the `PersistentVolumeClaim` that was created. + + ```sh + kubectl get pvc + ``` + + The example output is as follows. + + ``` + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + efs-claim Bound pvc-5983ffec-96cf-40c1-9cd6-e5686ca84eca 20Gi RWX efs-sc 9m7s + ``` + +1. View the sample app Pod's status until the `STATUS` becomes `Running`. + + ```sh + kubectl get pods -o wide + ``` + + The example output is as follows.
+ + ``` + NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES + efs-app 1/1 Running 0 10m 192.168.78.156 ip-192-168-73-191.region-code.compute.internal + ``` +**Note** +If a Pod doesn't have an IP address listed, make sure that you added a mount target for the subnet that your node is in \(as described at the end of [Create an Amazon EFS file system](../../../docs/efs-create-filesystem.md)\). Otherwise, the Pod won't leave the `ContainerCreating` status. When an IP address is listed, it may take a few minutes for a Pod to reach the `Running` status. + +1. Confirm that the data is written to the volume. + + ```sh + kubectl exec efs-app -- bash -c "cat data/out" + ``` + + The example output is as follows. + + ``` + [...] + Tue Mar 23 14:29:16 UTC 2021 + Tue Mar 23 14:29:21 UTC 2021 + Tue Mar 23 14:29:26 UTC 2021 + Tue Mar 23 14:29:31 UTC 2021 + [...] + ``` + +1. \(Optional\) Terminate the Amazon EKS node that your Pod is running on and wait for the Pod to be re\-scheduled. Alternatively, you can delete the Pod and redeploy it. Then repeat the previous step and confirm that the output still includes the earlier entries. + +**Note** +If you want an access point to be deleted from the file system when you delete its PVC, the file system's access policy must allow the `elasticfilesystem:ClientRootAccess` action to provide root permissions. diff --git a/examples/kubernetes/multiple_pods/README.md b/examples/kubernetes/multiple_pods/README.md index 87d29cd69..0d1ca5abc 100644 --- a/examples/kubernetes/multiple_pods/README.md +++ b/examples/kubernetes/multiple_pods/README.md @@ -1,55 +1,165 @@ ## Multiple Pods Read Write Many -This example shows how to create a static provisioned EFS persistence volume (PV) and access it from multiple pods with RWX access mode.
- -### Edit Persistent Volume -Edit persistent volume using sample [spec](./specs/pv.yaml): -``` -apiVersion: v1 -kind: PersistentVolume -metadata: - name: efs-pv -spec: - capacity: - storage: 5Gi - volumeMode: Filesystem - accessModes: - - ReadWriteMany - persistentVolumeReclaimPolicy: Retain - storageClassName: efs-sc - csi: - driver: efs.csi.aws.com - volumeHandle: [FileSystemId] -``` -Replace `volumeHandle` value with `FileSystemId` of the EFS filesystem that needs to be mounted. Note that the access mode is `RWX` which means the PV can be read and written from multiple pods. - -You can get `FileSystemId` using AWS CLI: - -```sh ->> aws efs describe-file-systems --query "FileSystems[*].FileSystemId" -``` - -### Deploy the Example Application -Create PV, persistence volume claim (PVC), storageclass and the pods that consume the PV: -```sh -kubectl apply -f examples/kubernetes/multiple_pods/specs/storageclass.yaml -kubectl apply -f examples/kubernetes/multiple_pods/specs/pv.yaml -kubectl apply -f examples/kubernetes/multiple_pods/specs/claim.yaml -kubectl apply -f examples/kubernetes/multiple_pods/specs/pod1.yaml -kubectl apply -f examples/kubernetes/multiple_pods/specs/pod2.yaml -``` - -In the example, both pod1 and pod2 are writing to the same EFS filesystem at the same time. - -### Check the Application uses EFS filesystem -After the objects are created, verify that pod is running: - -```sh ->> kubectl get pods -``` - -Also verify that data is written onto EFS filesystem from both pods: - -```sh ->> kubectl exec -ti app1 -- tail -f /data/out1.txt ->> kubectl exec -ti app2 -- tail -f /data/out2.txt -``` +This example shows how to create a statically provisioned Amazon EFS persistent volume (PV) and access it from multiple pods with the `ReadWriteMany` (RWX) access mode. This mode allows the PV to be read and written from multiple pods. + +1.
Clone the [Amazon EFS Container Storage Interface \(CSI\) driver](https://github.com/kubernetes-sigs/aws-efs-csi-driver) GitHub repository to your local system. + + ```sh + git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git + ``` + +1. Navigate to the `multiple_pods` example directory. + + ```sh + cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/ + ``` + +1. Retrieve your Amazon EFS file system ID. You can find this in the Amazon EFS console, or use the following AWS CLI command. + + ```sh + aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text + ``` + + The example output is as follows. + + ``` + fs-582a03f3 + ``` + +1. Edit the [`specs/pv.yaml`](./specs/pv.yaml) file and replace the `volumeHandle` value with your Amazon EFS file system ID. + + ``` + apiVersion: v1 + kind: PersistentVolume + metadata: + name: efs-pv + spec: + capacity: + storage: 5Gi + volumeMode: Filesystem + accessModes: + - ReadWriteMany + persistentVolumeReclaimPolicy: Retain + storageClassName: efs-sc + csi: + driver: efs.csi.aws.com + volumeHandle: fs-582a03f3 + ``` +**Note** +`spec.capacity` is ignored by the Amazon EFS CSI driver because Amazon EFS is an elastic file system. The actual storage capacity value in persistent volumes and persistent volume claims isn't used when creating the file system. However, because storage capacity is a required field in Kubernetes, you must specify a valid value, such as `5Gi` in this example. This value doesn't limit the size of your Amazon EFS file system. + + + +1. Deploy the `efs-pv` PV, the `efs-claim` PVC, and the `efs-sc` storage class from the `specs` directory. + + ```sh + kubectl apply -f specs/pv.yaml + kubectl apply -f specs/claim.yaml + kubectl apply -f specs/storageclass.yaml + ``` + +1. List the persistent volumes in the default namespace. Look for a persistent volume with the `default/efs-claim` claim. + + ```sh + kubectl get pv -w + ``` + + The example output is as follows.
+ + ``` + NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE + efs-pv 5Gi RWX Retain Bound default/efs-claim efs-sc 2m50s + ``` + + Don't proceed to the next step until the `STATUS` is `Bound`. + +1. Deploy the `app1` and `app2` sample applications from the `specs` directory. Both Pods \(defined in `pod1.yaml` and `pod2.yaml`\) consume the PV and write to the same Amazon EFS file system at the same time. + + ```sh + kubectl apply -f specs/pod1.yaml + kubectl apply -f specs/pod2.yaml + ``` + +1. Watch the Pods in the default namespace and wait for the `app1` and `app2` Pods' `STATUS` to become `Running`. + + ```sh + kubectl get pods --watch + ``` +**Note** +It may take a few minutes for the Pods to reach the `Running` status. + +1. Describe the persistent volume. + + ```sh + kubectl describe pv efs-pv + ``` + + The example output is as follows. + + ``` + Name: efs-pv + Labels: none + Annotations: kubectl.kubernetes.io/last-applied-configuration: + {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capaci... + pv.kubernetes.io/bound-by-controller: yes + Finalizers: [kubernetes.io/pv-protection] + StorageClass: efs-sc + Status: Bound + Claim: default/efs-claim + Reclaim Policy: Retain + Access Modes: RWX + VolumeMode: Filesystem + Capacity: 5Gi + Node Affinity: none + Message: + Source: + Type: CSI (a Container Storage Interface (CSI) volume source) + Driver: efs.csi.aws.com + VolumeHandle: fs-582a03f3 + ReadOnly: false + VolumeAttributes: none + Events: none + ``` + + The Amazon EFS file system ID is listed as the `VolumeHandle`. + +1. Verify that the `app1` Pod is successfully writing data to the volume. + + ```sh + kubectl exec -ti app1 -- tail -f /data/out1.txt + ``` + + The example output is as follows. + + ``` + [...] + Mon Mar 22 18:18:22 UTC 2021 + Mon Mar 22 18:18:27 UTC 2021 + Mon Mar 22 18:18:32 UTC 2021 + Mon Mar 22 18:18:37 UTC 2021 + [...] + ``` + +1.
Verify that the `app2` Pod shows the same kind of data in the shared volume that `app1` wrote. + + ```sh + kubectl exec -ti app2 -- tail -f /data/out2.txt + ``` + + The example output is as follows. + + ``` + [...] + Mon Mar 22 18:18:22 UTC 2021 + Mon Mar 22 18:18:27 UTC 2021 + Mon Mar 22 18:18:32 UTC 2021 + Mon Mar 22 18:18:37 UTC 2021 + [...] + ``` + +1. When you finish experimenting, delete the resources for this sample application to clean up. + + ```sh + kubectl delete -f specs/ + ``` + + You can also manually delete the file system and security group that you created.
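
As an aside, the mount-target step earlier in this patch \("Add mount targets for the subnets that your nodes are in"\) matches a node IP against each subnet's `CidrBlock` by inspection. That containment check can also be scripted. The sketch below is an illustration only, not part of the driver or its specs: pure Bash arithmetic, no AWS calls, using the example node IP and subnet CIDRs from that step.

```shell
#!/usr/bin/env bash
# Sketch: decide whether a node IP falls inside a subnet's CidrBlock,
# mirroring the by-eye check in the "Add mount targets" step.

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Usage: in_cidr IP CIDR  -> prints "yes" or "no".
in_cidr() {
  local ip net bits mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  if [ $(( ip & mask )) -eq $(( net & mask )) ]; then echo yes; else echo no; fi
}

# The example node IP and the subnet it was matched to; prints "yes".
in_cidr 192.168.56.0 192.168.32.0/19
```

In practice you'd feed it node IPs from `kubectl get nodes -o wide` and subnet CIDRs from the `aws ec2 describe-subnets` output shown earlier, then run `aws efs create-mount-target` once per matching subnet.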