Support access points on the same file system #185
Conversation
Hi @2uasimojo. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
lgtm in current state pending docs/unittests. thanks!
UT and docs are done. I'll start working on the e2e tests now. We can merge this as is and put the e2e in a separate PR, or wait and I'll add it here -- up to you.
/test pull-aws-efs-csi-driver-unit
/lgtm
@jharrington22: changing LGTM is restricted to collaborators. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
BTW, this reverts the doc change in #168.
@@ -6,8 +6,7 @@ In this case, the separation is managed on the EFS side rather than the kubernet

### Create Access Points (in EFS)
Following [this doc](https://docs.aws.amazon.com/efs/latest/ug/create-access-point.html), create a separate access point for each independent data store you wish to expose in your cluster, tailoring the ownership and permissions as desired.

**Note**: Until [issue #167](https://github.com/kubernetes-sigs/aws-efs-csi-driver/issues/167) is resolved, you must use a separate file system for each access point if they are to be mounted in the same pod.
There is no need to use different EFS volumes.
this sentence seems out of place now, i would remove
Well, this is actually the crux of the change. Previously, if you wanted multiple shared stores in the same pod, you had to use separate volumes; and if you didn't, things broke in mysterious ways. Now this is possible, and it may be quite desirable for consumers to avoid the extra overhead.
Is there some way I can rephrase to make that clearer, or somewhere else I can put it so the context is better?
lgtm aside from the example nit. regarding the e2e tests, i am trying to get #173 merged so you can reuse/build upon some of the createEFSPVCPV and such functions. You can rebase your tests on top of that PR or you don't have to use it at all, up to you!
Thank you for the review. I'll fix up the docs ASAP.
Ah, I hadn't noticed that #173 is still open. I'm sure I would have once I started trying to build on it -- which I actually haven't started on yet. That being the case, I'd prefer to get this merged (once fixed up) and work on the e2e tests in (a) separate PR(s).
Expands the supported `volumeHandle` formats to include a three-field version: `{fsid}:{subpath}:{apid}`. This addresses the limitation originally described in kubernetes-sigs#100 whereby k8s relies solely on the `volumeHandle` to uniquely distinguish one PV from another.

As part of this fix, specifying `accesspoint=fsap-...` in `mountOptions` is deprecated. For more details, see the related issue (kubernetes-sigs#167).

The following scenarios were tested in a live environment:

**Conflicting access point in volumeHandle and mountOptions**
- `volumeHandle: fs::ap1`
- `mountOptions: ['tls', 'accesspoint=ap2']`
- expect: fail
- actual: fail with `Warning FailedMount 1s (x4 over 4s) kubelet, ip-10-0-137-122.ec2.internal MountVolume.SetUp failed for volume "pv-aptest-1" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = InvalidArgument desc = Found conflicting access point IDs in mountOptions (fsap-04d3307beebd04739) and volumeHandle (fsap-057e9a9b209ec9531)`
- result: ✔

**Same access point in volumeHandle and mountOptions**
- `volumeHandle: fs::ap1`
- `mountOptions: ['tls', 'accesspoint=ap1']`
- expect: success
- result: ✔

**Two access points on the same file system**
Also makes sure we populate tls and accesspoint in mountOptions.
- `mountOptions: []` (for both)
- PV1: `volumeHandle: fs1::ap1`
- PV2: `volumeHandle: fs1::ap2`
- expect: success, both mounts accessible and distinct
- result: ✔

**Subpaths with access points**
- `mountOptions: []` (for all)
- PV1: `volumeHandle: fs1::ap1` (root -- should be able to see /foo and bar)
- PV2: `volumeHandle: fs1:/foo/bar:ap1` (absolute path)
- PV3: `volumeHandle: fs1:foo/bar:ap1` (relative path)
- expect: success
- actual: success (had to create `$absolutemountpoint/foo/bar` in the fs first, as expected)
- result: ✔

Signed-off-by: Eric Fried <[email protected]>
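For readers who want to see what the "two access points on the same file system" scenario looks like in practice, here is a minimal sketch of two PersistentVolumes that share one file system but use different access points via the new three-field `volumeHandle`. The file-system ID, access-point IDs, and the `efs-sc` StorageClass name are placeholders, not values from this PR.

```yaml
# Sketch only: fs-/fsap- IDs and the storageClassName are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-ap1
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    # {fsid}::{apid} -- the middle (subpath) field is left empty
    volumeHandle: fs-0123abcd::fsap-0123456789abcdef0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-ap2
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    # Same file system, different access point
    volumeHandle: fs-0123abcd::fsap-0fedcba9876543210
```

Per the commit message above, the driver populates `tls` and `accesspoint` in the mount options itself, so neither PV needs an explicit `mountOptions` entry.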
/lgtm
Let's wait until release time. If you want to write the release note/action required, we can mimic the k/k convention and put it in the PR body so that whoever compiles the changelog will see and use it.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: 2uasimojo, wongma7
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/test pull-aws-efs-csi-driver-e2e
Since the [fix](kubernetes-sigs/aws-efs-csi-driver#185) for [issue #167](kubernetes-sigs/aws-efs-csi-driver#167) merged, the AWS EFS CSI driver overwrote the [`latest` image tag](sha256-962619a5deb34e1c4257f2120dd941ab026fc96adde003e27f70b65023af5a07?context=explore) to include it.

For starters, this means we can update this operator to use the [new method of specifying access points](https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/0ae998c5a95fe6dbee7f43c182997e64872695e6/examples/kubernetes/access_points#edit-persistent-volume-spec) via a colon-delimited `volumeHandle`, as opposed to in `mountOptions`.

However, the same update to `latest` also brought in a [commit](kubernetes-sigs/aws-efs-csi-driver@b3baff8) that requires an additional mount in the `efs-plugin` container of the DaemonSet, so we need to update our YAML for that resource at the same time or everything is broken (this might be upstream [issue #192](kubernetes-sigs/aws-efs-csi-driver#192)).

This update to the DaemonSet YAML also syncs with [upstream](https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/0ae998c5a95fe6dbee7f43c182997e64872695e6/deploy/kubernetes/base/node.yaml) by bumping the image versions for the other two containers (csi-node-driver-registrar: v1.1.0 => v1.3.0; livenessprobe: v1.1.0 => v2.0.0).
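To illustrate the switch the operator is making, here is a hedged sketch contrasting the deprecated `mountOptions` approach with the new colon-delimited `volumeHandle`. These are fragments of a PersistentVolume `spec`, and all IDs are placeholders rather than values taken from either PR.

```yaml
# Sketch only: fragments of a PersistentVolume spec; IDs are placeholders.
# Old (deprecated): access point supplied as a mount option.
mountOptions:
  - tls
  - accesspoint=fsap-0123456789abcdef0
csi:
  driver: efs.csi.aws.com
  volumeHandle: fs-0123abcd
---
# New: access point encoded in the volumeHandle as {fsid}:{subpath}:{apid}.
csi:
  driver: efs.csi.aws.com
  volumeHandle: fs-0123abcd::fsap-0123456789abcdef0
```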
**What is this PR about? / Why do we need it?**

Expands the supported `volumeHandle` formats to include a three-field version: `{fsid}:{subpath}:{apid}`. This addresses the limitation originally described in #100 whereby k8s relies solely on the `volumeHandle` to uniquely distinguish one PV from another.

As part of this fix, specifying `accesspoint=fsap-...` in `mountOptions` is deprecated.

For more details, see the related issue (#167).

**What testing is done?**

The following scenarios were tested in a live environment:

**Conflicting access point in volumeHandle and mountOptions**
- `volumeHandle: fs::ap1`
- `mountOptions: ['tls', 'accesspoint=ap2']`
- expect: fail
- actual: fail with `Warning FailedMount 1s (x4 over 4s) kubelet, ip-10-0-137-122.ec2.internal MountVolume.SetUp failed for volume "pv-aptest-1" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = InvalidArgument desc = Found conflicting access point IDs in mountOptions (fsap-04d3307beebd04739) and volumeHandle (fsap-057e9a9b209ec9531)`
- result: ✔

**Same access point in volumeHandle and mountOptions**
- `volumeHandle: fs::ap1`
- `mountOptions: ['tls', 'accesspoint=ap1']`
- expect: success
- result: ✔

**Two access points on the same file system**
(This is the main case for this fix.) Also makes sure we populate tls and accesspoint in mountOptions.
- `mountOptions: []` (for both)
- PV1: `volumeHandle: fs1::ap1`
- PV2: `volumeHandle: fs1::ap2`
- expect: success, both mounts accessible and distinct
- result: ✔

**Subpaths with access points**
- `mountOptions: []` (for all)
- PV1: `volumeHandle: fs1::ap1` (root -- should be able to see /foo and bar)
- PV2: `volumeHandle: fs1:/foo/bar:ap1` (absolute path)
- PV3: `volumeHandle: fs1:foo/bar:ap1` (relative path)
- expect: success
- actual: success (had to create `$absolutemountpoint/foo/bar` in the fs first, as expected)
- result: ✔

**Is this a bug fix or adding new feature?**

Fixes: #167

Signed-off-by: Eric Fried <[email protected]>
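To make the subpath scenario from the testing section above concrete, here is a minimal sketch of a PV whose `volumeHandle` combines a subpath with an access point. As in the earlier sketches, the IDs and the `efs-sc` StorageClass name are placeholders, and (as noted above) the subpath directory must already exist on the file system before the mount will succeed.

```yaml
# Sketch only: IDs and storageClassName are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-subpath
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    # {fsid}:{subpath}:{apid} -- /foo/bar must already exist on the file system
    volumeHandle: fs-0123abcd:/foo/bar:fsap-0123456789abcdef0
```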