Use existing snapshots as data source across projects #224

Closed · stiller-leser opened this issue Mar 5, 2019 · 12 comments

Comments

@stiller-leser
Contributor

Hi,

from what I see in the documentation and from the code it doesn't seem to be possible to do either of the following:

  • Use snapshots that already exist in my GCP project (created in a manner other than via a K8s snapshot, e.g. manually from a VM)
  • Access snapshots from a different GCP project (as is possible via the API)

I am aware that both of these might be out of scope for the pure CSI standard, but I am curious what the maintainers' thoughts are regarding the usefulness of both of these scenarios.

Please feel free to correct me, should I have missed something.

Thanks for the great work,
stiller-leser

@davidz627
Contributor

/cc @jingxu97

@jingxu97
Contributor

jingxu97 commented Mar 8, 2019

Thanks for asking. Please see my comments below and let me know if you have any questions.

> Use snapshots that already exist in my GCP project (created in a manner other than via a K8s snapshot, e.g. manually from a VM)

We do support this use case and call it static binding. Basically, an admin manually creates a VolumeSnapshotContent that points to a VolumeSnapshot, which in turn points back to the VolumeSnapshotContent. There is a recently reported bug in this area that we are working to fix: kubernetes-csi/external-snapshotter#98
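As a rough sketch of what static binding could look like with the alpha (v1alpha1) snapshot CRDs and the GCE PD CSI driver — the project, snapshot, and object names below are placeholders, and exact field names may differ between snapshotter versions:

```yaml
# Content object pointing at a pre-existing GCP snapshot (illustrative names/paths)
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotContent
metadata:
  name: preexisting-snapshot-content
spec:
  csiVolumeSnapshotSource:
    driver: pd.csi.storage.gke.io
    snapshotHandle: projects/my-project/global/snapshots/my-existing-snapshot
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: preexisting-snapshot
    namespace: default
---
# VolumeSnapshot that points back at the content object above
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: preexisting-snapshot
  namespace: default
spec:
  snapshotContentName: preexisting-snapshot-content
```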

> Access snapshots from a different GCP project (as is possible via the API)

We haven't tried this, but in theory it could work: in the VolumeSnapshotContent you give the fully qualified URL of the snapshot from the other project, and then use that snapshot to create a volume.
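Untested, but as a sketch, the content object might reference the other project's snapshot directly via its fully qualified path (placeholder names below; the driver's service account would presumably also need read access to snapshots in that project):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotContent
metadata:
  name: cross-project-snapshot-content
spec:
  csiVolumeSnapshotSource:
    driver: pd.csi.storage.gke.io
    # Fully qualified path to a snapshot that lives in a different GCP project
    snapshotHandle: projects/other-project/global/snapshots/shared-snapshot
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: cross-project-snapshot
    namespace: default
```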


@stiller-leser
Contributor Author

Hi @jingxu97,

thank you for your reply. Once I managed to install the driver into my cluster (having to hardcode much of the scripts, since I am on macOS), I was able to create a snapshot and also remove it again.

Here's where my next question comes into play: when the snapshot resource is deleted from the cluster, the snapshot itself is also deleted on the provider (GCP) side (as specified in https://kubernetes.io/docs/concepts/storage/volume-snapshots/#delete). Since the SA has the Compute Storage Admin role, I was wondering how much would break if I actually removed the compute.snapshots.delete permission from the SA? In my case I wouldn't want the snapshot to be deleted when its API object is removed from the cluster. Since this is a fairly brutal approach, I was wondering if you can think of a more elegant way?

Thanks again - this is awesome!

@jingxu97
Contributor

Sorry for the late reply. When you mention "snapshot resource is deleted from the cluster", do you mean deleting the VolumeSnapshot API object from the API server, or directly deleting the physical snapshot file?

@stiller-leser
Contributor Author

Hi, no worries. Thank you very much for replying at all. I mean deleting it on the API server, which results in it being deleted physically.

I basically ran through the tutorial from the README (in terms of API objects): created a snapshot, created a disk from the snapshot, deleted the disk, and then deleted the snapshot. This also deleted the physical snapshot.

I guess what I am looking for is basically the equivalent of a PV's ReclaimPolicy for snapshots, so that I can specify that when the snapshot API object is deleted, the physical snapshot is not.

Happy to clarify if needed!

@jingxu97
Contributor

We do have a deletionPolicy (very similar to ReclaimPolicy). You can choose to "Retain" the physical snapshot, so even if you delete the VolumeSnapshot object, the VolumeSnapshotContent object and its associated physical snapshot will be retained.
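As a rough sketch of how this could look with the v1alpha1 snapshot CRDs (object and snapshot names are placeholders; field placement may differ across snapshotter versions):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotContent
metadata:
  name: my-snapshot-content
spec:
  # Retain keeps the physical GCP snapshot even after the API objects are deleted
  deletionPolicy: Retain
  csiVolumeSnapshotSource:
    driver: pd.csi.storage.gke.io
    snapshotHandle: projects/my-project/global/snapshots/my-snapshot
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: my-snapshot
    namespace: default
```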

@stiller-leser
Contributor Author

Very cool. Just to spare me some research time, could you quickly point me to the correct configuration (if you have it at hand)? Feel free to close the issue afterwards. Thank you again.

@stiller-leser
Contributor Author

Having looked at the code, I can't find anything policy-related. Any hints would be appreciated :)

@jingxu97
Contributor

@stiller-leser sorry for missing your question.
You can find information on the policy here: https://kubernetes.io/blog/2019/01/17/update-on-volume-snapshot-alpha-for-kubernetes/
Please let me know if you have any questions.

@davidz627
Contributor

@jingxu97 is there an open issue for updating the documentation?

Retention policy is not described here: https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/

@jingxu97
Contributor

jingxu97 commented May 23, 2019 via email

@stiller-leser
Contributor Author

Hey @jingxu97,

no worries - this was in hibernation on my side for a long time as well; it came up again when I saw @davidz627's talk on the KubeCon agenda.

I really appreciate your help here!

Will be closing the issue now!

Looking forward to playing with this,
stiller-leser

FZhg added a commit to FZhg/gcp-compute-persistent-disk-csi-driver that referenced this issue Sep 20, 2023