prevent_destroy should let you succeed #3874
Hi @ketzacoatl - thanks for opening this! Based on your description I'm certainly sympathetic to the idea that Terraform should not terminate with an error code if the user's intent is to prevent resources being deleted, but I'm inclined to say that the output should clearly indicate which resources were left in place because of prevent_destroy.
Definitely sympathetic to this use case too. I think a concern is that if Terraform leaves out one part of the update, that may have downstream impact in the dependency graph, which can be fine if you're intentionally doing it but would be confusing if Terraform just did it "by default". Do you think having the ability to exclude resources from plan, as proposed in #3366, would address your use case? I'm imagining a workflow where the run fails on the protected resources and you then re-run with those resources explicitly excluded on the command line.
I'm attracted to this solution because it makes the behavior explicit while still allowing you to proceed, as you said. It requires you to still do a little more work to understand what is failing and thus what you need to exclude, but once you're sure about it you only need to modify your command line rather than having to potentially rebuild a chunk of your config.
I'd agree, @jen20 - I am primarily looking for the ability to tell TF that it does not need to quit/error out hard. Same goes for @apparentlymart's comment on default behavior - I agree, this is a specific use case and not meant as a default.
I had to re-read that a few times to make enough sense out of how that works (the doc addition helps: "Prefixing the resource with ! will exclude the resource."). In my nuanced situation, the exclusion mechanism from #3366 is what I'd reach for.
Would it be possible to get an additional flag when calling terraform plan -destroy, something like a -keep-prevent-destroy option? I have the same problem: I have a few EIPs associated with some instances. I want to be able to destroy everything but keep the EIPs, for obvious reasons like whitelisting, but I get the same kind of problem. I understand what destroy is all about, but in some cases it would be nice to get a warning saying this and that didn't get destroyed because of lifecycle.prevent_destroy = true. @ketzacoatl, exclude would be nice!
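For illustration, roughly the kind of configuration being described here, as a minimal sketch with a hypothetical resource name (not taken from the original comment):

```hcl
resource "aws_eip" "whitelisted" {
  # This address is whitelisted with third parties, so it must survive
  # a destroy of the rest of the stack.
  lifecycle {
    prevent_destroy = true
  }
}
```

With the current behaviour, terraform plan -destroy aborts with an error as soon as it sees this resource would be destroyed, instead of destroying everything else and reporting the EIP as kept.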
+1, I need something along these lines as well. Would #3366 allow you to skip destroying a resource, but modify it instead? My specific use case is that I have a staging RDS instance I want to persist (never be destroyed), but I want the rest of my staging infrastructure to disappear. As a side effect of the staging environment disappearing, I need to modify the security groups on the RDS instance, since they are being deleted. So if I had prevent_destroy set on the RDS instance, then upon running "terraform destroy -force" I'd want to see everything destroyed except the database, with its security group associations updated accordingly.
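A sketch of that situation, with hypothetical names and placeholder values: a database that must survive, wired to a security group that belongs to the disposable part of the environment.

```hcl
resource "aws_security_group" "staging" {
  name_prefix = "staging-"
}

resource "aws_db_instance" "staging" {
  identifier             = "staging-db"
  engine                 = "postgres"
  instance_class         = "db.t3.micro"
  allocated_storage      = 20
  username               = "app"
  password               = "change-me" # placeholder only
  skip_final_snapshot    = true
  vpc_security_group_ids = [aws_security_group.staging.id]

  lifecycle {
    prevent_destroy = true
  }
}
```

Destroying the security group requires Terraform to update vpc_security_group_ids on the protected instance, which is exactly the "skip the destroy but still allow the modify" behaviour being asked for.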
Hey folks, good discussion here. It does sound like there are enough real-world use cases to warrant a feature here. What about maintaining the current semantics of prevent_destroy by default, but adding an explicit way for the operator to say "skip the protected resources and let the rest of the run proceed" instead of erroring out? Would something like this address all the needs expressed in this thread? If so, we can spec out the feature more formally and get it added.
@phinze, that sounds good, yes. I'd hope that in most cases TF would be able to let the apply proceed and let the user flag some resources as being left alone/not destroyed. Your proposal seems to provide the level of control needed while retaining sensible semantics.
👍 to what @ketzacoatl said
👍 to what @phinze proposed.
I keep running into this. I would like the ability for TF to only create a resource if it does not exist, and to never delete it. I would like to keep some EBS or RDS data around and keep the rest of my stack ephemeral (letting TF apply/destroy at will). Currently I've been doing this with different projects/directories, but it would be nice to keep the entire stack together as one piece. I too thought prevent_destroy would not produce an error, and I have been hacking my way around it quite a bit :(
👍 to what @phinze said. During apply I want the resource to be created, but ignored during destroy. Currently I have to explicitly define the rest of the targets just to ignore one S3 resource.
+1 - just ran into this. Another example is key pairs. I want to create (import) them if they don't exist, but on destroy I don't want to delete the key pair, as other instances may be using the shared key pair. Is there a way around this for now?
Yes, split your terraform project into multiple parts - for example, one project for the long-lived shared resources (like key pairs) and another for the ephemeral stack that you apply and destroy at will.
This is a must for me to be able to work with the …
Changing the flags here to enhancement, but still agree this is a good idea.
Is this being looked at? I can't imagine there are many use cases that would NOT benefit from it.
This is absolutely one of the banes of my life too. I've got dozens of resources I want to preserve from accidental overwrites - such as DynamoDB tables. A pair of flags, one covering apply and one covering destroy, would do it; the flags should be something explicit, so the operator has to opt in. This would allow us to have the desired behaviour and only require operator intervention in the case where the resource still exists but is not mutable into the target state during a terraform apply (i.e. you've still got the same table, but the keys are now incompatible, or some other potentially destructive update).
Here's the use case we'd like this for: we have a module that we can use either for production (where some resources like Elastic IPs should not be accidentally deleted) or for running integration tests (where all resources should be destroyed afterwards). Because of #10730/#3116, we can't make these resources conditionally prevent_destroy, which would be the ideal solution. As a workaround, we'd be happy to have our integration test scripts run …
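Roughly the shape of that limitation, as a sketch with hypothetical names:

```hcl
variable "is_production" {
  type    = bool
  default = false
}

resource "aws_eip" "ingress" {
  lifecycle {
    # What we'd like to write for a module shared between production and
    # integration tests. However, lifecycle arguments must be literal
    # values, so something like `prevent_destroy = var.is_production`
    # is rejected (the limitation tracked in #3116/#10730):
    prevent_destroy = true
  }
}
```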
This would definitely be a useful feature. I've been using Terraform for less than a month and ran into needing this in order to protect a DNS managed zone. Everything else in my infrastructure is transient, but a new DNS zone comes with computed (potentially new) name servers on what is a delegated zone, and recreating it would introduce an unnecessary manual step to update the parent DNS managed zone - not to mention the DNS propagation delay making any automated testing have much higher latency. Reading above, it looks like the workaround is to split my project into different parts. I'm not sure I can pass a resource from one project into another ... but I guess I can use variables in the worst-case scenario.
I'm hitting a slightly different use case with Vault. I'm not 100% sure whether this belongs here; it might be best handled in the Vault resource itself. Example:

```hcl
# Enables the GitHub auth backend by writing to sys/auth/github.
resource "vault_generic_secret" "github_auth_enable" {
  path      = "sys/auth/github"
  data_json = "...some json..."
}

# Configures the backend; only meaningful while the mount above exists.
resource "vault_generic_secret" "github_auth_config" {
  path       = "auth/github/config"
  data_json  = "...some json..."
  depends_on = ["vault_generic_secret.github_auth_enable"]
}
```

The problem is that the 'auth/github/config' path does not even support the delete operation: the entire 'auth/github' prefix gets wiped as soon as 'sys/auth/github' is deleted. Not only does this result in an error, but also a broken state: a subsequent apply would assume that key still exists.
So my instance of this issue involves things like rapid development with docker_image / docker_container usage. I set

```shell
TARGETS=$(for I in $(terraform state list | grep -v docker_image); do echo " -target $I"; done); echo terraform destroy $TARGETS
```

What I would like would be two methods: one that allows the run to still succeed because the plan says "hey, don't destroy this", and one for when I am bold and say to destroy it anyway.
@evbo you are right, that's a perfectly valid case. I saw a bunch of people mentioning doing …
Is it possible to define a new resource alongside a removed block, so the resource is created on apply and immediately removed from the state (but not destroyed)?
Even if it's possible, the issue is that Terraform will retry to recreate it indefinitely until you remove it from the tf configuration :/
@anthosz How so? If I understand the docs, the removed block is meant to be added alongside the definition of a resource, so as long as the order of applying them is done correctly (create the resource first, then remove it) it should make no difference whether they are both applied from the same plan or different ones. Or did I miss something? If Terraform attempts to create the resource on the next apply because it does not exist in its state even though a removed block is present, then that would indeed be a problem.
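For reference, a config-driven removed block looks roughly like this (hypothetical resource address; a sketch, not the exact config being discussed):

```hcl
# Forget the bucket: drop it from state on the next apply without destroying it.
removed {
  from = aws_s3_bucket.shared # hypothetical address; no index allowed here

  lifecycle {
    destroy = false
  }
}
```

Note that the matching resource block has to be deleted from the configuration at the same time, which is exactly the constraint confirmed further down in the thread.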
At least introduce a new setting for this.
Here's a use case this feature would be great for …
This issue is blocking me in trying to use bind9 for DNS. I want Terraform to manage the NS records for the zone, but when I come to delete the zone, bind9 can't delete the last NS record as that would leave an invalid config. I need Terraform to not delete this record, so that the zone itself can be removed. Not sure how to do this at present, but I've already wasted hours trying to find a workaround.
I've tried the removed block that came out in January, but it's pretty weak in this context. The NS record is defined in a module. I can't use the removed block because it errors, saying the NS record is still defined (which it should be, for other configs using the module).
The solution we came up with is to build our Terraform setup in layers, each with its own state. The outer layer stays when the inner is destroyed. A few examples:
- With Snowflake we have multiple environments in the same account. Account-level objects are created in the Account layer (for example human users or spending monitors), and the env layers (dev/test/prod) take care of the env-specific objects.
- For the Data Product repos, handle the repo creation/deletion at the external layer and other operations in the specific Data Product layer.
Outputs and loops are your friends in this setup.
Happy to add more details if anyone is interested.
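A minimal sketch of that layering, with hypothetical backend settings and output names: the env layer reads the account layer's outputs instead of owning those resources, so destroying the env layer never touches them.

```hcl
data "terraform_remote_state" "account" {
  backend = "s3"
  config = {
    bucket = "example-tf-state"
    key    = "account/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = data.terraform_remote_state.account.outputs.base_ami_id
  instance_type = "t3.micro"
  subnet_id     = data.terraform_remote_state.account.outputs.private_subnet_id
}
```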
All the solutions in this thread so far violate one of my long-held principles of development: “Never let the workaround become the work.”
Just as an FYI, I tried this out, and one cannot add a config-driven removed block without first removing the Terraform resource it refers to from the config. A bit unfortunate :/
Hello, I need to use the same volume for several workspaces. Can I use this same approach? My problem is that when I delete a workspace I need to keep the volume.
I concur with almost everything that was said above. In our case, we would like to not delete a FileSystem that was created by Terraform, so that we make sure we don't erase any data. I would love for the destroy to let the rest of the resources be destroyed and only leave alone the ones that were tagged with prevent_destroy.
Any update about this feature?
This was raised in 2015 with 100+ comments and multiple concrete use cases provided, but still no resolution. Do we give up hope? |
Arghh... running into exactly the same issue here, but with the usage of KMS keys in AWS: I might want to have them created but to skip destroys. Doing a "manual" step to remove the resource from the state (even if automated in a pipeline) is quite a dirty thing to do. Is there still any plan to add this?
We run into this every now and then, having to do changes in the Terraform code to accommodate this.
Quite interesting that they have time to mark all the comments as off-topic or duplicate but they don't appear to bother replying to this issue or doing anything about it. If we wait another year, we can celebrate the 10-year anniversary. |
Yeah, removing it from state, or writing code in Python and executing it with a null_resource.
Is there any update on this feature of having skip_destroy with no errors? |
No update at this time. It is on our list of issues that gets revisited every planning cycle (e.g., the top 25 most upvoted issues). |
Pretty disappointed by the fact that this has sat for so long. I could see an easy way out that just allows overriding this on the destroy. Our use case is that KMS keys are managed in the same state as the backups for RDS: the destroy takes a final snapshot, but 30 days later we have a snapshot that isn't accessible because the KMS key has been deleted. Now I could go add a feature to the AWS Terraform provider so that it just doesn't make the deletion API request for KMS and the key deletion never gets scheduled, but I doubt it would be accepted.
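A hypothetical sketch of that coupling (placeholder names and values): the key used to encrypt the database also protects its final snapshot, so it needs to outlive a destroy of the stack.

```hcl
resource "aws_kms_key" "rds" {
  description             = "CMK for RDS storage and final snapshots"
  deletion_window_in_days = 30

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_db_instance" "app" {
  identifier                = "app-db"
  engine                    = "postgres"
  instance_class            = "db.t3.micro"
  allocated_storage         = 20
  username                  = "app"
  password                  = "change-me" # placeholder only
  storage_encrypted         = true
  kms_key_id                = aws_kms_key.rds.arn
  skip_final_snapshot       = false
  final_snapshot_identifier = "app-db-final"
}
```

With prevent_destroy on the key, terraform destroy currently just errors out; without it, the key gets scheduled for deletion and the final snapshot becomes unreadable once the deletion window passes.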
This issue STILL being open is not surprising to me and indicative of why I've never been able to get any of my teams to really embrace Terraform. It's oddities like this that just make the tool very difficult to use and frankly quite often in the way of getting any work done. Terraform needs to stop being so opinionated and give users simple CLI configurability to make decisions. |
I mostly disagree that Terraform is too opinionated; for me, a lot of the value of Terraform is in its opinionated nature. But I do think the lack of this feature is forcing developers to split their code into a module structure, which is in some cases an anti-pattern. I suspect the reason for the lack of this feature is that some internals inside Terraform might make this harder than it seems, but I don't know, and I would really appreciate a short rundown if someone has one. Converting a "delete" into a "forget" seems like a simple concept, but it might cause some complex problems in the Terraform vs. Terraform provider relationship, though I cannot see how.
Call me crazy, but I'm willing to call the current implementation of prevent_destroy a bug. Here is why: the current implementation of this flag prevents you from using it for half of its use case. The net result is more frustration when trying to get Terraform to succeed instead of destroying your resources; prevent_destroy adds to the frustration more than alleviating it. prevent_destroy is for these two primary use cases, right? Stopping an accidental destroy from wiping out a critical resource, and letting an apply or destroy proceed while leaving the protected resources in place. I see no reason why TF must return an error when using prevent_destroy for the second use case, and in doing so, TF is completely ignoring my utterly clear directive to let me get work done. As a user, I end up feeling as though TF is wasting my time, because I am focused on simple end goals which I am unable to attain while I spin my wheels begging TF to create more resources without destroying what exists. You might say the user should update their plan to not be in conflict, and I would agree that is what you want to do in most cases... but, honestly, that is not always the right solution for the situation at hand when using a tool like TF in the real world. I believe in empowering users, and the current implementation of this flag prevents sensible use of the tool.