node_pool.# count is incorrect "2" => "1" (forces new resource) #3107
Hey @baboune! The cause of this is that Terraform isn't able to tell you've defined a separate "fine-grained" node pool using the If possible, our recommendation is to use exclusively fine-grained node pools such as in this example, removing the default pool with I believe it's possible to use the default node pool as a fine-grained node pool by importing it and omitting any
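A minimal sketch of the layout being recommended here, using only fine-grained pools and deleting the auto-created default pool. All names and the zone are hypothetical, and the syntax targets the Terraform 0.11 era mentioned in this issue (hence the `"${...}"` interpolation and `zone` argument):

```hcl
# Sketch, assuming the recommendation above: no node_config or node_pool
# blocks on the cluster itself; all pools are separate resources.
resource "google_container_cluster" "primary" {
  name = "example-cluster" # hypothetical
  zone = "us-central1-a"   # hypothetical

  # Delete the auto-created default pool right after cluster creation.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "primary_nodes" {
  name       = "example-pool" # hypothetical
  zone       = "us-central1-a"
  cluster    = "${google_container_cluster.primary.name}"
  node_count = 2

  node_config {
    machine_type = "n1-standard-1"
  }
}
```

With this layout the cluster never tracks node-pool state inline, so subsequent plans stay empty instead of flip-flopping on `node_pool.#`.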
@rileykarson It is confusing. The example https://www.terraform.io/docs/providers/google/r/container_node_pool.html#example-usage-2-node-pools-1-separately-managed-the-default-node-pool is similar to what I did, and that one works. There should be more info in the doc about how this node_config causes problems.
Ah yep - the difference is just that if I'll reopen this to add specific warnings to the top of one/both resources about using that subfield and the fine-grained resource in tandem, similar to what we do for IAM ones.
OK, so when I add: Then terraform wants to update the network:
Over and over again. Once applied successfully, if I terraform plan again, no changes should be needed.
Ah yeah, we normally suppress that diff but using If you set
OK, will try. Thanks.
Looks like there's a warning already; closing this out. https://www.terraform.io/docs/providers/google/r/container_cluster.html#node_pool
Terraform Version
Terraform v0.11.10
Affected Resource(s)
google_container_cluster
google_container_node_pool
Terraform Configuration Files
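The original configuration was not captured in this page. Below is a hedged reconstruction of the kind of config the thread describes: a cluster with an inline node_config plus a separately managed node pool, which is what makes Terraform perpetually report `node_pool.# "2" => "1" (forces new resource)`. All names and the zone are hypothetical:

```hcl
# Hypothetical reproduction, assuming the setup discussed in the comments:
# node_config on the cluster (which implies an inline default pool) mixed
# with a fine-grained google_container_node_pool resource.
resource "google_container_cluster" "cluster" {
  name               = "repro-cluster"  # hypothetical
  zone               = "europe-west1-b" # hypothetical
  initial_node_count = 1

  node_config {
    machine_type = "n1-standard-1"
  }
}

resource "google_container_node_pool" "extra" {
  name       = "extra-pool" # hypothetical
  zone       = "europe-west1-b"
  cluster    = "${google_container_cluster.cluster.name}"
  node_count = 2
}
```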
Debug Output
output for plan:
https://gist.github.com/baboune/be8d446b1656fc98f598ccadc41a6c9d
Panic Output
No panic
Expected Behavior
Once applied successfully, if I terraform plan again, no changes should be needed.
Actual Behavior
terraform apply should create a stable deployment. Instead, after terraform apply the resources are created, but on any subsequent apply/plan Terraform wants to destroy and re-create the node pool.
Steps to Reproduce
1. terraform apply
2. terraform apply or terraform plan
Important Factoids
References
Seems similar to #2115, i.e. the problem might be caused by the node_config section. However, in this case the default node pool is used, so it is not clear how to handle the node_config section in the google_container_cluster resource.