We want to update the Databricks runtime version to "9.0.x-scala2.12", which requires replacing the databricks_instance_pool. Replacing the databricks_instance_pool causes an error during terraform apply.
Configuration
resource "databricks_instance_pool" "pool" {
  instance_pool_name                    = "pool"
  min_idle_instances                    = 0
  node_type_id                          = var.databricks_node_type
  preloaded_spark_versions              = [var.databricks_spark_version]
  idle_instance_autotermination_minutes = 15
  enable_elastic_disk                   = true

  azure_attributes {
    availability       = "ON_DEMAND_AZURE"
    spot_bid_max_price = 0
  }
}
resource "databricks_cluster" "terraform" {
  cluster_name            = "terraform-only"
  spark_version           = var.databricks_spark_version
  instance_pool_id        = databricks_instance_pool.pool.id
  autotermination_minutes = 15
  is_pinned               = true
  num_workers             = 0

  spark_conf = {
    "spark.databricks.cluster.profile" : "singleNode",
    "spark.master" : "local[*]"
  }

  custom_tags = {
    ResourceClass = "SingleNode"
  }
}
variable "databricks_spark_version" {
  default     = "9.0.x-scala2.12"
  description = "Spark version to be used inside Databricks"
  type        = string
}
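For completeness, the configuration above also references var.databricks_node_type, whose declaration is not shown in the issue. A minimal sketch of what it might look like; the default value here is an assumption, not taken from the issue:

```hcl
# Hypothetical declaration of the node type variable referenced by the
# instance pool above. The default ("Standard_DS3_v2") is illustrative only.
variable "databricks_node_type" {
  default     = "Standard_DS3_v2"
  description = "Azure VM node type used by the instance pool"
  type        = string
}
```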
Expected Behavior
databricks_cluster.terraform should have been updated in-place, so that it uses the new pool.
terraform plan:
  # databricks_cluster.terraform will be updated in-place
  ~ resource "databricks_cluster" "terraform" {
        id               = "123456"
      ~ instance_pool_id = "old_id" -> (known after apply)
      ~ spark_version    = "8.1.x-scala2.12" -> "9.0.x-scala2.12"
        # (15 unchanged attributes hidden)
    }

  # databricks_instance_pool.pool must be replaced
-/+ resource "databricks_instance_pool" "pool" {
      ~ id                       = "old_id" -> (known after apply)
      ~ instance_pool_id         = "old_id" -> (known after apply)
      - max_capacity             = 0 -> null
      ~ preloaded_spark_versions = [ # forces replacement
          - "8.1.x-scala2.12",
          + "9.0.x-scala2.12",
        ]
        # (5 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }
Actual Behavior
│ Error: Can't find an instance pool with id: old_id.
│
│ with databricks_cluster.terraform,
│ on databricks_clusters.tf line 1, in resource "databricks_cluster" "terraform":
│ 1: resource "databricks_cluster" "terraform" {
Steps to Reproduce
1. Update the Databricks runtime version of the databricks_instance_pool so that the instance pool must be replaced.
2. Run terraform apply.
Added corner case to fix issue databricks#824 where `driver_instance_pool_id` was not explicitly specified and the old driver instance pool was sent in the cluster update request (databricks#850)
Terraform and provider versions
Terraform v1.0.4
provider registry.terraform.io/databrickslabs/databricks v0.3.7
Fix
When I add the driver_instance_pool_id argument to the databricks_cluster resource, it resolves the issue. This is not expected behaviour.
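A sketch of that workaround, assuming the driver should come from the same pool as the workers. Explicitly referencing the pool for the driver makes Terraform send the replaced pool's new id in the cluster update request instead of the stale one:

```hcl
resource "databricks_cluster" "terraform" {
  cluster_name     = "terraform-only"
  spark_version    = var.databricks_spark_version
  instance_pool_id = databricks_instance_pool.pool.id

  # Workaround: set the driver pool explicitly so the dependency on the
  # (replaced) pool is tracked and the new pool id is used for the driver too.
  driver_instance_pool_id = databricks_instance_pool.pool.id

  autotermination_minutes = 15
  is_pinned               = true
  num_workers             = 0
}
```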