First implementation of module attribution. #22583
Conversation
(force-pushed f78d386 to 2c5e8ac)
This may also need some sort of validation, so that we can surface a schema/config mismatch error before we reach apply time. Looking into how to do that now. I imagine it has to be an EvalNode, as it should be part of the graph walk, but I'm not sure which eval to put the validation into. I considered the … I think I'm going to investigate how to give …
Thank you for raising the PR early! 👍 I'm just going to subscribe @hashicorp/terraform-enablement to this thread because it touches the SDK, which was extracted but not made public yet. Depending on timelines this might result in two PRs - one for SDK-related paths (mostly …)
Hey @radeksimko! Yeah, my plan was to get the core side of this working, and I was hoping that would keep me on one side of the RPC barrier; then I could do the SDK work to expose it after the ongoing Enablement work is ready for that. Unfortunately, the E2E tests and making the whole thing compile after generating the protobuf files are going to require changes in the …

I was hoping to chat more about this next week and figure out an approach that worked for everyone, whether that looks like minor changes to the …

This Draft is mainly so interested parties can track what's going on and we have a place to store discussions (like this one!); it's not ready for review or for merge yet. RFCs for both the core side and the SDK side are incoming once I'm confident that my approach is working the way I hoped it would and is a viable path forward. I'll tag Enablement for approval on both.
(force-pushed 76dd363 to ee16912)
The RFC for this has been completed and approved, and now we're code- and documentation-complete, along with tests, I believe. I think this is now ready for review for inclusion.

In the documentation I assumed in one spot that this would be included in Terraform 0.12.14. If that is not the case, we should update the documentation with the first version of Terraform the functionality appears in, so module authors can properly restrict their modules to versions of Terraform that support it.
Requesting Enablement to take a look as well, since this involves the SDK |
It was not supposed to be included; I'm not sure why it was, but I've rectified the error.
This looks like a good start. I made a couple notes about not handling the data in some of the rpc methods.
The current implementation doesn't work for modules, but I think that's due to the metas following the provider config, rather than the resource config.
If you want to make an end-to-end test, the easiest place will be in the builtin/providers/test provider. You could add a schema and legacy acceptance tests in there, which will allow you to check that the values make it all the way through the protocol and shims.
terraform/transform_provider.go (Outdated)
// attach the provider_meta info
if gnapmc, ok := v.(GraphNodeAttachProviderMetaConfigs); ok {
	log.Printf("[TRACE] ProviderTransformer: attaching provider meta config for %s to %s", p, dag.VertexName(v))
	if t.Config == nil {
		log.Printf("[TRACE] ProviderTransformer: no config set on the transformer for %s", dag.VertexName(v))
		continue
	}
	if t.Config.Module == nil {
		log.Printf("[TRACE] ProviderTransformer: no module in config for %s", dag.VertexName(v))
		continue
	}
	if t.Config.Module.ProviderMetas == nil {
		log.Printf("[TRACE] ProviderTransformer: no provider metas defined for %s", dag.VertexName(v))
		continue
	}
	if meta, ok := t.Config.Module.ProviderMetas[p.ProviderConfig.Type]; ok {
		gnapmc.AttachProviderMetaConfigs(meta)
	}
}
You don't want to attach the metas according to providers, because providers should not be configured in a module. This attaches the provider metas from the module where the provider is configured, which is likely the root module.
So wait, t.Config.Module is the module where the provider is configured, not the module of the node being transformed?
In my conversations with @apparentlymart we decided on attaching provider meta to resources, and that resources should only have the provider meta for the provider they belong to attached. Is... is that not what this is doing?
Providers don't necessarily come from the same module as the resource. They can be inherited or passed into a module explicitly. If the idea is to attach metadata about a module, resources are the only objects that are scoped to a module and communicated to a provider.
What you're doing here will only send module info from where the provider is configured, which should almost never actually be in a module other than root. If you modify the tests here to use modules, you'll see you're getting the wrong metas.
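For reference, the provider_meta block under discussion is declared inside a module's terraform block and is scoped to that one module; a minimal sketch (the provider name "my-provider" and the module_name attribute here are hypothetical, since each provider defines its own provider_meta schema):

```hcl
# Hypothetical child module, e.g. modules/network/main.tf.
# The provider and attribute names are illustrative only; the
# attributes allowed are whatever the named provider's
# provider_meta schema declares.
terraform {
  provider_meta "my-provider" {
    module_name = "network"
  }
}
```

The point of the review comment above is that this block lives with the module's resources, while the provider configuration usually lives in the root module, so attaching metas by provider config picks up the wrong module's block.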
@@ -0,0 +1,7 @@
resource "test_instance" "foo" {
These tests don't load any provider_meta blocks from modules, which I think is where they are intended to be used.
Good point! Let me see if I can include that. I thought I'd simplify the testing by excluding it, but you're right that it's probably best if the tests all just cross module boundaries.
Hi @paddycarver
Firstly sorry for the delay in reviewing this (on behalf of the Enablement team).
I skimmed over the RFC and the implementation. I like it in principle (the way it's designed to work). I don't have enough context (yet) loaded in my brain to say anything useful about the implementation details in the terraform package. Core/James has a lot more context to comment on that part.
The rest of your changes (in helper/* packages and other SDK-related changes) look reasonable to me, and James has already commented on the protocol changes.
I have just two concerns:
- The Enablement team is just beginning some very early exploratory and RFC work involving introducing cty more transparently into the SDK (in such a way that providers themselves begin to import cty). I'd greatly appreciate it if we can leave the decision on how to do this for a bit later and expose a flatmap here instead.
- There are changes which make sense to land only in the SDK (such as helper/plugin & helper/schema) and changes which will also need to take place in the SDK (the proto file + the regenerated .go version of it). The work of cleaner decoupling of core and SDK is still ongoing, but I think we have already committed to not changing "SDK in core" anymore if possible. So if you can raise a relevant PR over in the SDK repo and omit SDK changes from this one, that would be great.
DynamicValue proposed_new_state = 3;
DynamicValue config = 4;
bytes prior_private = 5;
DynamicValue provider_meta = 6;
Nitpick: Inconsistent spacing
DynamicValue planned_state = 3;
DynamicValue config = 4;
bytes planned_private = 5;
DynamicValue provider_meta = 6;
Nitpick: Inconsistent spacing
message Request {
  string type_name = 1;
  DynamicValue config = 2;
  DynamicValue provider_meta = 3;
Nitpick: Inconsistent spacing
@@ -549,3 +552,10 @@ func (d *ResourceData) get(addr []string, source getSource) getResult {
		Schema: schema,
	}
}

func (d *ResourceData) GetProviderMeta(dst interface{}) error {
Is there a strong reason why we can't expose this in the old flatmap format? We will certainly need to expose cty eventually through the SDK, but this is setting a precedent on how to do this. If we can just leave that precedent for a more high-level RFC I would feel much safer/better.
Mostly that it's a lot more work for a feature that doesn't actually have any reason to go through flatmap except for consistency. It's basically throwing away type information and then reconstructing it just for consistency, and it opens this up to a lot more bugs than necessary. I'm not the most opposed to doing it, if the Enablement team would benefit from it, but it was an entire new codepath I had to figure out and learn, and when I looked at it, there wasn't really any reason to learn / implement it, so I didn't.
Understood.
Is there a reason why the interface should be accepting a pointer and writing into it, instead of just returning cty.Value
?
Hey @radeksimko, no worries, everyone's had a busy couple of months. Replied to your other concern inline, but want to address this one:
From our internal conversations, I believe the conclusion we drew was that there wasn't a clean way to separate the core changes from the SDK changes here (I believe it led to compilation errors and/or an inability to write tests). That may have changed since the SDK extraction became more advanced, but I don't actually know that to be the case. What parts of this PR would you like to see dropped?

As for an SDK PR, yes, the intention is for one to come as soon as this PR is merged. I just didn't want to keep two PRs in flight at the same time. Once the core part finalises, I figured I'd PR it into the SDK. I believe that's what the Asana task tracking this internally had as the plan.
enum StringKind {
  PLAIN = 0;
  MARKDOWN = 1;
}
Is there a reason for introducing Markdown in the context of this particular PR?
I answered this below, this was an attempt to minimize protocol version churn, but we could probably have structured the commits / PR to better illustrate the intent.
A pass at implementing module attribution ("provider_meta blocks in the terraform block", whatever, it's for module attribution).

The general idea is that we're adding a new ProviderMeta struct to our configs package, which is then added to Files, Modules, and parsed as part of the terraform block.

The configs.Config type then makes that available to the terraform.ProviderTransformer type, which is already scoped by module. It picks out the ProviderMeta for the provider we need in its Transform method, and checks to see if the node it's working on implements the terraform.GraphNodeAttachProviderMetaConfigs interface, which is a new interface we're adding. It has only one method: AttachProviderMetaConfigs, which is passed a *configs.ProviderMeta.

We implement this on the terraform.NodeAbstractResource type, storing the passed *configs.ProviderMeta in a new property on that struct. That struct is eventually made available to the terraform.NodeApplyableResourceInstance type, and its method evalTreeManagedResources, which creates an EvalApply type. The EvalApply type gets a ProviderMeta type, which is set to the *configs.ProviderMeta obtained from the terraform.NodeAbstractResource available to it.

This means that EvalApply now has the raw config values the user entered for the provider_meta block, but we also need a schema; providers can define their own schema for the provider_meta block, and we need that schema to know how to translate what is in the config into a cty.Value, which is what we eventually want here. Fortunately, EvalApply already has access to the provider's schema; we can augment the GetSchema RPC call to pass back the ProviderMeta schema (added to helper/schema.Provider) in addition to resource, data source, and provider schemas. This makes it available, eventually, to EvalApply. Now that EvalApply has both the config values and the schema, we can parse them into a cty.Value.

Finally, we augment the ApplyResourceChange RPC call with a ProviderMeta field, which can be populated by EvalApply with the parsed cty.Value, which successfully makes the config values available to the provider. This still needs tests written for it to ensure it works, but no current tests are broken on it and it compiles, which is reassuring.
Add the ProviderMeta field to the EvalValidateResource type so we can check that the HCL for the resource's provider's provider_meta block is valid. Add a code note about why we're doing this in the resource block instead of somewhere that makes more sense.
Now if a provider doesn't support provider_meta and somehow we miss that at validation time, the user will get an error with more details and pointing to the specific provider_meta declaration that is a problem.
Actually initialize the ProviderMetas map for the Module when calling configs.NewModules so we don't panic. Check that NodeValidatableResource implements GraphNodeAttachProviderMetaConfigs. When calling terraform.loadProviderSchemas, actually set the ProviderMeta block. Add ProviderMeta to state. For some reason? I don't know why. Fix the ProviderTransformer to use the ProviderConfig.Type instead of String() output for the provider when looking up the ProviderMeta to use.
We weren't passing provider meta information on destroy, now we attach it to nodes for resource destruction, which should let it get sent along as necessary.
Figure out how to test provider_meta, and write our first test, a basic check that the value will be delivered as part of the ApplyResourceChange request when set. Still a lot of test cases that need testing.
We want ProviderMeta information to be available to the provider during all the RPC calls, so extend the protocol to account for that, and fill out all the needed methods.
Set the ProviderMeta value when planning resource changes, and add a test to ensure that it's getting set properly.
Not sure why this got included.
Our tests broke because they need to use the addrs.Provider struct now instead of the string. This updates all our tests to use the new struct.
The addrs.AbsProviderConfig.Type changed from a string to an addrs.Provider, so we need to pull the Type out of that. Note that this returns something like "google", not "registry.terraform.io/providers/hashicorp/google" or whatever; the full address is probably what we want, so we don't get confused if two providers with the same name have module attribution turned on. I left a BUG comment to track that, as I'm not sure what's actually involved in that change.
Add a submodule to our test configs so we can check that this actually works across module boundaries the way it's intended to. These test configs don't actually do anything yet, our test code needs to be updated to actually check for them, and I'm not sure how to do that just yet, but I'll figure that out.
We returned the ProviderMeta schema from the client on every protocol request, but missed it on ReadResource for some reason. This rectifies that.
So thanks to review, we have a few changes all rolled up in one here, to account for a design change. First, the codebase has marched on while we wrote this, meaning we need to refer to providers as addrs.Provider, not as strings. This is for the best, but is a lot of replacing through the codebase. Second, some methods were missing from the provider-side GRPC implementations, so we added those. Third, it turns out that attaching the meta configs to resources in transform_provider wouldn't actually work, because resources aren't guaranteed to be in the same module as the provider config powering them. So we need to attach them in transform_attach_config_resource, not in transform_provider, so they'll be on the right side of the module boundary. Unfortunately, this means we're going to have to stop passing around the provider's provider_meta specifically, and are going to have to pass the entire block, because we don't have the provider data we need when we attach the config to filter out just what we need. We now separate them out in the Eval methods. This, unfortunately, also causes us issues, because we don't have that info already when we call EvalValidate, which means we're either going to need to pipe new info through to it, or we're not going to be able to validate the schema pre-plan-time. There's still a recommendation for an end-to-end test to do, but I'm hopeful.
Add provider meta tests to the builtin test provider.
https://gifs.paddy.io/crying.gif Need to attach the descendant config, not the root config.
Our rebase changed some of the types for provider addresses under us, so this fixes our tests to use the right types.
(force-pushed 2dba279 to ecc8114)
const (
	StringPlain StringKind = iota
	StringMarkdown
)
Why are we introducing StringMarkdown here? Why does the schema need to know how the string will be interpreted?
/cc @paultyng for this one.
For a language server integration, and for document generation for the registry (or anything else), we need the ability to store rich descriptions, or we would need to duplicate effort between web documentation and inline documentation, which seems problematic. This is the Language Server Protocol specification that this model was taken from: https://microsoft.github.io/language-server-protocol/specifications/specification-current/#markupContent
Allowing providers to store descriptions in markdown would alleviate the need to also store attribute, block, and resource descriptions in documentation markdown files.
Description     string
DescriptionKind StringKind

Deprecated      bool
The configschema is intended to describe the structure, rather than how the content is used. Adding fields has already proved troublesome; for example, we can't really use MinItems and MaxItems on nested blocks. I'd rather sort out necessary schema changes separately if they're not required for provider metas.
/cc @paultyng
I believe this is the only way to get them to the JSON schema dump, so the only alternative would be to plumb that path for schema information differently which may be undesirable.
approved, some minor whitespace comments
@@ -9,17 +9,20 @@
"attributes": {
  "ami": {
    "type": "string",
    "optional": true,
    "description_kind": "plain"
there is some weird whitespace here
},
"id": {
  "type": "string",
  "optional": true,
  "computed": true,
  "description_kind": "plain"
there is some weird whitespace here
    }
  }
},
"description_kind": "plain"
there is some weird whitespace here
Additions of Description, DescriptionKind, and Deprecated
Some additional fields were added to the protocol in this PR and plumbed through to the schema JSON dump, to bring it closer to parity with the Language Server Protocol and make things like document generation easier. These fields are not related to module attribution; they were added in this PR solely to minimize protocol version churn.