
Allow other plugins to modify ScalaPB types #1007

Closed
thesamet opened this issue Dec 16, 2020 · 33 comments

@thesamet

To implement scalapb/scalapb-validate#38, we want to enable other libraries or plugins to modify the types that ScalaPB will generate. Since protoc plugins have very limited ways to consume what a previous plugin produces (only insertion points are currently supported), we need to come up with our own mechanism to plug external type information into ScalaPB.

Requirements:

  1. ScalaPB should not know about specific plugins (such as validate or validate-cats).
  2. The solution should work with protoc directly, not just SBT, so users of other build tools can use it.
  3. Other Scala code generators that use ScalaPB's DescriptorImplicits should be able to see the updated types, so they generate valid code.
@thesamet

thesamet commented Dec 16, 2020

Solution 1: Preprocessors as sub-plugins

Add to ScalaPB an option to load and invoke a class that will preprocess a CodeGeneratorRequest. The preprocessor can arbitrarily modify the request; for the validate/cats use case, it would add custom type options based on PGV rules. To use it, a user would add a package-scoped file like this:

import "scalapb/scalapb.proto";

package com.mypackage;

option (scalapb.options) = {
  scope: PACKAGE
  preprocessors: ["com.thesamet.scalapb:scalapb-validate-cats:0.0.1"]    // [*]
};

[*] This would make ScalaPB's compilerplugin load the code from this artifact (fetching it through coursier if necessary), and expect to find a function CodeGeneratorRequest=>CodeGeneratorRequest that would return a mutated copy of the CodeGeneratorRequest to be used by ScalaPB and downstream plugins.
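
For illustration, such a preprocessor could look like the following sketch. The object name is hypothetical, and the discovery mechanism would be defined by ScalaPB's loader; only the function shape comes from the proposal above.

import com.google.protobuf.compiler.PluginProtos.CodeGeneratorRequest

// Hypothetical entry point loaded from the declared artifact; conforms to
// the CodeGeneratorRequest => CodeGeneratorRequest shape described above.
object ValidateCatsPreprocessor extends (CodeGeneratorRequest => CodeGeneratorRequest) {
  def apply(request: CodeGeneratorRequest): CodeGeneratorRequest = {
    val builder = request.toBuilder
    // For each FileDescriptorProto in the request, derive custom type options
    // from the pgv rules it carries and merge them into its field options.
    builder.build()
  }
}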

@bjaglin

bjaglin commented Dec 17, 2020

expect to find a function CodeGeneratorRequest=>CodeGeneratorRequest that would return a mutated copy of the CodeGeneratorRequest to be used by ScalaPB and downstream plugins.

3. Other Scala code generators that use ScalaPB's DescriptorImplicits should be able to see the updated types, so they generate valid code.

Since DescriptorImplicits takes CodeGeneratorRequest.allProtos, shouldn't the preprocessor scope be limited to Seq[FileDescriptor] => Seq[FileDescriptor] for the change to be non-breaking for other protoc-bridge generators customizing ScalaPB?

@thesamet

Good point. Another issue is that updating a given FileDescriptor is tricky because entities are cross-referenced (if we change a message, we need to update all other messages that rely on it to use the updated instance). I think for Solution 1, we should further constrain the signature to fit the use case more closely. Instead, we rely on the underlying FileDescriptorProto (not FileDescriptor), and the result type indicates the mutations: FileDescriptorProto=>UpdatedOptions
where UpdatedOptions is:

message UpdatedOptions {
  // maps the FQDN of an entity to new options to be merged into it
  map<string, MessageOptions> add_message_options = 1;
  map<string, FieldOptions> add_field_options = 2;
  // same for oneofs, enums, ...
}

The function would be called only for files that are in the scope of the preprocessor. DescriptorImplicits will merge the new options at query time (like it does for other auxiliary options).
A benefit of this approach is that both the inputs and outputs of the preprocessor are protos, so it can be classloaded easily, and the preprocessor's Scala/protobuf-java versions don't need to align with ScalaPB's. The actual interface could be Array[Byte] to avoid potential binary breakages down the line.
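
A minimal sketch of that byte-level contract, assuming a hypothetical trait name (UpdatedOptions being the proposed message above):

// Hypothetical interface the classloaded preprocessor would implement. Both
// sides exchange serialized protos, which is what keeps the preprocessor's
// Scala and protobuf-java versions independent of ScalaPB's.
trait Preprocessor {
  // Input: a serialized FileDescriptorProto for a file in the preprocessor's
  // scope. Output: a serialized UpdatedOptions with the options to merge.
  def process(fileDescriptorProtoBytes: Array[Byte]): Array[Byte]
}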

@bjaglin

bjaglin commented Dec 17, 2020

Preprocessors should be able to customize suggestedDependencies (https://github.com/scalapb/protoc-bridge/blob/0214bdd17e9e3034d421bc51f38a4b7f34934478/bridge/src/main/scala/protocbridge/ProtocCodeGenerator.scala#L7), to be able to use cats classes in generated code when scalapb-validate-cats is used, for example.

@bjaglin

bjaglin commented Dec 17, 2020

message UpdatedOptions {
  // maps the FQDN of an entity to new options to be merged into it
  map<string, MessageOptions> add_message_options = 1;
  map<string, FieldOptions> add_field_options = 2;
  // same for oneofs, enums, ...
}

FileOptions would also be helpful to add custom file-level imports in order to get the right TypeMappers in scope for third-party types.

@thesamet

thesamet commented Dec 17, 2020

Preprocessors should be able to customize suggestedDependencies

Since the dependency on the preprocessor is only discovered during the invocation of the ScalaPB source generator, the SBT project is already set up, so we can't modify its library dependencies. However, Solution 2 (below) doesn't have this limitation.

@thesamet

Solution 2: Preprocessors as protoc plugins communicating through protocbridge.

In this solution, preprocessors are protoc plugins (not ScalaPB sub-plugins) that are invoked directly by protoc (and would be listed in PB.targets in SBT). They may generate zero or more files and, in addition, augment their CodeGeneratorResponse output with an extension message that has a map<string, UpdatedOptions> field. This map maps proto files to the updated options they should have. When protoc-bridge sees this extension field, it remembers the updated options and associates them with the plugin that returned them (pgv-cats). Subsequent plugins running inside the same JVM can query the UpdatedOptions returned by previously run plugins by plugin name.
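
For illustration, such a preprocessor could be an ordinary protoc-bridge JVM generator. A sketch, assuming protoc-bridge's ProtocCodeGenerator trait (linked above); the object name, artifact coordinates, and the response extension are hypothetical:

import protocbridge.{Artifact, ProtocCodeGenerator}
import com.google.protobuf.compiler.PluginProtos.{CodeGeneratorRequest, CodeGeneratorResponse}

// Sketch of a preprocessor as a regular protoc-bridge JVM generator. The
// extension carrying UpdatedOptions is part of the proposal, not an existing API.
object PgvCatsPreprocessor extends ProtocCodeGenerator {
  override def run(requestBytes: Array[Byte]): Array[Byte] = {
    val request = CodeGeneratorRequest.parseFrom(requestBytes)
    val response = CodeGeneratorResponse.newBuilder()
    // Compute UpdatedOptions per file from `request` and attach them through
    // the proposed extension on CodeGeneratorResponse (not an existing API).
    response.build().toByteArray
  }

  // Appended to the user's libraryDependencies by sbt-protoc.
  override def suggestedDependencies: Seq[Artifact] =
    Seq(Artifact("org.typelevel", "cats-core", "2.3.1", crossVersion = true))
}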

The user's project would have a package-scoped option that instructs ScalaPB to look for those updated options:
import "scalapb/scalapb.proto";

package com.mypackage;

option (scalapb.options) = {
  scope: PACKAGE
  preprocessors: ["pgv-cats"] // [*]
};

In this solution, preprocessors are JVM protoc plugins: they are loaded into SBT and can have suggestedDependencies that are appended to the project's libraryDependencies.

[*] Tells ScalaPB to look for UpdatedOptions provided by a previously run preprocessor called pgv-cats, and only apply its updates to the current package.

This solution would work when protoc-bridge is available (at least for ScalaPBC, sbt-protoc, pants, mill), but not when invoking protoc directly, since there's no shared protoc-bridge. I expect this to provide sufficient coverage, but users who use protoc directly would be offered the following workaround: run protoc once with the preprocessor and a parameter to emit the UpdatedOptions into a binary file, then run protoc again and provide the UpdatedOptions file as a generator parameter for ScalaPB.

While this solution doesn't satisfy requirement (2) nicely, it's probably acceptable given the expected use. I prefer this solution over Solution 1 since there is no runtime classloading involved, and SBT or other build tools would take care of loading the preprocessor using existing mechanisms.

@bjaglin

bjaglin commented Dec 22, 2020

[*] Tells ScalaPB to look for UpdatedOptions provided by a previously run preprocessor called pgv-cats, and only apply its updates to the current package.

I guess ScalaPB would fail if it cannot find UpdatedOptions (even empty) for any declared preprocessor? That would help make sure that protoc plugins are executed in the right order (the most fragile part of this solution, I believe). Could we maybe keep version-free Maven coordinates as preprocessor names to have a more actionable error message in that case?

I expect this to provide sufficient coverage, but users who use protoc directly would be offered the following workaround: run protoc once with the preprocessor and a parameter to emit the UpdatedOptions into a binary file, then run protoc again and provide the UpdatedOptions file as a generator parameter for ScalaPB.

Instead of protoc-bridge storing UpdatedOptions in memory, could the file be stored/looked up in a deterministic location (/tmp/${preprocessor}/${hash of CodeGeneratorRequest with parameter set to empty})? This would race for two concurrent protoc runs that differ only in the preprocessor plugin's parameter or version, but I guess that's acceptable to achieve (2) more easily? Or are you concerned about that side effect?
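
For concreteness, a sketch of that deterministic key, assuming SHA-256 over the request with its parameter field cleared (names are illustrative):

import java.security.MessageDigest
import com.google.protobuf.compiler.PluginProtos.CodeGeneratorRequest

object RequestKeys {
  // Hash the request after clearing `parameter`, so two runs that differ only
  // in a plugin's parameter map to the same location; that is exactly the
  // race discussed above.
  def requestKey(request: CodeGeneratorRequest): String = {
    val normalized = request.toBuilder.clearParameter().build()
    MessageDigest.getInstance("SHA-256")
      .digest(normalized.toByteArray)
      .map("%02x".format(_))
      .mkString
  }
}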

In any case, the in-memory approach would work for the use case driving this, as we don't use ScalaPB as a raw protoc plugin. I was just challenging it because I think it is quite hard for people not familiar with the ecosystem to understand the protoc-bridge lifecycle, so it might make troubleshooting harder.

I prefer this solution over Solution 1 since there is no runtime classloading involved, and SBT or other build tools would take care of loading the preprocessor using existing mechanisms.

Agreed. The only tiny downside I see is that a preprocessor protoc plugin needs to be published for all supported platforms if we want it to work with protoc directly (but it looks like the GraalVM build target from ScalaPB is easily reusable).

@bjaglin

bjaglin commented Dec 22, 2020

Solution 3: Preprocessors as declarative transformations defined as package-level options

Thanks to new file/package-level ScalaPB options, ...

message ScalaPbOptions {
  // ...

  // reference packages to inherit preprocessors defined as package-level options
  repeated string preprocessors = 22;

  message FieldOptionsPreprocessor {
    required string if = 1; // e.g. "validate.rules"
    oneof textformat_value {
      string contains = 2; // e.g. "{ repeated: { min_items: 1 } }"
      string is = 3; // e.g. "{ string: { email: true } }"
      // maybe a not_contains could be useful?
    }
    optional scalapb.FieldOptions add_field_options = 4;
    optional scalapb.ScalaPbOptions add_file_options = 5; // https://github.com/scalapb/ScalaPB/issues/1007#issuecomment-747637804
  }
  // processors are evaluated in order, merging options along the way on the matching elements
  repeated FieldOptionsPreprocessor field_options_preprocessors = 23;
  // same for messages, oneofs, enums, ...
}

... custom transformations could be defined in a package.proto file embedded in scalapb-validate-cats-core ...

package scalapb.validate.cats;

import "scalapb/scalapb.proto";

option (scalapb.options) = {
  scope: PACKAGE,
  field_options_preprocessors: [
    {
      if: "validate.rules",
      contains: "{ repeated: { min_items: 1 } }",
      add_field_options: { type: "cats.data.NonEmptyList" },
      add_file_options: { import: "scalapb.validate.cats.NonEmptyListMapper" }
    },
    {
      if: "validate.rules",
      contains: "{ repeated: { unique: true } }",
      add_field_options: { type: "LinkedHashSet" }
    }
  ]
};

... that the user should import on the protoc include path (as well as the Java runtime for cats classes) and reference wherever needed via:

package com.mypackage;

import "scalapb/scalapb.proto";

option (scalapb.options) = {
  scope: PACKAGE
  preprocessors: ["scalapb.validate.cats"]
};

This is of course less powerful than Solutions 1 and 2, as transformations must be declared using a simple DSL, but it is simpler in that no codegen plugin is involved. I haven't gone through all PGV fields to check whether the mappings can be declarative, but I am hoping they can with simple contains/is predicates (provided we have the right ScalaPB options, obviously). This also has the advantage of allowing users to implement preprocessors within their own project/package.

Implementation-wise, parsing textformat seems easy, but I am unsure how robust/custom the runtime option parsing (the if field) would be. Implementing contains might be a bit tricky for deeply nested structures, but I guess shortcuts can be taken.
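
For what it's worth, a rough sketch of one possible structural contains over parsed messages, taking exactly such a shortcut for repeated fields (names are illustrative, not an actual implementation):

import com.google.protobuf.Message
import com.google.protobuf.Descriptors.FieldDescriptor
import scala.jdk.CollectionConverters._

object Matchers {
  // True when every field set in `pattern` is present in `actual` with a
  // matching value. Singular message fields recurse; scalars and repeated
  // fields are compared by plain equality, which is one such shortcut.
  def contains(actual: Message, pattern: Message): Boolean =
    pattern.getAllFields.asScala.forall { case (fd, expected) =>
      if (fd.getJavaType == FieldDescriptor.JavaType.MESSAGE && !fd.isRepeated)
        actual.hasField(fd) && contains(
          actual.getField(fd).asInstanceOf[Message],
          expected.asInstanceOf[Message]
        )
      else
        actual.getField(fd) == expected
    }
}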

@thesamet

thesamet commented Dec 22, 2020

For Solution 2

I guess ScalaPB would fail if it cannot find UpdatedOptions (even empty) for any declared preprocessor? Could we maybe keep version-free Maven coordinates as preprocessor names to have a more actionable error message in that case?

Yes to both questions. Failing would be the way to signal that the expected options were not found. Using Maven names as a convention sounds good.

I like the suggestion to communicate through the filesystem. It's also in line with the advice I got on the protobuf mailing list. I'd probably make the paths explicitly provided, and introduce some convenience in SBT to make it easy to use.

I have a bare-bones implementation of Solution 2 locally. It works well, but there's a chain of promises and futures that passes the side output between plugins, which could make it non-obvious to debug.

For Solution 3

It's worth considering, especially since it doesn't require a preprocessor plugin. I am a bit concerned that in practice there's going to be more logic required in the preprocessor than this mini-DSL would be able to support.

In terms of project execution, I am now relying on Solution 2 (available locally for me) to explore the deeper parts of the project (like non-total fields, import injection, etc.). I'll probably be able to better evaluate everything the mini-DSL would need to do then. Right now I am more inclined toward a variant of Solution 2 that uses external files rather than in-memory state, since it would require little to no modification to protoc-bridge.

@bjaglin

bjaglin commented Dec 23, 2020

For Solution 2

It's also inline with the advice I got on the protobuf mailing list.

One concern I have for both in-memory and file-based communication (which the answer on that mailing list does not address) is whether it's guaranteed that protoc plugin invocations are, and will remain, sequential and in order. Do you know?

@bjaglin

bjaglin commented Dec 23, 2020

For Solution 3

I am a bit concerned that in practice there's going to be more logic required in the preprocessor than this mini-DSL would be able to support.

Indeed, time will tell. This might be out of scope for the driving use case, but I can think of a few of our internal use cases where it would be useful to have locally declared mappings that easily and centrally derive ScalaPB options from certain custom options: a way to keep the protos annotated with only semantic custom options, without leaking ScalaPB sugar yet still leveraging it. A bit like auxiliary options, but without the ad-hoc part.

@thesamet

thesamet commented Dec 23, 2020

One concern I have for both in-memory and file-based communication (which the answer on that mailing list does not address) is whether it's guaranteed that protoc plugin invocations are, and will remain, sequential and in order. Do you know?

Yes, see https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/compiler/plugin.proto#L168-L169. However, their outputs are written to disk by protoc only at the very end. If we land on a solution where plugins communicate through the filesystem, the writing (and reading) would happen as a side effect in the plugin.

@bjaglin

bjaglin commented Dec 23, 2020

For Solution 3

I am a bit concerned that in practice there's going to be more logic required in the preprocessor than this mini-DSL would be able to support.

Apart from cats, we might in the future implement PGV mappings for refined. In that case, it would be very interesting to capture values for PGV options (and not just trigger on their presence/usage), to use them as literal types. That's not trivial with a protobuf-based DSL.

thesamet added a commit to scalapb/protoc-bridge that referenced this issue Dec 26, 2020
protoc-bridge will now create a directory for secondary outputs which
plugins can read and write to in order to exchange additional
information through a single protoc run.

The location of the secondary directory is provided to native protoc
plugins through an environment variable SCALAPB_SECONDARY_OUTPUT_DIR
passed through protoc.

Since JVM-based plugins run in the same JVM process as
protocbridge, they are not able to see this environment variable. For
them, a new message `ExtraEnv` is passed as an unknown option using the
ScalaPB field number (1020).

See scalapb/ScalaPB#1007
@thesamet

Solution 4 (currently implemented in preview1)

protoc-bridge creates a temporary directory before running protoc that plugins can read from and write to. protocbridge does not track access to the directory. The expectation is that each plugin creates files that match its class name. ScalaPB gets a new preprocess file-level option that makes it search for a file with the same name. The file contains AuxFieldOptions to apply to fields.

How do plugins know where the secondary output directory is? protocbridge passes it to protoc through an environment variable, SCALAPB_SECONDARY_OUTPUT_DIR, which is in turn passed to the native plugins; this is useful for direct protoc invocations. For JVM plugins, protocbridge appends an ExtraEnv message to the CodeGeneratorRequest, which the plugin can interpret as an unknown field to find the directory name. A helper called ExtraEnvParser is provided.
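
To illustrate the native-plugin side, a sketch of publishing a secondary output; the file-name convention (plugin class name) follows the description above, while the object name and payload are illustrative:

import java.nio.file.{Files, Path, Paths}

object SecondaryOutput {
  // `payload` stands in for the serialized options a preprocessor wants to
  // share (for scalapb-validate, options carrying AuxFieldOptions).
  def write(pluginClassName: String, payload: Array[Byte]): Path = {
    val dir = sys.env("SCALAPB_SECONDARY_OUTPUT_DIR") // set by protoc-bridge
    Files.write(Paths.get(dir, pluginClassName), payload)
  }
}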

@thesamet

@bjaglin: the preprocessor in scalapb-validate has a mini-DSL inspired by the one you proposed in Solution 3.

@bjaglin

bjaglin commented Jan 12, 2021

scalapb/scalapb-validate#38 (comment) is very exciting as it really bridges PGV to idiomatic, state-of-the-art Scala!

As already mentioned above, I wonder if this DSL couldn't/shouldn't be part of ScalaPB itself. Could https://github.com/scalapb/scalapb-validate/blob/4b49c274f5ab7bc7a67ab8db38499bb9c9fdfb26/core/src/main/protobuf/scalapb/validate.proto#L45 be made more generic while keeping type safety by using google.protobuf.FieldOptions (at the price of slightly more verbose syntax for scalapb/scalapb-validate#38 though)?

To give a concrete use case (distinct from scalapb/scalapb-validate#38), we are setting a sensitive custom boolean option on some of our gRPC request message fields (signaling/documenting the behavior for clients), which should result in obfuscation on the server side (and thus needs a custom ScalaPB type to customize serialization). Being able to declare the mapping sensitive -> custom type in a separate package-level file (like the scalapb-validate DSL) would allow us to elegantly keep the business protos free of server-side implementation details.

This is just a final thought on this issue, not a blocker to close it (nor scalapb/scalapb-validate#38).

@thesamet

I looked into a solution where the DSL is part of ScalaPB while investigating this issue. I quickly ran into some technical challenges since the types we match on (pgv) are unknown to ScalaPB and only loaded at runtime through what the user imports. It turns out that working with unknown/dynamic extension fields is a bit tricky and I wasn't able to find a way to get the syntax and type safety we are getting with the current implementation. However, now that we have #38 figured out, I'll be revisiting this idea to see if we can embed field transformations on arbitrary options in ScalaPB.
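
To give a flavor of the trickiness, a hedged sketch of reading an extension that is only known by field number at runtime, via the unknown field set (the helper name is hypothetical):

import com.google.protobuf.ByteString
import com.google.protobuf.DescriptorProtos.FieldOptions
import scala.jdk.CollectionConverters._

object UnknownExtensions {
  // Without the extension's compiled class on the classpath, its value
  // surfaces only as length-delimited bytes keyed by field number; the bytes
  // then have to be re-parsed against a descriptor discovered at runtime.
  def rawExtension(options: FieldOptions, fieldNumber: Int): Option[ByteString] =
    options.getUnknownFields.getField(fieldNumber).getLengthDelimitedList.asScala.headOption
}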

@thesamet

Good news: I believe I am close to getting the FieldTransformation DSL implemented in ScalaPB rather than in scalapb-validate. I was able to find the necessary tricks to work with unknown extensions. The benefit, besides simplification of the data flow, is that field transformations would be able to match on arbitrary custom options. If we go there, I will remove the field transformations from scalapb-validate so we have a single implementation. scalapb-validate will retain the capability to push cats-related field transformations through its secondary output. I'll check in the coming days if we can have it within the preview series, so we have a finalized API when we merge.

@bjaglin

bjaglin commented Jan 17, 2021

Following up here on the now-generic field transformation described in scalapb/scalapb-validate#38 (comment), as it's not blocking scalapb/scalapb-validate#38 itself, only further usage of that supporting feature.

A limitation/gotcha in the current implementation/design is that field transformations injected by preprocessors can only work if the field extensions used in the rules (when) added by those preprocessors are imported (directly or transitively) in the file that enables the preprocessor. That's the case for scalapb-validate, as any injected validate/validate.proto rule must be enabled via scalapb/validate.proto, which brings it in transitively. I think that's a fair thing to ask, but maybe

s"$currentFile: Could not find extension number $number for message ${m.toString()}"
could provide a more actionable error?

thesamet added a commit that referenced this issue Jan 18, 2021
Add compiler-plugin tests for field transformation injections

For #1007
@thesamet

Commit d443601 improves the error messages and adds unit tests in the compiler plugin for the scenario where injected transformations can't be resolved. I changed scalapb-validate to make this scenario impossible, as explained in scalapb/scalapb-validate#38 (comment).

@bjaglin

bjaglin commented Jan 19, 2021

I ran into what could be considered a limitation of the current DSL design (I am really giving it a hard time by pushing it as far as I can ahead of merging the preview, beyond the primary goal of this ticket!). It's impossible to set options that are not ScalaPB's:

optional FieldOptions set = 3;

In my case, I was interested in setting PGV options using field transformations (based on custom options in the when), which would then be picked up by scalapb-validate after the ScalaPB generator run. Is there a reason to restrict set to ScalaPB extensions only (instead of google.protobuf.FieldOptions, like the when)?

@thesamet

thesamet commented Jan 19, 2021

@bjaglin, interesting. If we do this, there's a caveat that could be quite confusing: the original PGV generator wouldn't be able to see the transformed options. Not sure if that's a reason not to do it, though.

@bjaglin

bjaglin commented Jan 19, 2021

I just realized

optional FieldOptions options = 2;
would be a problem since that's where/how the set ends up, no?

@thesamet

thesamet commented Jan 19, 2021

Yes, I just thought of the same. This would require some rework of the internal plumbing that field transformations rely on. In order to let #38 reach completion with the suggested interface so we can build this later on, I intend to change

optional FieldOptions set = 3;
to google.protobuf.FieldOptions, and make the implementation verify that only ScalaPB options are set.
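
A sketch of what that verification might look like, assuming ScalaPB's extension number (1020, as used elsewhere in this thread) and that unrecognized extensions surface as unknown fields; names are illustrative:

import com.google.protobuf.DescriptorProtos.FieldOptions
import scala.jdk.CollectionConverters._

object SetValidation {
  // scalapb.proto's extension number on google.protobuf.FieldOptions.
  private val ScalaPbExtensionNumber = 1020

  // Accept a parsed `set` payload only if every extension it carries is
  // ScalaPB's, whether it was resolved at parse time or left unknown.
  def onlyScalaPbOptions(set: FieldOptions): Boolean = {
    val knownOk = set.getAllFields.keySet.asScala
      .forall(fd => !fd.isExtension || fd.getNumber == ScalaPbExtensionNumber)
    val unknownOk = set.getUnknownFields.asMap.keySet.asScala
      .forall(_.intValue == ScalaPbExtensionNumber)
    knownOk && unknownOk
  }
}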

@bjaglin

bjaglin commented Jan 19, 2021

That would be yet another API change, but I am wondering if the when (and thus the scope of resolution of the $() path expansion) could be a FieldDescriptorProto instead of just FieldOptions? A use case I just ran into is extracting/matching on a field's primitive type to get a precise custom Scala type in set.

@thesamet

thesamet commented Jan 19, 2021

The use case makes sense. I'll look into getting it in. I wonder if the levels of nesting needed to match on a PGV field (for example, {options: { [validate.rules] { int32: {gt: 0}}}}) would be cumbersome, and whether we should have syntactic sugar to help with this. Thoughts on this are welcome.

@bjaglin

bjaglin commented Jan 19, 2021

I initially thought of suggesting another "targeting" field specifically for type, leaving when at the current level, but since options is just one level deep and not an array, it's really easy to navigate FieldDescriptorProto as a whole, IMHO. As an IntelliJ user with https://github.com/jvolkman/intellij-protobuf-editor (which has perfect TextFormat autocompletion/formatting support), it takes me seconds to write field transformations (no matter at which level when sits; I tried it to see). So I would say that we don't need sugar; it would add complexity just to save a few keystrokes.

thesamet added a commit that referenced this issue Jan 20, 2021
Now field_transformations.when matches on FieldDescriptor
and field_transformation.set accepts FieldOptions; however,
it only allows setting ScalaPB options for the time being.

For #1007
@bjaglin

bjaglin commented Jan 20, 2021

Some more feedback/thoughts on the API as I continue using the transformation DSL:

  • optional google.protobuf.FieldOptions set = 3;
    Could this be required?
  • optional google.protobuf.FieldDescriptorProto when = 1;
    Could this be required? Having a transformation trigger everywhere is probably not desired (and if the user really wants that, I believe an empty FieldDescriptorProto can be used).
  • enum MatchType {
      CONTAINS = 0;
      EXACT = 1;
      PRESENCE = 2;
    }
    (Cosmetic) This will most likely be reused for other transformations (enums, oneofs, etc.). Would it be more correct to declare it outside FieldTransformation, as a top-level enum instead?

thesamet added a commit that referenced this issue Jan 20, 2021
We anticipate it will be shared with other transformation types.

See #1007
@thesamet

thesamet commented Jan 20, 2021

I moved MatchType to be a top-level enum so it can be shared with future transformation types. I actually went ahead and added logic that verifies that set and when are defined, but decided to roll back that part of the change for the time being. The reasoning is similar to the "required is forever" advice in the language guide: I anticipate that in the future we may have alternative fields to when that match differently, and maybe even alternatives to set. If when and set were required (or threw an error if undefined), all ScalaPB-based plugins would have to be upgraded in lockstep to handle the schema change.

@thesamet

Note that the MatchType enum move hasn't made it into preview14, but it doesn't impact the DSL syntax in any way.

@bjaglin

bjaglin commented Jan 21, 2021

Specifically, I anticipate that in the future we may have alternative fields to when that match differently, and maybe even alternatives to set.

You are right. I already thought of a potential use case for an unset, actually: scalapb-validate's built-in transformations could reset PGV rules that are already captured/enforced, to avoid further validation in the Validator. Given the mergeFrom semantics, it's impossible to reset a field via set, so we would need a separate action for that (through a FieldMask, or maybe logic similar to the PRESENCE match type to designate what should be reset).

thesamet added a commit that referenced this issue Jan 23, 2021
Since transformations are a ScalaPB feature that is not constrained to
validation, their documentation has been factored out to a self-contained
page.

For #1007
@thesamet

Closing this issue as the general approach for transformations has been established, and implemented specifically for fields. Transformations for other entities will be worked on through future tickets.
