
TableMetadataBuilder #587

Merged
merged 19 commits into apache:main on Nov 26, 2024

Conversation

@c-thiel (Collaborator) commented Aug 26, 2024

This PR is now ready for first reviews.
Some Remarks:

  • In add_sort_order and add_partition_spec the Java code re-builds the added sort order against the current schema by matching column names. This implementation currently does not do this. Adding this feature would require PartitionSpec (bound) to store the schema it was bound against (probably a good idea anyway) and splitting SortOrder into bound and unbound, where the bound SortOrder also stores the schema it was bound against. Instead, this implementation assumes that provided sort orders and partition specs are valid for the current schema. Compatibility with the current schema is tested.
  • In contrast to java, our add_schema method does not require a new_last_column_id argument. In java there is a todo to achieve the same. I put my reasoning in a comment in the code, feel free to comment on it.
  • I have a few specifics that would be good to discuss with a Java Maintainer. It's not ready to be merged yet - please wait for my OK too :)
  • I had to change a few tests that used the builder under the hood - mostly due to the changed new() behaviour that now re-assigns field ids to start from 0. Some tests started from 1 before. Re-assigning the ids, just like in Java, ensures that fresh metadata always has fresh and correct ids even if they are created manually or re-used from another metadata.
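The bound/unbound split described in the first bullet could be sketched roughly as below. All types are simplified, hypothetical stand-ins, not the iceberg-rust definitions; the point is that binding validates source ids and keeps the schema around for a later name-based re-bind:

```rust
#[derive(Debug, Clone)]
struct Schema {
    schema_id: i32,
    field_ids: Vec<i32>,
}

#[derive(Debug, Clone)]
struct UnboundSortField {
    source_id: i32,
}

#[derive(Debug, Clone)]
struct UnboundSortOrder {
    fields: Vec<UnboundSortField>,
}

#[derive(Debug)]
struct BoundSortOrder {
    fields: Vec<UnboundSortField>,
    // Kept so a later add_schema could re-bind by column name.
    schema: Schema,
}

impl UnboundSortOrder {
    // Binding validates every source_id against the schema and stores
    // the schema alongside the order.
    fn bind(self, schema: Schema) -> Result<BoundSortOrder, String> {
        for f in &self.fields {
            if !schema.field_ids.contains(&f.source_id) {
                return Err(format!("source_id {} not found in schema", f.source_id));
            }
        }
        Ok(BoundSortOrder { fields: self.fields, schema })
    }
}
```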

@c-thiel c-thiel marked this pull request as draft August 26, 2024 18:48
@c-thiel c-thiel force-pushed the ft/table-metadata-builder branch from 6afeeb0 to 4193131 Compare August 30, 2024 10:48
@c-thiel (Collaborator, Author) commented Aug 30, 2024

ToDos:

  • First PR as discussion basis
  • Reserved Property handling
  • Finish discussion about SortOrder and PartitionSpec re-binding
  • Review from 1-2 Java Maintainers
  • Review from 1-2 Rust Maintainers

@c-thiel c-thiel marked this pull request as ready for review August 30, 2024 11:33
@c-thiel (Collaborator, Author) commented Sep 3, 2024

Fixes #232

@c-thiel c-thiel changed the title WIP: TableMetadataBuilder TableMetadataBuilder Sep 3, 2024
@liurenjie1024 (Contributor):

Thanks @c-thiel for this PR, I've skimmed through it and it looks great to me. However, this PR is too huge to review (3k lines); would you mind splitting it into smaller ones? For example, we could add one PR for the methods involved in one TableUpdate action, with enough tests for it. Also, it might be better to put the refactoring of TableMetadataBuilder into a standalone module in its own PR?

@c-thiel (Collaborator, Author) commented Sep 6, 2024

Thanks for your feedback @liurenjie1024. This isn't really a refactoring of the builder, it's more a complete rewrite. The old builder allowed creating corrupt metadata in various ways. Splitting it up by TableUpdate would not be straightforward - many tests, also in other modules, depend on building metadata. Creating metadata now always goes through builder methods - it's a different architecture that requires basic support for all methods from the beginning just to keep tests running.

I would currently prefer to keep it as a larger block mainly because:

  • I don't have much time currently and it's going to be more effort
  • We would need to write auxiliary code to provide non-checked methods so that crate tests don't fail
  • The total timespan of merging 10 or so PRs is expected to be much larger than putting in ~2 full days of effort from a Rust Maintainer and a Java Maintainer to review it as a block.
  • Patterns are repetitive and can be reviewed together in many cases
  • A lot of it is tests - the core builder is 1145 lines. It's long - but doable :)

We now have a vision of what it could look like in the end. Before putting any more effort in, we should answer the following questions:

  • Is the overall structure OK or should we head in a different direction?
  • My first point of the opening statement: Do we re-write our SortOrder and add the schema to PartitionSpec so that we can match on names like Java does or not?
  • My second point from the opening statement: How do we handle new_last_column_id

Those points might change the overall design quite a bit and might require a re-write of SortOrder first (split to bound and unbound).

After we answered those questions, and we still think splitting makes sense, I can try to find time to build stacked-PRs. Maybe just splitting normalization / validation in table_metadata.rs from the actual builder would be a leaner option than splitting every single TableUpdate?

@c-thiel (Collaborator, Author) commented Sep 8, 2024

@liurenjie1024 I tried to cut a few things out - but not along the lines of TableUpdate. I hope that's OK?

  1. Feat: Normalize TableMetadata #611
  2. feat: partition compatibility #612
  3. feat: SortOrder methods should take schema ref if possible #613
  4. fix: Less Panics for Snapshot timestamps #614
  5. feat: Reassign field ids for schema #615

After they are all merged, I'll rebase this PR for the actual builder.

@liurenjie1024 (Contributor):

Hi @c-thiel, sorry for the late reply.

Is the overall structure OK or should we head in a different direction?

I've gone through the new builder and I think your design is the right direction.

My first point of the opening statement: Do we re-write our SortOrder and add the schema to PartitionSpec so that we can match on names like Java does or not?

To be honest, I don't quite understand the use case. We can ask for background on this in the dev channel, but I think this is not a blocker for this PR; we can always add it later.

My second point from the opening statement: How do we handle new_last_column_id

I've taken a look at the comments of these two PRs: apache/iceberg#6701 apache/iceberg#7445

And I think the reasonable behavior is that last_column_id is optional, and we calculate it from the highest field id when missing. Allowing the user to pass last_column_id should be supported to stay compatible with current behavior, but that is a feature which could be added later.
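The fallback described here fits in a few lines. The function name and shape below are illustrative assumptions, not the actual iceberg-rust API:

```rust
// Derive last_column_id when the caller does not supply one:
// take the highest field id present in the schema, or 0 for an
// empty schema.
fn derive_last_column_id(field_ids: &[i32]) -> i32 {
    field_ids.iter().copied().max().unwrap_or(0)
}
```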

@liurenjie1024 (Contributor):

Those points might change the overall design quite a bit and might require a re-write of SortOrder first (split to bound and unbound).

I agree that this should be required, as I mentioned in #550

@liurenjie1024 (Contributor):

After we answered those questions, and we still think splitting makes sense, I can try to find time to build stacked-PRs. Maybe just splitting normalization / validation in table_metadata.rs from the actual builder would be a leaner option than splitting every single TableUpdate?

That sounds reasonable to me. If one PR per table update is too much of a burden, could we split them by component, for example sort order, partition spec, and schema changes?

@c-thiel (Collaborator, Author) commented Sep 9, 2024

@liurenjie1024 thanks for the Feedback!

My first point of the opening statement: Do we re-write our SortOrder and add the schema to PartitionSpec so that we can match on names like Java does or not?

To be honest, I don't quite understand the use case. We can ask for background of this in dev channel, but I think this is not a blocker of this pr, we can always add this later.

The problem in changing it later is that it changes the semantic of the function. Right now we expect source_id to match the current_schema() (which is also the reason why I expose it during build). Java doesn't do this, instead, it looks up the name by id in the schema bound to a SortOrder or PartitionSpec, and then searches for the same name in the new schema to use it.

In my opinion ids are much cleaner than names (we might have dropped and re-added a column with the same name in the meantime), so I am OK with going forward. However, moving over to Java semantics will require new endpoints (i.e. add_migrate_partition_spec or so) that take a bound partition spec, in contrast to the unbound spec we currently pass in.
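The Java-style name matching described above can be sketched as follows, using plain maps as stand-ins for the spec's bound schema and the new schema (the function name is hypothetical):

```rust
use std::collections::HashMap;

// Resolve the old source_id to a column name via the schema the spec
// was bound against, then look that name up in the new schema.
// Returns None if the name no longer exists.
fn rebind_source_id(
    old_source_id: i32,
    old_schema: &HashMap<i32, String>, // field id -> name in the bound schema
    new_schema: &HashMap<String, i32>, // name -> field id in the new schema
) -> Option<i32> {
    let name = old_schema.get(&old_source_id)?;
    new_schema.get(name).copied()
}
```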

Give me a thumbs up if that's OK for you. I'll also open a discussion in the dev channel to get some more opinions.

@c-thiel (Collaborator, Author) commented Sep 9, 2024

My second point from the opening statement: How do we handle new_last_column_id

I've taken a look at the comments of these two PRs: apache/iceberg#6701 apache/iceberg#7445

And I think the reasonable behavior is that last_column_id is optional, and we calculate it from the highest field id when missing. Allowing the user to pass last_column_id should be supported to stay compatible with current behavior, but that is a feature which could be added later.

I don't think we should add the argument, to be honest. My reasoning is as follows:
If specified, it could be a way to artificially increase last_assigned_field_id of a TableMetadata. I can't see any motivation for that. It just adds a source of confusion about what to specify here - and what to do if it's wrong.
The only useful thing to do with it is to check for outdated TableMetadata at the time of constructing the Schema.
I added this check here:
https://github.com/apache/iceberg-rust/pull/587/files#diff-04f26c83b3c614be6f6d6cfb6c4cefef9e01ec2d31395ac487cdcdff2dbae729R442-R451

Maybe @nastra or @Fokko could add some comments on the intention of that parameter?

@Xuanwo (Member) commented Sep 9, 2024

I have reviewed most PRs that I am confident can be merged. The only one left is #615, for which I need more input.

@c-thiel (Collaborator, Author) commented Nov 7, 2024

@Xuanwo, @liurenjie1024 this PR is ready for another round of review. It's now rebased on the 6 PRs we merged during the last months. The core logic is ~1100 lines of code, including quite a bit of comments.
It's not very straightforward to split it up further, as even the new function already requires a lot of the overall logic.

@Fokko Fokko left a comment (Contributor)

@c-thiel I left a few comments, and suggestions for improvement, LMKWYT. Apart from that it looks great and good to go 👍

return Ok(self);
}

// ToDo Discuss: Java builds a fresh spec here:
Contributor:

We just do this once; after that the schema is evolved using UpdateSchema, which is implemented by SchemaUpdate. There you pass in the last-column-id, and new fields will increment from that.

Collaborator Author:

That is actually a very important point, and I think we should fix Rust.
Currently in Rust we use the SchemaBuilder for everything.
To get to it, we use an existing Schema and its into_builder(self) method.

I believe we should modify this to into_builder(self, last_column_id: Option<i64>) so that users are forced to think about the last_column_id. Most likely they will want to use the last_column_id of the metadata they got the Schema from.

@Fokko (Contributor) commented Nov 11, 2024

I'm leaning toward the other way, where the last-field-id is tracked by iceberg-rust itself and not exposed to the user. The last-field-id is a monotonically increasing ID that keeps track of which field-IDs are already being used.

Collaborator Author:

If we do it completely internally, then we should probably disallow, or at least discourage, into_builder on Schema.

I am unsure what an alternative API should look like. @liurenjie1024, some input would be very helpful.
My thoughts:

  1. Introduce TableMetadata.update_schema that returns the current SchemaBuilder, however with a fixed last-column-id (set to the TableMetadata value). We document that add_schema should only be used for schemas modified in this way to ensure that last_column_id is correct.
  2. If documentation is not enough, we could Introduce a new type SchemaUpdate that can only be created by calling TableMetadata.update_schema. It would hold the TableMetadataBuilder internally and have methods like add_field or remove_field. Upon build, it would return the TableMetadataBuilder.
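Option 2 could be sketched like this. The stub types below are hypothetical and only illustrate how pinning last_column_id inside a metadata-owned SchemaUpdate would prevent callers from passing a stale value:

```rust
struct TableMetadataBuilderStub {
    last_column_id: i32,
    field_names: Vec<String>,
}

struct SchemaUpdate {
    builder: TableMetadataBuilderStub,
}

impl TableMetadataBuilderStub {
    // The only way to obtain a SchemaUpdate: last_column_id stays
    // internal and is never supplied by the caller.
    fn update_schema(self) -> SchemaUpdate {
        SchemaUpdate { builder: self }
    }
}

impl SchemaUpdate {
    // New fields always increment from the tracked last_column_id.
    fn add_field(mut self, name: &str) -> Self {
        self.builder.last_column_id += 1;
        self.builder.field_names.push(name.to_string());
        self
    }

    // Hand the builder back so the metadata build can continue.
    fn build(self) -> TableMetadataBuilderStub {
        self.builder
    }
}
```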

Contributor:

When adding a word of caution to the method, I'm fine with keeping it public for now.

I know that this is quite late in the process here, but I think it might be complementary to the builder we're seeing here. For PyIceberg we've taken a different approach, where we apply the MetadataUpdates (AddSchema, SetCurrentSchema, etc) to the metadata, rather than having a builder where you can modify the TableMetadata. Code can be found here: https://github.com/apache/iceberg-python/blob/60800d88c7e44fe2ed58a25f1b0fcd5927156adf/pyiceberg/table/update/__init__.py#L215-L494

Collaborator Author:

The TableUpdate is currently only part of the rest catalog - not of the core spec.
I think we should keep them public unless we switch to the python approach.

I added a word of warning that we do not check compatibility. Let me know if this is sufficient.
We should extend it with a link to the new method that is implemented in #697 .

Contributor:

The TableUpdate is currently only part of the rest catalog - not of the core spec.

Just to mention that it's part of the core crate in Rust:

pub enum TableUpdate {

I think we should keep them public unless we switch to the python approach.

In fact, the original design followed the java/python approach, see

pub struct Transaction<'a> {

I'm hesitant to make it public, as making it crate-private in the future would be a breaking change. This method would be the foundation for implementing the update-schema transaction API.

Collaborator Author:

@liurenjie1024 you are right, I thought it was part of rest.

I am still a bit hesitant about making it private. Let me try to summarize my points. None of these are blockers; I just want to take them into consideration.

  1. I think it's quite useful in tests - good evidence is the rust crate itself, where we use it in a few places. With Lakekeeper we are also using it for tests. It's handy to not require a TableUpdate wrapper around the schema that we want to add.
  2. There are features that we would need to expose in other ways:
    a) applying multiple TableUpdates to a TableMetadata in a single shot and obtaining the TableMetadataBuildResult, including changes and expired_metadata_logs. A Transaction isn't suitable for this: a Transaction works on a Table, not TableMetadata. TableMetadata is much lighter, and we as a catalog never actually initialize Table - and we don't want to initialize it just to mutate some metadata. Transaction also calls the Catalog trait in the end instead of applying the updates, so it does not expose the metadata-mutating functionality for us.
    This could be solved by adding pub struct TableUpdates(Vec<TableUpdate>) or so and adding an apply method to it. There are other options as well of course. Each of them would require another public struct, so we might also stick with the builder.
    b) Initializing TableMetadata conveniently, including all fields. Currently the only way to do this is via the builder. I don't think it would be ergonomic to have a basic new() method on TableMetadata and require users to use a completely different mechanism, such as TableUpdate::apply, to go further (i.e. adding a snapshot or setting the UUID). TableCreation doesn't offer this either in its current state.
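The TableUpdates idea from point 2a might look roughly like this. The enum variants and metadata fields are simplified stand-ins for illustration, not the iceberg-rust definitions:

```rust
#[derive(Debug, Clone)]
enum TableUpdate {
    SetLocation(String),
    SetUuid(String),
}

#[derive(Debug, Clone, Default)]
struct TableMetadata {
    location: String,
    uuid: String,
}

// A batch of updates that can be applied to metadata in one shot,
// without going through a Transaction or a Catalog.
struct TableUpdates(Vec<TableUpdate>);

impl TableUpdates {
    // Apply all updates in order and return the new metadata.
    fn apply(self, mut metadata: TableMetadata) -> TableMetadata {
        for update in self.0 {
            match update {
                TableUpdate::SetLocation(loc) => metadata.location = loc,
                TableUpdate::SetUuid(id) => metadata.uuid = id,
            }
        }
        metadata
    }
}
```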

My baseline is that mutating TableMetadata on its own is something that is valuable to external crates, such as our catalog. So we should offer a public, ergonomic interface for it. I believe the builder offers this interface; Transaction and TableUpdate on their own do not.

Let me know what you think!

Contributor:

I think it's reasonable to keep them public for the reasons you mentioned. TableMetadataBuilder doesn't need to do all validity checks for inputs; those are the responsibility of the transaction api.

@Fokko Fokko requested review from liurenjie1024 and Xuanwo November 8, 2024 13:24
Fokko added a commit to Fokko/iceberg that referenced this pull request Nov 11, 2024
I've added this to the spec a while ago:

apache#7445

But I think this was a mistake, and we should not expose this
to the public APIs, as it is much better to track this internally.

I noticed this while reviewing apache/iceberg-rust#587

Removing this as part of the APIs in Java, and the Open-API
update makes it much more resilient, and doesn't require the
clients to compute this value.
Fokko added a commit to Fokko/iceberg that referenced this pull request Nov 11, 2024
Fokko added a commit to Fokko/iceberg that referenced this pull request Nov 15, 2024
@liurenjie1024 liurenjie1024 left a comment (Contributor)

Finished second round of review. There are some files I haven't checked yet; they need another review.

Comment on lines 192 to 206
pub fn current_schema(&self) -> &SchemaRef {
self.metadata.current_schema()
}

/// Get the current last column id
#[inline]
pub fn last_column_id(&self) -> i32 {
self.metadata.last_column_id
}

/// Get the current last updated timestamp
#[inline]
pub fn last_updated_ms(&self) -> i64 {
self.metadata.last_updated_ms
}
Contributor:

I think we should make all three of these methods private.

Collaborator Author:

See my comment here: #587 (comment)

Collaborator Author:

Removed

/// # Errors
/// - The ref is unknown.
/// - Any of the preconditions of `self.add_snapshot` are not met.
pub fn append_snapshot(self, snapshot: Snapshot, ref_name: Option<&str>) -> Result<Self> {
Contributor:

  1. In fact, that method is a wrong design to me. The implementation lacks some necessary checks, and it's inconsistent with other parts; e.g. TableMetadata is designed to be immutable, and all modifications should go through TableMetadataBuilder. I think we should deprecate that api and remove it in the future.
  2. Yes, but it's weird that it has a branch argument. The reason I suggest this is that I want to keep the api consistent with the java implementation.
  3. Similar to 2, I want to keep the api name consistent with java. And you are right, the name could be either branch or tag. I'm ok with the argument name, but I prefer to have the function name consistent with the java api.


/// Append a snapshot to the specified branch.
/// If branch is not specified, the snapshot is appended to the main branch.
/// The `ref` must already exist. Retention settings from the `ref` are re-used.
Contributor:

I think in iceberg's terms a tag is a special branch, while ref typically means SnapshotRef, which includes more things like retention time, etc. I've updated it to "branch or tag", what do you think?

self.metadata.snapshot_log.clear();
}

if self.metadata.refs.remove(ref_name).is_some() || ref_name == MAIN_BRANCH {
Contributor:

I agree that if ref_name == MAIN_BRANCH, we have changed it. My assumption is that we always have MAIN_BRANCH since it's the default branch? Please correct me if I'm wrong.

return Ok(self);
}

// ToDo Discuss: Java builds a fresh spec here:
Contributor:

  1. I agree that we should keep add_schema for now.
  2. I agree that we should implement the issue proposed by @Fokko, which will be part of our table transaction api.
  3. I don't think we should discourage the use of add_schema, since they have different responsibilities. TableMetadataBuilder does some necessary checks to avoid broken table metadata, but it doesn't guarantee validity. We should be careful with our transaction api, which should guarantee to produce valid table metadata.

In fact, I'm thinking about making all TableMetadataBuilder methods crate private. Java makes them public since Java's access modifiers are limited and don't support complex path visibility like Rust. But I think these methods should only be called by the transaction api or by applying TableUpdates. What do you think?

@c-thiel (Collaborator, Author) commented Nov 15, 2024

@liurenjie1024 for set_branch_snapshot I was missing an easy way to determine the type of a ref.
I introduced an enum ReferenceType, but if you don't like it, I can also just implement is_branch for SnapshotReference.
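The is_branch alternative could be as small as the sketch below. The retention enum is modeled loosely on Iceberg's branch/tag split and is a stand-in, not the actual iceberg-rust type:

```rust
enum SnapshotRetention {
    Branch { min_snapshots_to_keep: Option<i32> },
    Tag { max_ref_age_ms: Option<i64> },
}

struct SnapshotReference {
    snapshot_id: i64,
    retention: SnapshotRetention,
}

impl SnapshotReference {
    // A ref is a branch exactly when it carries branch retention.
    fn is_branch(&self) -> bool {
        matches!(self.retention, SnapshotRetention::Branch { .. })
    }
}
```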

@c-thiel (Collaborator, Author) commented Nov 16, 2024

@liurenjie1024 for set_branch_snapshot I was missing an easy way to determine the type of a ref. I introduced an enum ReferenceType, but if you don't like it, I can also just implement is_branch for SnapshotReference.

Slept on it and ReferenceType felt wrong - removed it.

/// Remove snapshots by its ids from the table metadata.
/// Does nothing if a snapshot id is not present.
/// Keeps as changes only the snapshots that were actually removed.
pub fn remove_snapshots(mut self, snapshot_ids: &[i64]) -> Self {
Collaborator Author:

Currently, when removing snapshots, we might leave parent_snapshot_ids of other snapshots pointing to non-existing snapshots. This follows the behavior in Java.

Is this desirable? Or would it be more correct to set them to None, as there is no parent available anymore?
@Fokko

Contributor:

@c-thiel I think it is more correct to set it to None. I'll follow up on the Java side to get this fixed.
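The behavior agreed on here, clearing parent ids that would dangle after removal, could be sketched like this with simplified stand-in types:

```rust
use std::collections::HashSet;

struct Snapshot {
    snapshot_id: i64,
    parent_snapshot_id: Option<i64>,
}

// Remove the given snapshot ids, then set parent_snapshot_id to None
// wherever the parent no longer exists.
fn remove_snapshots(snapshots: &mut Vec<Snapshot>, ids: &[i64]) {
    let remove: HashSet<i64> = ids.iter().copied().collect();
    snapshots.retain(|s| !remove.contains(&s.snapshot_id));
    let kept: HashSet<i64> = snapshots.iter().map(|s| s.snapshot_id).collect();
    for s in snapshots.iter_mut() {
        if let Some(parent) = s.parent_snapshot_id {
            if !kept.contains(&parent) {
                s.parent_snapshot_id = None;
            }
        }
    }
}
```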

@Fokko Fokko mentioned this pull request Nov 18, 2024
/// # Errors
/// - The ref is unknown.
/// - Any of the preconditions of `self.add_snapshot` are not met.
pub fn append_snapshot(self, snapshot: Snapshot, ref_name: Option<&str>) -> Result<Self> {
Contributor:

Sorry for the unclear wording; I meant deprecating the append_snapshot method on TableMetadata and planning to remove it in a future release. I think adding the #[deprecated] attribute would be enough, since cargo will warn about it. Also, I think we should remove this method from TableMetadataBuilder?

@@ -912,14 +950,14 @@ mod tests {
#[test]
fn test_check_last_assigned_partition_id() {
let metadata = metadata();

println!("{:?}", metadata.last_partition_id);
Contributor:

Remove this debug statement?

Collaborator Author:

Done

@@ -376,6 +375,24 @@ impl Schema {
pub fn accessor_by_field_id(&self, field_id: i32) -> Option<Arc<StructAccessor>> {
self.field_id_to_accessor.get(&field_id).cloned()
}

/// Check if this schema is identical to another schema semantically - excluding schema id.
pub(crate) fn is_same_schema(&self, other: &SchemaRef) -> bool {
Contributor:

Why not implement the PartialEq and Eq traits?

Collaborator Author:

I thought about this as well, but opted for a method with documentation.
This method excludes the schema_id, which I can describe here in the docstring. With Eq I would expect the schema_id to be equal too - especially because it's used in tests.
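A minimal sketch of an id-excluding comparison like is_same_schema, with a stand-in Schema type rather than the real one:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Field {
    id: i32,
    name: String,
}

struct Schema {
    schema_id: i32,
    fields: Vec<Field>,
}

impl Schema {
    // Semantic equality that deliberately ignores schema_id, unlike a
    // derived PartialEq which would compare it too.
    fn is_same_schema(&self, other: &Schema) -> bool {
        self.fields == other.fields
    }
}
```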

}
}

impl From<TableMetadataBuildResult> for TableMetadata {
Contributor:

Why do we need this? The metadata field is already exposed as a public field.

Collaborator Author:

Just for convenience - should I remove it?

partition_spec,
sort_order.unwrap_or(SortOrder::unsorted_order()),
location,
FormatVersion::V1,
Contributor:

In Java, we create V2 tables by default.

Collaborator Author:

We do now too :)

Fokko added a commit to apache/iceberg that referenced this pull request Nov 25, 2024
* Core,Open-API: Don't expose the `last-column-id`

Okay, I've added this to the spec a while ago:

#7445

But I think this was a mistake, and we should not expose this
to the public APIs, as it is much better to track this internally.

I noticed this while reviewing apache/iceberg-rust#587

Removing this as part of the APIs in Java, and the Open-API
update makes it much more resilient, and doesn't require the
clients to compute this value. For example, when there are two conflicting
schema changes, the last-column-id must be recomputed correctly when doing
the retry operation.

* Update the tests as well

* Add `deprecation` flag

* Wording

Co-authored-by: Eduard Tudenhoefner <[email protected]>

* Wording

Co-authored-by: Eduard Tudenhoefner <[email protected]>

* Wording

* Thanks Ryan!

* Remove `LOG`

---------

Co-authored-by: Eduard Tudenhoefner <[email protected]>
@liurenjie1024 liurenjie1024 left a comment (Contributor)

Thanks @c-thiel for this great pr, LGTM!

@liurenjie1024 liurenjie1024 merged commit 1138364 into apache:main Nov 26, 2024
16 checks passed
shaeqahmed pushed a commit to matanolabs/iceberg-rust that referenced this pull request Dec 9, 2024
* Squash builder

* Address comments

* Address comments

* Match on FormatVersion to fail for V3

* Fix examples

* Fix tests

* Address comments

* Address comments

* Update crates/iceberg/src/spec/table_metadata_builder.rs

Co-authored-by: Renjie Liu <[email protected]>

* Remove ReferenceType

* Fix import

* Remove current_schema and last_updated_ms accessors

* Ensure main branch is not removed

* Address comments

* Fix tests

* Do not ensure ensure_main_branch_not_removed

* set_branch_snapshot create branch if not exists

---------

Co-authored-by: Renjie Liu <[email protected]>
zachdisc pushed a commit to zachdisc/iceberg that referenced this pull request Dec 23, 2024