feat: allow blob in write_fragments
#3235
base: main
Conversation
Force-pushed from 71c464c to e0784b9.
@westonpace Could you review it, please? I'm wondering if the change in
Force-pushed from 5c6da44 to 56cda28.
Thanks for working on this! Overall I think this looks like it is going in the right direction.

I have a few cleanup suggestions. Also, this will need a unit test before we can merge. `test_commit_batch_append` in `test_dataset.py` should be pretty close.
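As a rough illustration, such a test might look like the sketch below. It is only a sketch: the `with_blobs` flag and the two-list return value are the behavior proposed in this PR, not released Lance, so it would only run on this branch.

```python
import pyarrow as pa

from lance.fragment import write_fragments


def test_write_fragments_with_blobs(tmp_path):
    schema = pa.schema(
        [
            pa.field("idx", pa.uint64()),
            pa.field(
                "blobs",
                pa.large_binary(),
                metadata={"lance-schema:storage-class": "blob"},
            ),
        ]
    )
    table = pa.table(
        {"idx": list(range(10)), "blobs": [b"0" * 1024] * 10},
        schema=schema,
    )
    # with_blobs=True and the (default, blob) fragment pair are the
    # behavior proposed in this PR, not the released API.
    default_frags, blob_frags = write_fragments(
        table,
        str(tmp_path),
        with_blobs=True,
        enable_move_stable_row_ids=True,
    )
    assert len(default_frags) >= 1
    assert len(blob_frags) >= 1
```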
I think the problem is that it's not possible to modify the field id of the schema on the Python side.
Shouldn't the id of the field remain the same? An append should only add new fragments; it should not modify the schema in any way.

If you can make a unit test (even if it's a failing one) then I can debug further. At the moment it isn't clear to me exactly how you plan to use this, so it is a bit hard to debug.
```
@@ -513,6 +513,67 @@ pub fn write_fragments(
        .collect()
}

#[pyfunction(name = "_write_fragments_with_blobs")]
#[pyo3(signature = (dest, reader, **kwargs))]
pub fn write_fragments_with_blobs(
```
Is there any reason we can't modify `write_fragments` above instead of introducing a new method? It's ok for there to be breaking changes at the rust level because the public API is the `write_fragments` in `fragment.py`.
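For illustration, one shape that could take is sketched below. `_write_fragments` is a hypothetical stand-in for the native pyo3 binding; the point is only that the public wrapper in `fragment.py` can absorb a Rust-level return-type change. A minimal sketch under the reviewer's suggestion, not the actual implementation.

```python
from typing import Any, List, Optional, Tuple


def _write_fragments(dest: str, reader: Any, **kwargs) -> Tuple[List, Optional[List]]:
    # Stand-in for the native pyo3 binding; on this PR's branch it would
    # return (fragments, blob_fragments), the latter possibly None.
    raise NotImplementedError


def write_fragments(data: Any, dest: str, with_blobs: bool = False, **kwargs):
    # One public entry point: the Rust-level return type can change
    # freely because callers only ever see this wrapper.
    fragments, blob_fragments = _write_fragments(dest, data, with_blobs=with_blobs, **kwargs)
    return (fragments, blob_fragments) if with_blobs else fragments
```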
```rust
let min_field_id = fragments.iter()
    .flat_map(|fragment| &fragment.files)
    .flat_map(|file| &file.fields)
    .min()
    .copied();
let new_schema = if let Some(min_id) = min_field_id {
    let filtered_fields: Vec<Field> = schema.fields
        .iter()
        .filter(|f| f.id >= min_id)
        .cloned()
        .collect();

    if filtered_fields.is_empty() {
        return Err(PyValueError::new_err(format!(
            "No fields in schema have field_id >= {}",
            min_id
        )));
    }

    Schema {
        fields: filtered_fields,
        metadata: schema.metadata.clone(),
    }
} else {
    schema
};
```
How are these changes related? Are you doing an overwrite operation?
```python
if isinstance(operation, Transaction):
    new_ds = _Dataset.commit_transaction(
        base_uri,
        operation,
        commit_lock,
        storage_options=storage_options,
        enable_v2_manifest_paths=enable_v2_manifest_paths,
        detached=detached,
        max_retries=max_retries,
    )
```
Hmm...being able to send a transaction in directly seems like it introduces more complexity to the user than needed. Is it possible to modify https://github.com/lancedb/lance/blob/main/python/python/lance/dataset.py#L2494 (the `Append` operation) so that it can take in a second list of blob fragments?

```python
@dataclass
class Append(BaseOperation):
    fragments: Iterable[FragmentMetadata]
    blob_fragments: Iterable[FragmentMetadata] = []
```
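One wrinkle with that sketch: dataclasses reject a mutable `[]` default, so the new field would need `default_factory`. A self-contained illustration (with stand-in classes, since the real `BaseOperation` and `FragmentMetadata` live inside Lance):

```python
from dataclasses import dataclass, field
from typing import Iterable


class BaseOperation:  # stand-in for the real Lance base class
    pass


class FragmentMetadata:  # stand-in for lance.fragment.FragmentMetadata
    pass


@dataclass
class Append(BaseOperation):
    fragments: Iterable[FragmentMetadata]
    # `= []` would raise "mutable default ... is not allowed" at class
    # creation time, so use default_factory instead.
    blob_fragments: Iterable[FragmentMetadata] = field(default_factory=list)


op = Append(fragments=[])
assert list(op.blob_fragments) == []
```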
I have found that the current implementation still has some issues, but let me first outline the difficulties I am currently encountering. I am using the code below for testing, and it currently needs to run on the modified version.

```python
import shutil

import lance
import pyarrow as pa


def make_table(offset, num_rows, big_val):
    end = offset + num_rows
    values = pa.array([big_val for _ in range(num_rows)], pa.large_binary())
    idx = pa.array(range(offset, end), pa.uint64())
    table = pa.record_batch(
        [idx, values],
        schema=pa.schema(
            [
                pa.field("idx", pa.uint64()),
                pa.field(
                    "blobs",
                    pa.large_binary(),
                    metadata={
                        "lance-schema:storage-class": "blob",
                    },
                ),
            ]
        ),
    )
    return table


tbl = make_table(0, 10, b"0" * 1024 * 1024)
uri = "test1"
shutil.rmtree(uri, ignore_errors=True)

default_frags, blob_frags = lance.fragment.write_fragments(
    pa.Table.from_batches([tbl]),
    uri,
    with_blobs=True,
    enable_move_stable_row_ids=True,
)

blob_operation = lance.LanceOperation.Overwrite(tbl.schema, blob_frags)
operation = lance.LanceOperation.Overwrite(tbl.schema, default_frags)
transaction = lance.Transaction(
    operation=operation,
    blobs_op=blob_operation,
    read_version=0,
)
ds = lance.LanceDataset.commit(
    uri,
    transaction,
    enable_v2_manifest_paths=True,
)
ds._take_rows(range(10))
```

The problem lies in the Rust code that is reached when the `operation._to_inner` method is called. Additionally, the two fragment lists returned by `write_fragments` retain the column index information, so the `Overwrite` operation is expected to correctly preserve those indices in the schema.

```python
blob_operation = lance.LanceOperation.Overwrite(tbl.schema, blob_frags)
blob_operation._to_inner()
```

This returns the correct schema. I confirmed it is correct with:

```python
lance.write_dataset(tbl, "test2", tbl.schema)
lance.LanceDataset("test2/_blobs").lance_schema
```

The key problem is the id assignment in `set_field_id`:

```rust
pub fn set_field_id(&mut self, max_existing_id: Option<i32>) {
    let schema_max_id = self.max_field_id().unwrap_or(-1);
    let max_existing_id = max_existing_id.unwrap_or(-1);
    let mut current_id = schema_max_id.max(max_existing_id) + 1;
    self.fields
        .iter_mut()
        .for_each(|f| f.set_id(-1, &mut current_id));
}
```

With a schema that contains only the blob column:

```python
blob_operation = lance.LanceOperation.Overwrite(
    pa.schema(
        [
            pa.field(
                "blobs",
                pa.large_binary(),
                metadata={
                    "lance-schema:storage-class": "blob",
                },
            ),
        ]
    ),
    blob_frags,
)
```

`blob_operation._to_inner()` will return
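To make the renumbering concrete, here is a pure-Python analogue of that logic. It is an illustration only, and it assumes (based on the Rust snippet above) that `set_id` only assigns ids to fields whose id is still -1, which is what a schema freshly converted from Arrow has.

```python
def set_field_ids(field_ids, max_existing_id=None):
    # Mirror of set_field_id: start numbering after the largest known id.
    schema_max = max((i for i in field_ids if i >= 0), default=-1)
    current = max(schema_max, -1 if max_existing_id is None else max_existing_id) + 1
    out = []
    for fid in field_ids:
        if fid < 0:  # unassigned field: take the next sequential id
            out.append(current)
            current += 1
        else:        # already-assigned ids are kept
            out.append(fid)
    return out


# A blobs-only Arrow schema starts with id -1, so "blobs" gets id 0 even
# if the blob fragments recorded the column under a different field id.
assert set_field_ids([-1]) == [0]
# The full (idx, blobs) schema assigns 0 and 1 instead.
assert set_field_ids([-1, -1]) == [0, 1]
```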
Allow the user to use `write_fragments` for Lance data with the blob storage class by returning a nullable list of blob ops.