proposal: archive/tar: support zero-copy reading/writing #70807
Proposal Details

The container ecosystem (Podman, Docker) spends its days creating and consuming huge .tar files. There is potential for a significant speed-up here by having the tar package use zero-copy file transfer.

The change is straightforward, but it involves an API change, so I am opening a proposal.

With the following change, tarring up a 2G file from tmpfs to tmpfs goes from 2.0s to 1.3s.
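As a hedged, caller-side sketch only (not the proposed patch itself): this is the kind of loop that would benefit once `*tar.Writer` implements `io.ReaderFrom`. The paths and file names below are invented for illustration; today the `io.Copy` call shuttles the bytes through a userspace buffer.

```go
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
)

func main() {
	src, err := os.Open("/tmp/big.bin") // e.g. a 2G file on tmpfs
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	out, err := os.Create("/tmp/big.tar") // destination, also on tmpfs
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	tw := tar.NewWriter(out)
	fi, err := src.Stat()
	if err != nil {
		log.Fatal(err)
	}
	hdr, err := tar.FileInfoHeader(fi, "")
	if err != nil {
		log.Fatal(err)
	}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}
	// io.Copy checks whether its destination implements io.ReaderFrom.
	// *tar.Writer currently does not, so the data goes through an
	// intermediate buffer; with the proposed method, the Writer could
	// hand the copy to the underlying *os.File and let the kernel move
	// the bytes directly (copy_file_range and friends).
	if _, err := io.Copy(tw, src); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}
}
```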
Comments

A similar optimization exists for the reading side, of course.
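To make the reading side concrete, here is an illustrative extraction loop (invented paths, flattened entry names, minimal error handling) where such an optimization would kick in:

```go
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	in, err := os.Open("/tmp/in.tar") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()

	if err := os.MkdirAll("/tmp/out", 0o755); err != nil {
		log.Fatal(err)
	}

	tr := tar.NewReader(in)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		if hdr.Typeflag != tar.TypeReg {
			continue // only regular files are interesting for zero-copy
		}
		dst, err := os.Create(filepath.Join("/tmp/out", filepath.Base(hdr.Name)))
		if err != nil {
			log.Fatal(err)
		}
		// Today these bytes pass through io.Copy's intermediate buffer;
		// a reader-side zero-copy path would let the kernel move them
		// between the two files directly when the archive itself is a
		// plain file.
		if _, err := io.Copy(dst, tr); err != nil {
			log.Fatal(err)
		}
		dst.Close()
	}
}
```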
Just to spell it out, I believe that the API change here is to define a new method on *Writer:

```go
// ReadFrom implements [io.ReaderFrom].
func (tw *Writer) ReadFrom(r io.Reader) (int64, error)
```

Note that I think you could get a similar effect without the API change by writing:

```go
if tw, ok := fw.w.(*Writer); ok {
	return tw.readFrom(r)
}
```

CC @dsnet
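For context on why implementing `io.ReaderFrom` is enough: `io.Copy` already probes for the fast-path interfaces, so every existing caller would pick up the optimization automatically. A simplified sketch of that dispatch (the real logic lives in `io.copyBuffer`):

```go
package sketch

import "io"

// copySketch shows, in simplified form, how io.Copy chooses a fast
// path. These interface checks are what would make a new
// (*tar.Writer).ReadFrom kick in automatically for existing callers.
func copySketch(dst io.Writer, src io.Reader) (int64, error) {
	if wt, ok := src.(io.WriterTo); ok {
		return wt.WriteTo(dst) // the source drives the copy
	}
	if rf, ok := dst.(io.ReaderFrom); ok {
		return rf.ReadFrom(src) // the destination drives the copy
	}
	// Otherwise, shuttle the bytes through an intermediate buffer.
	buf := make([]byte, 32*1024)
	var n int64
	for {
		nr, rerr := src.Read(buf)
		if nr > 0 {
			nw, werr := dst.Write(buf[:nr])
			n += int64(nw)
			if werr != nil {
				return n, werr
			}
		}
		if rerr == io.EOF {
			return n, nil
		}
		if rerr != nil {
			return n, rerr
		}
	}
}
```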
Your suggestion certainly improves …

For the reader, this works.

It's probably me that was missing something.
cc @dsnet given the TODO above
Do we want to include logic to pad out the tar to align content files to the destination's blocksize if it's an *os.File? Tar natively pads to 512 :'( (see the blockSize constant: line 143 in e39e965)
@Jorropo - Fascinating insight, thanks! I can confirm that on btrfs, if I set the blockSize to 4096, I can write a 2G tar file in 0.08s, which is amazing. Unfortunately, the block size is not variable in the tar format, so this needs to be done a different way. Fortunately, one could simply add empty files to pad the tar out to 4096, or whatever the destination block size is. This can be done without changing the tar package at all.
I am not suggesting we change that field; 512 is a hardcoded part of the tar format.
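A hedged sketch of the padding idea from this sub-thread; `padEntry` and the `.pad` entry name are invented, and a real implementation would need to track the output offset itself. Because every tar header and data block is 512-byte aligned, one dummy entry of the right size lands the next entry's data on a 4096-byte (or any other 512-multiple) boundary:

```go
package sketch

import "archive/tar"

// padEntry is a hypothetical helper. off is the current offset in the
// output tar stream (always a multiple of 512) and align is the
// destination filesystem's block size (e.g. 4096, itself a multiple of
// 512). It writes a dummy entry sized so that the NEXT entry's data
// starts on an align boundary, and returns the new offset.
func padEntry(tw *tar.Writer, off, align int64) (int64, error) {
	// The next real entry's data begins one 512-byte header past off.
	need := (align - (off+512)%align) % align
	if need == 0 {
		return off, nil // already aligned
	}
	// The dummy entry spends 512 bytes on its own header, so its
	// payload is the gap minus that header. Since off and align are
	// both multiples of 512, need is too, and size is never negative.
	size := need - 512
	hdr := &tar.Header{
		Name:     ".pad",
		Typeflag: tar.TypeReg,
		Mode:     0o600,
		Size:     size,
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return off, err
	}
	if _, err := tw.Write(make([]byte, size)); err != nil {
		return off, err
	}
	return off + 512 + size, nil
}
```

Consumers would have to tolerate or filter out the dummy entries, which is also why this can live entirely outside the tar package.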