storage: add runtime support for batch chunk #1289
Conversation
@hangvane , a new test job has been submitted. Please wait in patience. The test job url: https://tone.openanolis.cn/ws/nrh4nnio/test_result/74029
@hangvane , the code has been updated, so a new test job has been submitted. Please wait in patience. The test job url: https://tone.openanolis.cn/ws/nrh4nnio/test_result/74032
Codecov Report

```
@@            Coverage Diff             @@
##           master    #1289      +/-   ##
==========================================
- Coverage   45.90%   45.42%   -0.49%
==========================================
  Files         123      123
  Lines       37033    37254     +221
==========================================
- Hits        16999    16921      -78
- Misses      19139    19439     +300
+ Partials      895      894       -1
==========================================
```
@hangvane , the title has been updated, so a new test job has been submitted. Please wait in patience. The test job url: https://tone.openanolis.cn/ws/nrh4nnio/test_result/74033
@hangvane , The CI test is completed, please check result:
Congratulations, your test job passed!
Good work! This seems to be valuable in most use cases.
1. Wordpress depends on an external database and does not have a short inspection command to exit the container after execution, like …
2. The images are pulled from a private Harbor registry on the same campus network with 100 Mbps bandwidth. The benchmark is just a rough estimate of container startup time because: (1) the startup time difference between the same image …
3. It is said that the best practice for the read amplification size is 128K in datacenter scenarios. Just from a performance point of view, I guess 256K-512K would be a good default value. With a good prefetch list assisting at build time, maybe 1M is the best.
@hangvane , the code has been updated, so a new test job has been submitted. Please wait in patience. The test job url: https://tone.openanolis.cn/ws/nrh4nnio/test_result/74301
@hangvane , The CI test is completed, please check result:
Congratulations, your test job passed!
/retest
@hangvane , the test job has been submitted. Please wait in patience. The test job url: https://tone.openanolis.cn/ws/nrh4nnio/test_result/76212
/retest
@yqleng1987 , the test job has been submitted. Please wait in patience. The test job url: https://tone.openanolis.cn/ws/nrh4nnio/test_result/77009
@yqleng1987 , The CI test is completed, please check result:
Congratulations, your test job passed!
@jiangliu , the code has been updated, so a new test job has been submitted. Please wait in patience. The test job url: https://tone.openanolis.cn/ws/nrh4nnio/test_result/78487
@jiangliu , The CI test is completed, please check result:
Congratulations, your test job passed!
```rust
} else if first_entry.is_batch() {
    // Assert each entry in chunks is a Batch chunk.
    let first_batch_idx = first_entry.get_batch_index();
```
We should ensure that a chunk is a batch chunk before calling get_batch_index(); otherwise it may trigger the assertion in BlobChunkInfoV2Ondisk::get_batch_index().
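A minimal sketch of the guard the reviewer is asking for, using a hypothetical `ChunkInfo` stand-in for the real `BlobChunkInfoV2Ondisk` type (the field names and helper are illustrative, not the actual nydus API):

```rust
// Hypothetical stand-in for the on-disk chunk info; the real type in
// nydus is BlobChunkInfoV2Ondisk, which panics on a similar assertion.
struct ChunkInfo {
    batch: bool,
    batch_index: u32,
}

impl ChunkInfo {
    fn is_batch(&self) -> bool {
        self.batch
    }

    // Mirrors the contract being discussed: callers must check
    // is_batch() first, otherwise this assertion fires.
    fn get_batch_index(&self) -> u32 {
        assert!(self.batch, "get_batch_index() called on a non-batch chunk");
        self.batch_index
    }
}

// Guarded access: only read the batch index when the first entry is
// actually a batch chunk, so the assertion path is never reached.
fn first_batch_index(chunks: &[ChunkInfo]) -> Option<u32> {
    let first = chunks.first()?;
    if first.is_batch() {
        Some(first.get_batch_index())
    } else {
        None
    }
}

fn main() {
    let chunks = vec![
        ChunkInfo { batch: true, batch_index: 7 },
        ChunkInfo { batch: false, batch_index: 0 },
    ];
    println!("{:?}", first_batch_index(&chunks)); // Some(7)
    println!("{:?}", first_batch_index(&[]));     // None
}
```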
Add runtime support for small chunk merging.

Basic usage

Add the `--batch-size` arg to the command to enable chunk merging for the supported conversion types.

Benchmarks
Tested on a 4C4G VM, pulling from a private Harbor registry on the same campus network. The containerd images and the nydus-snapshotter cache were cleaned before each run. Each value is averaged over five runs.
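The merging enabled by `--batch-size` can be sketched as grouping consecutive small chunks into batches whose total size stays within the configured limit. This is an illustrative model of the idea only, not the PR's actual builder logic:

```rust
// Illustrative only: group consecutive chunk sizes into batches capped
// by `batch_size`. The real nydus builder operates on chunk metadata
// and on-disk formats, not plain size lists.
fn batch_chunks(sizes: &[u64], batch_size: u64) -> Vec<Vec<u64>> {
    let mut batches: Vec<Vec<u64>> = Vec::new();
    let mut current: Vec<u64> = Vec::new();
    let mut used = 0u64;
    for &s in sizes {
        // A chunk that no longer fits starts a new batch; an oversized
        // chunk still gets a batch of its own.
        if used + s > batch_size && !current.is_empty() {
            batches.push(std::mem::take(&mut current));
            used = 0;
        }
        current.push(s);
        used += s;
    }
    if !current.is_empty() {
        batches.push(current);
    }
    batches
}

fn main() {
    // With a 4 KiB batch size, three 2 KiB chunks form two batches.
    let batches = batch_chunks(&[2048, 2048, 2048], 4096);
    println!("{:?}", batches); // [[2048, 2048], [2048]]
}
```

A larger `--batch-size` trades fewer, bigger reads (less per-chunk overhead) against more read amplification, which is the tradeoff the 128K/256K-512K discussion above is about.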
This PR is related to #1202, #884, #885, dragonflyoss/Dragonfly2#1858