DVC very slow for datasets with many files #7607
Comments
Hi @RBMindee !
From what you describe, there is nothing wrong with your usage and it's most likely an issue with DVC performance.
Regarding point 1, it's discussed/tracked in #7390. We are currently working on improving status performance in #7597.
Regarding point 2, it's discussed/tracked in #6222. We are also working on improving the transfer speed, though there is no P.R. yet.
Could you check, with a small subset of the data, if passing --jobs=1 has an impact on the performance of the transfer?
It would also be useful if you could share a profile of the run, even for a small subset so it doesn't take much time (i.e. viztracer):
pip install viztracer
dvc pull --viztracer --viztracer-depth 6
|
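For concreteness, a minimal sketch of the profiling run suggested above, assuming a small test target named small_subset.dvc (the name is illustrative):
pip install viztracer
time dvc pull --jobs 1 small_subset.dvc      # serial transfer
time dvc pull small_subset.dvc               # default parallelism, for comparison
dvc pull --viztracer --viztracer-depth 6 small_subset.dvc   # writes a viztracer profile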
Thank you very much for that quick answer! I will send the profiles tomorrow.
Just a question: do you have a rough ETA for these improvements? The last issue, especially, seems to be a few months old already. We can certainly wait a few weeks, but if it's longer we will have to bootstrap something ourselves. I guess tracking a collection of zip files would do, or maybe using a shared cache. But I'm quite reluctant to do so, especially if a cleaner solution is in progress on your side.
|
I tried to run viztracer. I had to run it on a very small subset (3000 files) in order not to lose any information, though. I timed three configurations for the benchmark:
It appears that:
For the sake of completeness, please note that I had to run it on another machine. Here is the new output of dvc doctor. |
Thanks @RBMindee ! Could you run the same but with --jobs=1? |
Forgot about the job. Here it is. Additionally, I also timed aws s3 cp on a zip archive containing all of the cache, and it takes 0.27s, thus about 100x faster than the original command. Zipping directories and subdirectories was originally proposed in #6222 but is not discussed. Is there a reason? |
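A minimal sketch of that comparison, assuming the default .dvc/cache location and a hypothetical bucket name:
time dvc push                                       # per-file transfer via DVC
zip -r -q cache.zip .dvc/cache                      # one compressed artifact instead
time aws s3 cp cache.zip s3://my-bucket/cache.zip   # one large upload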
Ideally, we should reduce the performance gap between the cloud API and DVC transfer to the minimum (there is always going to be some). However, regardless of using DVC or not, transferring a compressed artifact is going to be faster than a directory of files. It's up to your use case, and it's totally ok if you don't see any downsides. Off the top of my head, the tradeoffs are that transferring individual files allows you granular access (i.e. downloading a single file) and doesn't require the additional compression step. In addition to the above, when using DVC there is an additional tradeoff: DVC handles deduplication at the file level, and compressing the directory completely removes this functionality (although this could be addressed in #829). There is nothing stopping you from zipping the directory as part of a DVC stage and tracking and transferring the compressed artifact via DVC. We use this approach ourselves in our example dataset registry (https://github.com/iterative/dataset-registry/tree/master/tutorials/versioning), but we could improve the visibility of this alternative in the docs. |
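A rough sketch of the zip-as-a-stage approach mentioned above; the stage name and paths are illustrative, not taken from the registry example:
dvc stage add -n zip_dataset -d data/raw -o data/raw.zip 'zip -r -q data/raw.zip data/raw'
dvc repro zip_dataset    # produces and caches the single archive
dvc push                 # transfers one large object instead of many small ones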
To do:
Related: #3177 |
I have tested with different sizes and approaches, and even when dvc add is a no-op (no files have changed), I end up with something that looks like the above, where checkout takes most of the time. Any ideas why this could be the case? |
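A minimal sketch of the no-op scenario described above, with an illustrative path:
dvc add data/         # first add: hashes, caches, and links everything
time dvc add data/    # no files changed, yet checkout/relinking dominates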
@dberenbaum Yes, it is because we force relink on |
Okay, sounds familiar, although I also can't remember where we discussed it before 😄 .
Let's create an issue for this if we don't have one specific to it? |
@dberenbaum We definitely have it somewhere, I'm just failing to find it. E.g., related: iterative/dvc-data#274 |
This blog post makes an argument for zipping or otherwise consolidating large file counts into something more manageable. It makes the fair point that transferring millions of files via S3 or Azure is expensive, and that the filesystem doesn't like it either. So perhaps having too many files really should just be avoided past a certain point. |
Revisited this scenario and wanted to leave notes here. It took about an hour, compared to 26 minutes for raw md5 calculations. This isn't a fair comparison, since dvc is also caching and linking the files, and most of the time is spent in checkout (which isn't surprising, since we still have not had a chance to address iterative/dvc-data#274 and stop relinking everything). Here's the |
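For reference, one way to time the raw md5 baseline mentioned above (a sketch; the path and batch size are illustrative):
time find data/ -type f -print0 | xargs -0 -P "$(nproc)" -n 100 md5sum > /dev/null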
If I set Line 1343 in dd2b515 |
As of dvc 3.50.1,
I don't think there's anything else actionable in this issue that isn't covered in other issues, so closing this as completed. |
dvc pull is still very slow, because it seems that the "applying changes" step runs single-threaded? |
@mathematiguy It would make a lot more sense if dvc automatically reduced the number of files under the hood by compressing them, and then extracted them when you pull. This would remove that overhead from the data management process. |
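Something like that workflow can be approximated manually today (a sketch with illustrative paths), at the cost of the file-level deduplication mentioned earlier:
tar -czf dataset.tar.gz dataset/ && dvc add dataset.tar.gz   # before pushing
dvc pull dataset.tar.gz.dvc && tar -xzf dataset.tar.gz       # after pulling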
A 1.7 GB dataset containing 600k files takes many hours with dvc add or dvc push/pull.
Description
Hi all, we have been using dvc for a while on medium-size datasets, but struggle when trying to use it with big ones. We are unsure if this is due to our poor use of the tool or if it is a real bug.
We have a dataset containing about 600k small files for a total of 1.7 GB. The repo is configured to be stored on S3. Uploading the whole dataset with aws s3 cp takes only a few minutes.
We had two problems and are unsure whether they are DVC problems or due to us misusing the tool:
1. If we run
dvc add /path/to/dataset
it takes a few tens of minutes, which is okay I think, as all md5s must be computed. Then if we run dvc status, it goes fast. However, if we change a single file, all md5s are recomputed and just checking the status takes ages again. We sort of solved that problem: since the dataset is composed of many smaller subfolders, we instead add files with
dvc add /path/to/dataset/*/*
Then everything is fine, but this seems quite odd. Is there a better way to do it?
2. Once we are able to add, the problem is to do
dvc push/pull
This takes between 6 and 8 hours, which seems too much for less than 2 GB. It seems that dvc is uploading every file separately? Are we doing something wrong? (A sketch of both symptoms follows this list.)
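As a sketch of the two symptoms (paths are illustrative):
time dvc add /path/to/dataset       # tens of minutes: hashes 600k files
touch /path/to/dataset/sub/a.txt    # change a single file
time dvc status                     # all md5s appear to be recomputed
time dvc push                       # 6-8 hours: files appear to upload one by one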
Reproduce
I cannot share my own dataset, but it weighs 1.7 GB and contains 600k files split into subfolders which themselves have subfolders.
Expected
Uploading files to S3 should be considerably faster, I think?
Environment information
Output of dvc doctor:
$ dvc doctor
DVC version: 2.9.5 (deb)
Platform: Python 3.8.3 on Linux-5.4.0-107-generic-x86_64-with-glibc2.14
Supports:
azure (adlfs = 2022.2.0, knack = 0.9.0, azure-identity = 1.7.1),
gdrive (pydrive2 = 1.10.0),
gs (gcsfs = 2022.1.0),
hdfs (fsspec = 2022.1.0, pyarrow = 7.0.0),
webhdfs (fsspec = 2022.1.0),
http (aiohttp = 3.8.1, aiohttp-retry = 2.4.6),
https (aiohttp = 3.8.1, aiohttp-retry = 2.4.6),
s3 (s3fs = 2022.1.0, boto3 = 1.20.24),
ssh (sshfs = 2021.11.2),
oss (ossfs = 2021.8.0),
webdav (webdav4 = 0.9.4),
webdavs (webdav4 = 0.9.4)
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/nvme0n1p2
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/nvme0n1p2
Additional Information (if any):