Possible bottleneck? #13
Hi. In my case, I tried using
Regarding the issue mentioned above, see Lightning-AI/pytorch-lightning#4171. This warning comes from pytorch-lightning and can be resolved by upgrading its version. As the message says, using DDP together with num_workers > 0 makes both initialization and training faster.
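To make that concrete, here is a minimal sketch (not taken from this repository) of the recommended combination, assuming a pytorch-lightning 1.x-era API in which `Trainer` accepts `accelerator='ddp'`; the dummy dataset, sizes, and the `model` name are placeholders:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

# Dummy tensors standing in for the real dataset (placeholder shapes).
train_dataset = TensorDataset(torch.randn(256, 80), torch.randn(256, 80))

train_loader = DataLoader(
    train_dataset,
    batch_size=32,
    num_workers=4,      # worker processes are fine with 'ddp' (the warning targets 'ddp_spawn')
    pin_memory=True,
)

trainer = pl.Trainer(
    gpus=2,
    accelerator='ddp',  # plain DDP avoids the Python .spawn() bottleneck
    max_epochs=1,
)
# trainer.fit(model, train_loader)  # 'model' would be the project's LightningModule
```

The key point is the pairing of `accelerator='ddp'` with `num_workers > 0`, which is exactly what the warning quoted below recommends.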
To fully resolve this, we need to upgrade the PyTorch Lightning dependency. However, there are conflicts between PyTorch Lightning versions, so we plan to check them carefully.
Unfortunately, accelerator='ddp' is not stable; accelerator=None is OK. The traceback points to:
File "/home/assem-vc/synthesizer_trainer.py", line 85, in
I got this warning:
/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: Dataloader(num_workers>0) and ddp_spawn do not mix well! Your performance might suffer dramatically. Please consider setting distributed_backend=ddp to use num_workers > 0 (this is a bottleneck of Python .spawn() and PyTorch
Is this OK?
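For completeness, a minimal sketch of the interim workaround (my assumption, not a maintainer recommendation): while accelerator='ddp' is unstable and the default ddp_spawn backend is in use, the bottleneck the warning describes can be sidestepped by setting num_workers=0 until the pytorch-lightning upgrade lands; the dataset below is a placeholder.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset; substitute the project's real dataset object.
train_dataset = TensorDataset(torch.randn(256, 80), torch.randn(256, 80))

train_loader = DataLoader(
    train_dataset,
    batch_size=32,
    num_workers=0,   # 0 avoids the "ddp_spawn + num_workers>0" bottleneck warning
    pin_memory=True,
)
```

Training remains correct with this setting; data loading just runs in the main process, so it may be slower than with worker processes under plain DDP.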