I need to run this code on CUDA, but even Colab doesn't have enough VRAM for it. I'm trying to decrease the batch_size, but I don't know where to modify it. Can you tell me where it is defined?
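Without knowing this repository's exact layout, batch_size in most PyTorch training scripts is defined in one of two places: a command-line flag, or the DataLoader constructor directly. A minimal sketch of the usual pattern (all names here are illustrative, not taken from this repo):

```python
import argparse

import torch
from torch.utils.data import DataLoader, TensorDataset

# Typical pattern: batch_size is exposed as a CLI flag with a default.
# Lowering the default (or passing e.g. --batch_size 4) reduces peak VRAM.
parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=32)
args = parser.parse_args()

# Dummy dataset standing in for the project's real data pipeline.
dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))

# The flag is usually consumed here; if the repo has no CLI flag,
# searching for "batch_size=" should lead to a call like this one.
loader = DataLoader(dataset, batch_size=args.batch_size, shuffle=True)
```

Searching the repository (e.g. `grep -rn "batch_size" .`) is usually the fastest way to find where it is set.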
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.99 GiB (GPU 1; 23.70 GiB total capacity; 17.64 GiB already allocated; 1.24 GiB free; 21.34 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
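The traceback above also points at allocator fragmentation (21.34 GiB reserved vs. 17.64 GiB allocated), and suggests trying max_split_size_mb. Independently of lowering batch_size, that can be done via the PYTORCH_CUDA_ALLOC_CONF environment variable; a minimal sketch, with 128 MB as an illustrative split size:

```python
import os

# Must be set before the CUDA caching allocator is initialized, so set it
# before importing torch (or export it in the shell instead:
#   PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python train.py).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # illustrative value

import torch

# Subsequent CUDA allocations now avoid keeping oversized cached blocks,
# which can reduce the fragmentation the error message describes.
x = torch.zeros(1024, 1024, device="cuda")
```

This only helps when reserved memory far exceeds allocated memory, as in the error above; if the model's activations genuinely don't fit, reducing batch_size is still required.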