Working example on Google Colab? #95
I tested this; the execution breaks.
Hi @hswlab!
Hi @PABannier, sorry for asking here since I don't know how to contact you otherwise, but I want to know why bark_forward_fine_encoder tries to allocate 30 GB of memory. None of the weights are anywhere near that size. Also, using the original Bark model (non-ggml), it runs well.
Hi @akarshanbiswas! Thanks for reaching out about this problem. Are you able to trace back which operations cause this surge in memory? Also, which prompt did you input to the model?
I followed the same instructions the OP has in his Colab notebook. Additionally, I tested with quantized weights using the scripts available in the repo. I also found out that the codec weights are not quantized. I have yet to check the core dumps I got, which I will do in a few hours (currently AFK). 🙂
Yes! Codec weights are not currently quantized because quantizing them does not provide any significant speed-up (the forward pass is already fast) but does degrade the audio quality.
I moved this discussion to a separate issue.
@PABannier thank you, I tried to quantize the weights and could successfully generate an output.wav.
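For anyone trying to reproduce the quantize-then-run flow mentioned above, a rough sketch follows. This is an assumption-laden outline, not the project's documented procedure: the script names, binary paths, weight file names, quantization type (q4_0 here), and command-line flags are all guesses and may differ from what bark.cpp actually ships, so check the repository's README before running any of it.

```shell
# Hedged sketch of the workflow discussed in this thread.
# All paths, binary names, and flags below are assumptions --
# verify them against the bark.cpp README.
git clone https://github.com/PABannier/bark.cpp
cd bark.cpp
mkdir build && cd build
cmake .. && cmake --build . --config Release

# Quantize the ggml weights to shrink the memory footprint at inference time
# (file names and the q4_0 type argument are assumptions)
./bin/quantize ../ggml_weights/ggml_weights.bin \
               ../ggml_weights/ggml_weights_q4.bin q4_0

# Run inference with the quantized weights; on success an
# output.wav should be written to the working directory
./bin/main -m ../ggml_weights/ggml_weights_q4.bin -p "Hello, world"
```

This mirrors the commenter's report that generation only succeeded for them after quantizing the weights, which fits the out-of-memory symptom described earlier in the thread.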
It would be great to have #95 (comment) linked or incorporated into the README.
Can anyone show a working example on Google Colab where a concrete audio file is generated? In my attempts, execution strangely breaks after these lines.
Here is the link to my attempt on Google Colab:
https://colab.research.google.com/drive/1JVtJ6CDwxtKfFmEd8J4FGY2lzdL0d0jT?usp=sharing