Right now the job takes more than 5 minutes generating metadata. Since every asset is filled sequentially, the time will grow linearly as more assets are added. This is especially noticeable when developing locally, but it might also become an issue in GitHub Actions as quotas are reached.
On a full site redeploy, for example, 8m 20s out of 11m 20s are spent in the asset phase, of which 3m is compile time and 5m 20s is running the generation.
The proposed solution is to parallelize metadata requests as much as possible while trying not to hit service throttling limits; a rough sketch follows.
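A minimal sketch of what that could look like, assuming the generator were async Rust with `reqwest` and `futures`: `buffer_unordered` caps the number of in-flight requests, so fetches run in parallel without blowing past rate limits. The function name, URL list shape, and concurrency constant are all illustrative, not the actual generator code:

```rust
use futures::stream::{self, StreamExt};

/// Illustrative concurrency cap; tune per service to stay under its
/// throttling limits (e.g. GitHub's secondary rate limits).
const MAX_CONCURRENT: usize = 8;

/// Fetch metadata for every asset URL, at most MAX_CONCURRENT at a time.
async fn fetch_all_metadata(urls: Vec<String>) -> Vec<Result<String, reqwest::Error>> {
    let client = reqwest::Client::new();
    stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            async move {
                // One request per asset; a failure is recorded instead of
                // aborting the whole generation run.
                client.get(&url).send().await?.text().await
            }
        })
        .buffer_unordered(MAX_CONCURRENT)
        .collect()
        .await
}
```

Tuning the cap separately per service (GitHub vs. GitLab vs. crates.io) would let each stay under its own limit.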
Edit: my bad, this should probably have been opened in the bevy-website repository.
I think we should prefer linking to crates.io instead of the git repository, as metadata is much easier to get there. Getting metadata will always be slow, and at some point we'll reach the maximum possible...
Currently there are 218 links to GitHub and 50 links to crates.io.
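For reference, a single unauthenticated call to the public crates.io API is enough to get a crate's description, repository link, and download counts. The helper below is hypothetical; the endpoint (`GET https://crates.io/api/v1/crates/{name}`) is real, and crates.io asks clients to send a descriptive `User-Agent`:

```rust
/// Hypothetical helper: fetch a crate's metadata from the crates.io API.
async fn crates_io_metadata(name: &str) -> Result<serde_json::Value, reqwest::Error> {
    let url = format!("https://crates.io/api/v1/crates/{name}");
    reqwest::Client::new()
        .get(&url)
        // crates.io rejects requests without a descriptive User-Agent.
        .header(reqwest::header::USER_AGENT, "bevy-website-generator (example)")
        .send()
        .await?
        .json()
        .await
}
```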
That makes sense. I mostly opened this because I thought I would have to regenerate it for every visual change to the assets page, and that was scary. But since you only have to do it once when you set it up, it's not that big of an issue.
Still, a parameter to skip metadata fetching could be useful for fast iteration during development, as sketched below. GitHub and GitLab are already skipped if the token is not present.
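Something as small as the following would do; the flag and environment variable names are hypothetical:

```rust
/// Hypothetical --skip-metadata switch for local development: returns
/// true when metadata fetching should be skipped entirely, so the
/// assets page can be rebuilt quickly with placeholder data.
fn skip_metadata() -> bool {
    std::env::args().any(|arg| arg == "--skip-metadata")
        || std::env::var("SKIP_METADATA").is_ok()
}
```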