[Converter]: Improve performance and add option to set maximum line length to prevent freezes #64396
Conversation
Force-pushed from 40b8fa4 to 96ad63b
Force-pushed from bb476a3 to 1cabf13
I tried this PR (1cabf13b1fd8bc834bc18e5e69a3b48abeef572a) on a fresh copy of my gd3 project. It reports 2011 resources to do. It is indeed much faster and no longer hangs on the file I mentioned. Great work. First I tried with a 1 GB file size limit and 1 MB per line. It completed in a few minutes.
However, I had many instances of skipped lines, yet none of them are displayed at the end, neither in the summary nor in the detail. Those are now broken files. It should report the files with skipped lines at the end so a user can take steps to fix them. It even processed my 353 MB player scene file with 200 animations, a complex rig, and a complete AAA-quality human model. 88 seconds to process it is acceptable given what it is.
If it can process the above file, then it should be able to process any other file I have. The default 10,000-character limit is way too low; 999,999 was fine. Yet it skipped lines > 1 MB, as I specified.
So I expanded it to 100 MB, which it should be able to allocate and process without breaking a sweat. However, it hung on a tiny 4 MB file. The whole thing could fit in an allocation smaller than many texture files. It's the same structure as described in #63672, with a LOD manager and a few ArrayMeshes.
I left this for about 10-15 minutes. Allocated memory is only 185 MB. The processor is at 100% on one core. The 350 MB Dorian file only took 88 seconds, and that process allocated up to 3 GB.
Force-pushed from 1cabf13 to 4c7c02f
The latest freeze with the 4 MB file should be fixed now. After recent changes, String
Ok, I let it work again on my project with 1 GB files and 100 MB lines: nothing was skipped, nothing hung. It chewed through even monster files like the ones highlighted above. Dorian took twice as long, but still acceptable. Great work. Thank you!
Force-pushed from a1de7a8 to c60e83f
Recently I only added changes that should speed up searching, especially with big files, so a 2x slower check of a 300 MB file is quite suspicious. I think converting times cannot really be predicted, due to random OS CPU usage (the converter should use 100% of one core most of the time); one time the entire Godot demo repository converted in 32 s, but sometimes it took as long as 47 s. The only remaining way to speed up searching is to move computations to multiple cores, which I may implement in a future PR.
Converting my 4 GB project in 10 minutes is fine. The skipping and hanging issues were a bigger deal, and those seem resolved.
Force-pushed from bea1b91 to 9f00e76
Force-pushed from 0eb744e to 2b4d0e6
Force-pushed from 2b4d0e6 to 3b1259a
Thanks!
Fixes #63672
This PR adds an option to choose the maximum checked file size and the maximum length of a checked line.
By default, only files smaller than 4 MB and lines shorter than 10,000 characters are checked, which seems quite reasonable to me (at the end there is additional info about exactly how many lines were ignored).
Additionally, thanks to caching regexes, searching can sometimes be a few percent faster (I expected a bigger change, but it looks like most of the time is spent searching files with regexes, not creating new regexes thousands of times).
When testing projects with a lot of files and short lines, I got slightly lower performance than before (probably caused by running regexes on lines instead of full files, a change that was required to be able to cache regexes), but when checking tps-demo, which contains files with longer lines, the time to check for renames dropped from 120 s to 13 s.
Edit:
With the help of hotspot I found that running `regex.sub` on small lines is very time-consuming, so using `String.contains` first, to reduce the number of regexes that actually run, can speed up this part by several times.
Some benchmarks:
Edit2:
Also added (actually re-enabled) a check that validates whether the new function name already exists.