Can't run anything with 1.01G of memory #1050
I have narrowed it down to the specific memory range 1025-1054.1M. When one runs an app with 1024M or less, or 1054.2M or more, it works. I also see the exact same behavior on firecracker (it does not use ACPI, which I suspected might somehow collide with how we map memory). With 1054.1M of memory, OSv adds free ranges like so, in this order:
With 1054.2M:
The only difference is the size of the last range, around 31-32 MB.
This looks more and more like some sort of bug that corrupts page tables. When I put a breakpoint right after the memory is set up, at the end of arch-setup.cc:setup_free_memory(), I am able to print the value at the 1GB address like so:
However, if I continue and let it crash I see this in gdb:
So clearly the address 0xffff800040000000 was mapped but somehow got unmapped? Or, more generally, everything above 1GB is no longer accessible. Corrupt page tables? (This is running with 1025M of memory.)
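For context, OSv's x86-64 code places the linear map of physical memory at a fixed virtual offset (0xffff800000000000, the phys_mem base in arch/x64). Assuming that base, a quick sketch shows the faulting virtual address decodes to physical address 0x40000000, i.e. exactly the 1GB boundary where the problematic tail range begins:

```python
# Sketch: decode the faulting address, assuming OSv's x86-64 linear-map
# base of 0xffff800000000000 (phys_mem in arch/x64).
PHYS_MEM_BASE = 0xFFFF800000000000

def virt_to_phys(vaddr: int) -> int:
    """Translate a linear-map virtual address back to its physical address."""
    return vaddr - PHYS_MEM_BASE

phys = virt_to_phys(0xFFFF800040000000)
print(hex(phys))  # 0x40000000, i.e. exactly 1 GB
```

So whatever unmaps (or never maps) memory here does so precisely from the 1GB mark upward, which points at how the map handles memory beyond the first gigabyte.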
I have sent a patch that fixes this issue and #1049. See this debug output from setting up memory before the patch:
After patch:
I can easily run the "rogue" image (for example) with as little as 40M of memory, so unsurprisingly I have no problems running it with 1G, 2G, 4G or 8G of memory.
But when I try with 1.01G, I get this crash. Note that the crash happens before the actual application runs, so it happens with every image, not just "rogue" (I first saw it on tst-huge.so, which I was testing for #1049):
The gdb backtrace (I'm leaving out all the nested problems that happen after the first problem and confuse the situation further):
It seems we have a bug in linear_map() when the memory size is a tiny bit over 1GB?
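One way such a bug can arise (a hedged sketch, not OSv's actual code): a mapping routine that covers a range with the largest naturally aligned page size must fall back to 2MB and 4KB pages for the tail past a 1GB boundary, and mishandling that remainder leaves everything above 1GB unmapped. The helper below (split_into_pages is a hypothetical name) shows the correct split for a region of 1GB plus 10MB, roughly the shape of the failing configuration:

```python
# Hypothetical illustration: split a physical range into naturally aligned
# 1 GB / 2 MB / 4 KB pages, largest first. A bug in the fallback for the
# sub-1GB tail would leave everything past the 1 GB mark unmapped.
PAGE_SIZES = [1 << 30, 2 << 20, 4 << 10]  # 1 GB, 2 MB, 4 KB

def split_into_pages(start: int, size: int):
    """Yield (address, page_size) pairs covering [start, start + size)."""
    end = start + size
    while start < end:
        for ps in PAGE_SIZES:
            # The page must be naturally aligned and fit in what remains.
            if start % ps == 0 and start + ps <= end:
                yield (start, ps)
                start += ps
                break

# 1 GB + 10 MB: one 1 GB page, then five 2 MB pages for the 10 MB tail.
pages = list(split_into_pages(0, (1 << 30) + (10 << 20)))
print(pages[0], len(pages))
```

With exactly 1GB (or less) the tail never exists, and with enough extra memory the tail is large enough that a rounding error elsewhere may be masked, which would fit the observed 1025-1054.1M failure window.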
I don't know if this is a recent regression or a very old bug - I'm not sure I ever specifically tried to run with 1.01GB of memory.