PVF workers: consider zeroing all process memory #634
Comments
We have … What if we just forward all the …?
In this case we have to consider performance. I'm not sure if forwarding …
Is there some trick to tell libc and/or linux that we should no longer have access to the old pages? What if you give the PVF worker a new pid?
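One mechanism that fits this description, on Linux at least, is madvise(MADV_DONTNEED): for a private anonymous mapping, the first access after the call faults in fresh zero-filled pages, as if the range were newly mapped. A minimal sketch, assuming a page-aligned mmap'd region (forget_pages is a hypothetical helper name):

```rust
// Sketch: discard the contents of a private anonymous mapping so that
// the next access sees fresh zero-filled pages. Assumes `addr` is
// page-aligned and `len` is a multiple of the page size.
unsafe fn forget_pages(addr: *mut libc::c_void, len: usize) {
    // MADV_DONTNEED drops the backing pages; for private anonymous
    // memory, subsequent reads fault in zero pages as if newly mapped.
    if libc::madvise(addr, len, libc::MADV_DONTNEED) != 0 {
        panic!("madvise failed: {}", std::io::Error::last_os_error());
    }
}
```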
@burdges Interesting idea, we could fake a new process in between PVF jobs, as right now all jobs run in the same process and previous jobs would provide randomness on the heap. OTOH, if we benchmark zeroing all alloc'ed pages and the performance is okay, then this wouldn't be needed; otherwise we should explore it.
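For reference, a minimal sketch of the new-pid idea, assuming a Unix worker; run_job_in_child and the job closure are illustrative names, not anything from the codebase:

```rust
// Sketch: run each PVF job in a freshly forked child. The child gets a
// copy-on-write snapshot of the parent, and whatever it leaves behind
// on its heap and stack is discarded when it exits, so no state from
// one job is visible to the next.
fn run_job_in_child(job: impl FnOnce()) {
    match unsafe { libc::fork() } {
        -1 => panic!("fork failed: {}", std::io::Error::last_os_error()),
        0 => {
            // Child: execute the job, then exit without unwinding back
            // into the shared address space.
            job();
            std::process::exit(0);
        }
        child_pid => {
            // Parent: reap the child; its entire address space (and
            // pid) disappears with it.
            let mut status = 0;
            unsafe { libc::waitpid(child_pid, &mut status, 0) };
        }
    }
}
```

The trade-off is a fork and teardown per job, which would also need benchmarking.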
Well, there's …
ISSUE
Overview
As part of our security work, we are concerned about attackers executing arbitrary code and getting access to sources of randomness. This might allow them to e.g. vote against a candidate with 50% chance, preventing consensus and stalling the chain.
@eskimor brought up zeroing out memory to eliminate a source of randomness (uninitialized memory), so I looked into it more. Unfortunately there are still other sources of randomness that we would need virtualization for: #652. It might still be good to do this where feasible.
Zeroing the heap
The modern OS already provides a zeroed-out, COW shared page on first allocation, but subsequent allocations may reuse pages that were released by the same process without zeroing them out. This mechanism is mainly intended to prevent data from leaking between processes, so an attacker could still abuse subtle indeterminacies in the allocator's memory-reuse strategy within the same process to obtain some pseudo-randomness.1 If it's not too expensive, maybe we could have the allocator wrapper apply a similar strategy and hand out zeroed memory whenever it provides memory to the process.
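As a rough illustration of what such a wrapper could look like (a sketch only; ZeroingAllocator is a hypothetical name, and scrubbing on free in addition to allocating zeroed is deliberately belt-and-braces):

```rust
use std::alloc::{GlobalAlloc, Layout, System};

// Sketch of an allocator wrapper that only ever hands out zeroed
// memory, mirroring what the kernel already does across processes.
struct ZeroingAllocator;

unsafe impl GlobalAlloc for ZeroingAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Fresh mappings are still satisfied cheaply from the kernel's
        // shared zero page; recycled blocks get an actual memset.
        System.alloc_zeroed(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // Also scrub on free, so data can't linger in blocks the
        // allocator keeps around for reuse; explicit_bzero cannot be
        // optimized out as a dead store.
        libc::explicit_bzero(ptr as *mut libc::c_void, layout.size());
        System.dealloc(ptr, layout);
    }
}

#[global_allocator]
static ALLOCATOR: ZeroingAllocator = ZeroingAllocator;
```

Whether this is acceptable comes down to the benchmarks mentioned above, since it turns every reused allocation into a memset.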
Zeroing the stack
We could zero out the stack up to the native stack limit. To prevent the operation from being optimized away, we can use something like explicit_bzero, which is supported by the Rust libc crate.
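A minimal sketch of what that could look like between jobs; the 1 MiB wipe size and the wipe_stack name are assumptions for illustration, and the size must stay below the worker's stack limit:

```rust
// Sketch: wipe a chunk of the stack by reserving a large frame and
// clearing it with explicit_bzero, which (unlike a plain memset of
// memory that is never read again) must not be optimized away.
const WIPE_BYTES: usize = 1024 * 1024; // assumption: below the stack limit

#[inline(never)] // ensure the buffer gets its own, fresh stack frame
fn wipe_stack() {
    let mut scratch = core::mem::MaybeUninit::<[u8; WIPE_BYTES]>::uninit();
    unsafe {
        libc::explicit_bzero(scratch.as_mut_ptr() as *mut libc::c_void, WIPE_BYTES);
    }
}
```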
That said, explicit_bzero apparently does not provide full protection, but it might be better than nothing. We should also make sure that stack protection is turned on (see below).
Alternative: binary hardening checks
Modern OS/architectures do provide some limited protections against exploits of this kind, as long as the binary was built properly. I raised this issue to add some checks for that.
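For instance (a sketch, not necessarily the checks from that issue), one cheap signal is whether the worker binary references __stack_chk_fail, which compilers emit when stack protection is enabled; has_stack_protector and the reliance on nm are assumptions of this example:

```rust
use std::process::Command;

// Sketch: binaries built with -fstack-protector* reference the libc
// symbol __stack_chk_fail, so finding it among the dynamic symbols is
// a rough signal that stack protection was compiled in.
fn has_stack_protector(binary_path: &str) -> std::io::Result<bool> {
    let output = Command::new("nm").args(["-D", binary_path]).output()?;
    Ok(String::from_utf8_lossy(&output.stdout).contains("__stack_chk_fail"))
}
```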
Footnotes
1. On the other hand, subtle indeterminacies will always exist and it's not feasible to eliminate them all.