Comments for "https://os.phil-opp.com/allocator-designs/" #720
Great!
Thanks for the post! As usual, it is very interesting!
Looks like a typo:
The …
Or, if you use powers of two as the argument instead: …
I should mention a couple of things about using a bump allocator. First, while you normally can't delete entries from the allocator, you can delete the last entry. In other words, if you know what you are doing, you can allow a limited form of deallocation. Second, it is common practice to allocate from both the front and the back of the memory region. This is useful when some of the allocations are temporary: you put temporary allocations at the end, while longer-lasting allocations are put in the front. Anyway, both these methods are quite dangerous, but in terms of raw speed, there's no comparison.
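(To make the second point concrete, here is a minimal sketch of a double-ended bump allocator; the type and method names are made up for illustration, and alignment, thread safety, and the `GlobalAlloc` interface are ignored.)

```rust
/// Hypothetical double-ended bump allocator: long-lived allocations bump a
/// pointer up from the front, temporary ones bump a pointer down from the
/// back, and all temporaries can be thrown away at once.
pub struct DoubleEndedBump {
    heap_start: usize,
    heap_end: usize,
    front: usize, // next free byte for front (long-lived) allocations
    back: usize,  // one past the next free byte for back (temporary) allocations
}

impl DoubleEndedBump {
    pub const fn new(heap_start: usize, heap_end: usize) -> Self {
        DoubleEndedBump { heap_start, heap_end, front: heap_start, back: heap_end }
    }

    /// Long-lived allocation from the front of the region.
    pub fn alloc_front(&mut self, size: usize) -> Option<usize> {
        let end = self.front.checked_add(size)?;
        if end > self.back {
            return None; // the two ends would collide
        }
        let addr = self.front;
        self.front = end;
        Some(addr)
    }

    /// Temporary allocation from the back of the region.
    pub fn alloc_back(&mut self, size: usize) -> Option<usize> {
        let start = self.back.checked_sub(size)?;
        if start < self.front {
            return None;
        }
        self.back = start;
        Some(start)
    }

    /// Free all temporary (back) allocations in one go.
    pub fn reset_back(&mut self) {
        self.back = self.heap_end;
    }
}
```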
Thanks for your comments!
Good point! I'll extend that section. The fastest variant I'm aware of is relying on the optimized `align_offset` implementation of the standard library:

```rust
#[no_mangle]
fn align_up(addr: usize, align: usize) -> usize {
    let offset = (addr as *const u8).align_offset(align);
    addr + offset
}
```
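(For comparison, the widely used bitwise variant of `align_up`, which requires `align` to be a power of two; this is a generic illustration rather than code quoted from the post.)

```rust
/// Align `addr` upwards to `align`. `align` must be a power of two.
fn align_up(addr: usize, align: usize) -> usize {
    (addr + align - 1) & !(align - 1)
}
```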
One could also check in `dealloc` whether the end address of the freed allocation equals the current `next` address and, in that case, move `next` back to the start of the freed allocation.
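(A standalone sketch of that check; the field names mirror the post's `BumpAllocator`, but this is an illustration rather than the post's code, and locking, alignment, and the `GlobalAlloc` plumbing are omitted.)

```rust
/// Sketch: a bump allocator whose `dealloc` can reclaim the most recently
/// allocated region.
pub struct BumpAllocator {
    heap_start: usize,
    next: usize,
    allocations: usize,
}

impl BumpAllocator {
    pub fn dealloc(&mut self, ptr: *mut u8, size: usize) {
        let freed_start = ptr as usize;
        if freed_start + size == self.next {
            // The freed region is the last allocation, so we can move
            // `next` back and make the memory reusable immediately.
            self.next = freed_start;
        }
        self.allocations -= 1;
        if self.allocations == 0 {
            // Everything was freed: reset the whole region.
            self.next = self.heap_start;
        }
    }
}
```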
Good point! While this is difficult to implement for a global allocator, it definitely works for manual allocations. I'll try to update the bump allocator discussion section with both your suggestions.
@senseiod Thanks!
@MikailBag Thank you! Could you maybe clarify what the typo is? I don't see it right now…
Nice comparison, thanks! For more on bump allocation, see this post (in Rust, even): … Also a small typo in the beginning: …
@amosonn Thanks for your comment!
I already link this post in the Discussion section as "can be optimized to just a few assembly operations". I deliberately decided against bumping from the end because the intention of the post is to explain a basic implementation, not to maximally optimize it. Regarding the alignment function: I think the …
Good point! I'll prepare a PR to fix this.
Thanks! Fixed in 4b8c902.
Ah sorry, I missed the link :). Regarding the various alignment implementations: …
Thanks for investigating! I'll update #721 to use the …
In the code for LinkedListAllocator, …
@Menschenkindlein Thanks! Fixed in 00fedc8 and 670ac60.
Thank you so much for these posts! And all of this integrated into an actual working/testable(!) system, not just bits and pieces without the oh-so-necessary glue to make them work together. I'm a beginner in both Rust and the nitty-gritty details of a kernel, and this is the best source of information for it I have ever found, period. I'm looking forward to the next post!
@eagle2com Thank you so much for your comment! Yes, it is a lot of work to create these posts, but comments like yours definitely make it worth it :). It's great to hear that the blog is useful to you as a beginner!
Hello! I have a question about memory alignment. When we want to allocate a memory region with a specific alignment, we might increment the starting address of the region and return that (aligned) address from our alloc implementation. Then when dealloc is later called with that aligned address, how do we find the original start of the region so we can free all of it? Thank you!
Regarding …
That's what I think too. If the address must be aligned, then a new block must be created when allocating, which will start at the original block's address and end before the aligned address. This means that the space between the original block's address and the aligned address must be big enough to hold a block. Alternatively, the original address could be stored in memory right before the returned address, which would waste some space.
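(A rough sketch of the second idea, i.e. storing the original start address directly below the pointer that `alloc` returns. This is a generic technique shown for illustration, not the post's implementation, and it assumes the padding in front of the aligned address is large enough to hold a `usize`.)

```rust
use core::mem::size_of;

/// Sketch: align `block_start` upwards while leaving room for a `usize`
/// header, and store the original start just below the returned pointer.
/// Assumes `align` is a power of two and the block is big enough.
unsafe fn alloc_with_header(block_start: usize, align: usize) -> *mut u8 {
    let after_header = block_start + size_of::<usize>();
    let aligned = (after_header + align - 1) & !(align - 1);
    let header = (aligned - size_of::<usize>()) as *mut usize;
    header.write_unaligned(block_start); // remember where the block really starts
    aligned as *mut u8
}

/// Sketch: in `dealloc`, read the header back to recover the original start.
unsafe fn original_block_start(ptr: *mut u8) -> usize {
    ((ptr as usize - size_of::<usize>()) as *const usize).read_unaligned()
}
```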
There is no need for a new block, because the block is already there. But …
@Mandragorian @Lowentwickler Good catch, thanks a lot for reporting! Seems like I forgot to handle this case. (I handle it in the …) Unfortunately, I don't have the time to fix it right now because it might require some non-trivial code changes. I therefore opened #796 to track this, and I will do my best to fix it soon.
This is pretty good! Reminds me of my university's CS course on re-implementing malloc. Question: do you have any resources you would recommend for someone looking to reimplement a slightly more complex allocator? I'm not looking to do anything state-of-the-art; I'm just itching to implement something a little cleaner, with less fragmentation and better time complexity, with multiple fixed-size pools and fallback allocators and the like.
Great to hear that you like it. Unfortunately I'm not aware of any such resources. I would recommend looking at the code and documentation of existing allocators such as jemalloc.
In this code, where does the `mut ListNode` 'node' live? I know this must be due to my poor understanding of Rust; does the fact that `ListNode::new` is `const` have an effect?
@danwt The
The only effect of `const` is that the function can also be evaluated at compile time, for example to initialize a `static`. At runtime it behaves like any other function and has no influence on where the resulting value is stored.
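(A trimmed illustration of that point, not the post's exact code: the `const` only matters in compile-time contexts such as a `static` initializer.)

```rust
struct ListNode {
    size: usize,
}

impl ListNode {
    const fn new(size: usize) -> Self {
        ListNode { size }
    }
}

// Allowed only because `new` is a `const fn`: evaluated at compile time.
static EMPTY_NODE: ListNode = ListNode::new(0);

fn main() {
    // At runtime, `new` behaves like any other function; this `node` simply
    // lives on the stack of `main` until it is moved or dropped.
    let node = ListNode::new(16);
    assert_eq!(node.size, 16);
    assert_eq!(EMPTY_NODE.size, 0);
}
```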
I see, thank you!
Hi Phil! It might be useful to mention, in brackets after the names of some allocators, other names they go by. For example, when I've come across a bump allocator before, it was always referred to as a "stack allocator"; also, the fixed-size block allocator may be known as a "pool allocator". Those of us with plenty of experience know this, but for less experienced readers it may be helpful to learn synonyms for the concepts you present here.
@RoidoChan That's a good idea. Do you know any good resource that gives an overview of common names? Normally Wikipedia has something like this, but the articles on memory allocation are fairly general. Also, it seems like some names are not really well-defined and are used for quite different kinds of allocators (e.g. slab allocator).
@phil-opp, hmmm, not off the top of my head. A cursory Google search brings up many CS course web pages and a Wikipedia page about memory allocation, but if you search for "mem allocator list", well, of course you get lots of articles about linked-list allocators!
@RoidoChan Ok, I think it makes sense to just stick to the most common names then. I just pushed 5e37a0e to at least mention the "stack allocator" and "pool allocator" names you suggested.
After going through the various allocator designs and following along with the implementation, it occurred to me that it would be useful to have a test which ensured that the block allocator is correctly falling back to the linked-list allocator. I wanted to post what I came up with to sanity-check my code as I'm not 100% confident that what I have written actually will correctly allocate a large block of memory on the heap.
I know that the size of this array should be 2400 bytes (usize * 300), which I have verified with core::mem::size_of. My understanding is that creating a Box around this array will therefore allocate 2400 bytes on the heap (thus validating that the fallback allocator did its job). Am I going about this right, and how could/should I verify for myself that the heap allocation did take place? I assume there is a way, but I'm just not seeing it due to being pretty far in the deep end of my programming abilities here. :) Loved the series on memory btw! It's probably the hardest thing to wrap my head around yet, but with a lot of re-reading I think I'm coming to understand how everything fits together here.
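(For what it's worth, a hedged sketch of such a test case. It assumes the post's custom test framework and its largest block size of 2048 bytes, so that the 2400-byte array cannot be served from the fixed-size blocks.)

```rust
// Sketch of a heap_allocation test case: 300 * size_of::<usize>() = 2400 bytes
// exceeds the largest block size, so the fixed-size block allocator has to
// defer to its linked-list fallback (assuming the post's configuration).
use alloc::boxed::Box;

#[test_case]
fn large_box_uses_fallback_allocator() {
    let values = Box::new([0usize; 300]);
    assert_eq!(core::mem::size_of_val(&*values), 2400);
    assert_eq!(values[299], 0); // touch the memory to make sure it's usable
}
```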
@drzewiec The most direct way to verify for yourself that the fallback allocator code is running is to put a debug print there. This is a little tricky, but you can look here for hints: https://fasterthanli.me/articles/small-strings-in-rust (a nice, but rather long article; the part with prints from the allocator is near the beginning). This is a bit of work though. Simpler methods - but which will all only tell you that memory was allocated, not by which allocator - include: …
This last one is something that you can actually write a unit test for; however, this only checks that some heap allocation was done, not by which allocator. Hope this helps!
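(One possible shape for such a unit test. The `HEAP_START` and `HEAP_SIZE` constants are assumed to be the ones defined in the post's allocator module; as noted, this only shows that the allocation landed on the kernel heap, not which allocator served it.)

```rust
// Sketch: verify that a Box's payload lies inside the kernel heap region.
use alloc::boxed::Box;
use blog_os::allocator::{HEAP_SIZE, HEAP_START};

#[test_case]
fn allocation_is_inside_heap_bounds() {
    let value = Box::new([0usize; 300]);
    let addr = value.as_ptr() as usize;
    assert!(addr >= HEAP_START && addr + 2400 <= HEAP_START + HEAP_SIZE);
}
```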
@amosonn That does help, thank you! I am somewhat ashamed that I didn't think of putting a debug print in the allocator code. I appreciate all the alternatives, as well!
First off, thanks for the excellent series, it's been extremely interesting to go through! Just a small note: if you skip the implementation of the linked list allocator, you won't already have `#![feature(const_fn)]` in your lib.rs. Just something I ran into.
…eBlockAllocator The post explicitly allows readers to skip the `LinkedListAllocator` implementation, so we should not rely on the reader having already enabled the unstable `const_fn` feature there. Reported in #720 (comment).
@diminishedprime Thanks for reporting! I pushed 10d84fa to fix this issue.
A small note: …
@jiayihu Sorry for the late reply!
This depends on whether you declare the … See: blog_os/src/allocator/fixed_size_block.rs, lines 14 to 20 in ca3dfc7
I found an issue with the allocators that was caused by using a mutable reference in a constant function (both in the linked list allocator and in the fixed-size allocator); removing that fixed it for me.
@DeathBySpork Thanks for sharing your problem and solution. Yes, adding the `const_mut_refs` feature is now required, since the updated post uses mutable references in `const` functions.
(See https://os.phil-opp.com/allocator-designs/#the-allocator-type and https://os.phil-opp.com/allocator-designs/#the-allocator-type-1)
Ah, I guess I didn't notice the change. I am sorry!
@DeathBySpork No worries, thanks for reporting your problems!
I still get the error "calls in statics are limited to constant functions, tuple structs and tuple variants". I've added `#![feature(const_in_array_repeat_expressions)]` and `#![feature(const_mut_refs)]` in main.rs but still get the error. How can I solve it? My code is at https://github.com/Ananta98/PetraOS. Thank you.
@Ananta98 Looks like you forgot to make your `Locked::new` function a `const fn`:

```diff
diff --git a/src/mm/allocator.rs b/src/mm/allocator.rs
index 1ee24e3..8b911b1 100644
--- a/src/mm/allocator.rs
+++ b/src/mm/allocator.rs
@@ -15,7 +15,7 @@ pub struct Locked<A> {
 }
 
 impl<A> Locked<A> {
-    pub fn new(inner : A) -> Self {
+    pub const fn new(inner : A) -> Self {
         Locked {
             inner : Mutex::new(inner),
         }
```

After this change, it works for me.
Thank you @phil-opp, it works now. Sorry, this was my fault.
Great to hear that! No worries, I'm happy to help. |
This issue was moved to a discussion.
You can continue the conversation there.
This is a general purpose comment thread for the Allocator Designs post.