quic: fix memory fragmentation #33912

Closed
wants to merge 1 commit

Conversation

jasnell
Member

@jasnell jasnell commented Jun 16, 2020

Previously, QuicPacket allocated a std::vector<uint8_t> of NGTCP2_MAX_PKT_SIZE bytes; the packet was serialized into that buffer, and the std::vector was then resized down to the number of bytes actually serialized. I suspect the memory fragmentation that you're seeing is caused by those resize operations not freeing memory in chunks that are aligned with the original allocation. This changes QuicPacket to use a stack allocation that is always NGTCP2_MAX_PKT_SIZE bytes; the size of the serialized packet is simply recorded without any resizing. When the memory is freed now, it should be freed in chunks large enough to cover subsequent allocations.
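
For illustration, here is a minimal sketch contrasting the two approaches described above: the old heap-backed vector that gets resized after serialization, and a fixed-size buffer that only records the serialized length. The class and member names (and the constant's value) are hypothetical; only NGTCP2_MAX_PKT_SIZE comes from the actual code.

```cpp
// Illustrative sketch only; names other than NGTCP2_MAX_PKT_SIZE are made up.
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

constexpr size_t NGTCP2_MAX_PKT_SIZE = 1280;  // placeholder value

// Before: heap-backed buffer, resized down after serialization.
class QuicPacketBefore {
 public:
  QuicPacketBefore() : data_(NGTCP2_MAX_PKT_SIZE) {}
  uint8_t* data() { return data_.data(); }
  // resize() records the new length but never releases the capacity.
  void set_length(size_t len) { data_.resize(len); }

 private:
  std::vector<uint8_t> data_;
};

// After: fixed-size storage inside the object; only the length is recorded.
class QuicPacketAfter {
 public:
  uint8_t* data() { return data_.data(); }
  void set_length(size_t len) { length_ = len; }
  size_t length() const { return length_; }

 private:
  std::array<uint8_t, NGTCP2_MAX_PKT_SIZE> data_{};
  size_t length_ = 0;
};
```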

Original PR: nodejs/quic#388

cc: @nodejs/quic

Checklist
  • make -j4 test (UNIX), or vcbuild test (Windows) passes
  • tests and/or benchmarks are included
  • documentation is changed or added
  • commit message follows commit guidelines

@nodejs-github-bot nodejs-github-bot added the c++ Issues and PRs that require attention from people who are familiar with C++. label Jun 16, 2020
@jasnell jasnell requested review from addaleax and danbev June 16, 2020 19:34
@addaleax
Member

I’ll echo this comment (nodejs/quic#388 (comment)):

std::vector::resize() never frees memory. If that is the expectation here, you should call std::vector::shrink_to_fit() after it. I suspect that would be the right fix here, if that's what's causing the problems… this PR in its current state should only ever increase memory usage, because it makes QuicPacket never use less than the maximum amount of data?

i.e. I think this PR should not change the situation at all, and a call to .shrink_to_fit() is what’s missing here.
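
As a standalone illustration of that point (not node code): resize() shrinks size() but leaves the capacity untouched, while shrink_to_fit() is a non-binding request to release the excess.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
  std::vector<uint8_t> buf(1200);       // e.g. a max-size packet buffer
  buf.resize(48);                       // the serialized packet turned out small
  std::cout << buf.capacity() << "\n";  // still >= 1200: memory not returned
  buf.shrink_to_fit();                  // ask the allocator to drop the excess
  std::cout << buf.capacity() << "\n";  // typically 48 on common implementations
}
```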

@jasnell
Member Author

jasnell commented Jun 16, 2020

I can convert this to use shrink_to_fit(), but the change here does appear to fix the originally reported problem: nodejs/quic#388 (comment) ... specifically, I think we can actually get a better result here by avoiding the use of the vector entirely.

@jasnell jasnell force-pushed the quic-memory-fragmentation branch from cdd2864 to bcea9f1 on June 16, 2020 20:36
@jasnell
Member Author

jasnell commented Jun 16, 2020

Let's see how CI goes across all platforms... ;-) https://ci.nodejs.org/job/node-test-commit/39059/

@mcollina mcollina left a comment
Member


lgtm

@jasnell jasnell added the quic Issues and PRs related to the QUIC implementation / HTTP/3. label Jun 17, 2020
@nodejs-github-bot
Collaborator

nodejs-github-bot commented Jun 18, 2020

Normal (non-QUIC) CI: https://ci.nodejs.org/job/node-test-pull-request/31924/

@jasnell jasnell force-pushed the quic-memory-fragmentation branch from bcea9f1 to de4a6e2 on June 18, 2020 17:27
@jasnell jasnell force-pushed the quic-memory-fragmentation branch from de4a6e2 to 41b6b11 on June 18, 2020 18:20
@jasnell jasnell force-pushed the quic-memory-fragmentation branch from 41b6b11 to cb99d08 on June 18, 2020 18:22
jasnell added a commit that referenced this pull request Jun 18, 2020
Original PR: nodejs/quic#388

Previously, QuicPacket was allocating an std::vector<uint8_t> of
NGTCP2_MAX_PKT_SIZE bytes, then the packet would be serialized into the
buffer, and the std::vector would be resized based on the number of
bytes serialized. I suspect the memory fragmentation that you're seeing
is because of those resize operations not freeing memory in chunks that
are aligned with the allocation. This changes QuicPacket to use a stack
allocation that is always NGTCP2_MAX_PKT_SIZE bytes and the size of the
serialized packet is just recorded without any resizing. When the memory
is freed now, it should be freed in large enough chunks to cover
subsequent allocations.

Signed-off-by: James M Snell <[email protected]>

PR-URL: #33912
Reviewed-By: David Carlier <[email protected]>
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: Denys Otrishko <[email protected]>
@jasnell
Member Author

jasnell commented Jun 18, 2020

Landed in d7d79f2

@addaleax
Member

I can convert this to use shrink_to_fit(), but the change here does appear to fix the originally reported problem: nodejs/quic#388 (comment) ... specifically, I think we can actually get a better result here by avoiding the use of the vector entirely.

@jasnell Okay, do you mind if I open a PR that does shrink memory? Because I don’t think unconditionally allocating the maximum packet length is an acceptable solution, tbh.

@jasnell
Member Author

jasnell commented Jun 19, 2020

What I've found in testing is that either (a) we are sending very small, generally fixed-size packets, or (b) we are sending packets very close to full size; there's rarely anything in between, and we generally know ahead of time which it will be. What I think I'd like to see first is: when we know we're going to send a smaller packet, we just create the QuicPacket with the smaller size so we can avoid resizing it at all.

The other thing I've been considering is having a ring buffer of QuicPacket instances that we can just reuse. Grab one, serialize into it, send it, and put it back on the queue when it's done to be reused, so that we can avoid the extra allocations almost entirely.

That said, you know I never mind a PR :-)
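
A rough sketch of that reuse idea, purely for illustration; every name below is hypothetical and this is not how the node/quic sources are structured. A small pool hands out fixed-size packet buffers and takes them back once the send completes, with a cap so the pool itself doesn't pin memory.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

constexpr size_t kMaxPktSize = 1280;  // stand-in for NGTCP2_MAX_PKT_SIZE

struct PooledPacket {
  std::array<uint8_t, kMaxPktSize> data{};
  size_t length = 0;
};

class PacketPool {
 public:
  // Grab a buffer to serialize into, reusing a returned one when possible.
  std::unique_ptr<PooledPacket> Acquire() {
    if (!free_.empty()) {
      auto pkt = std::move(free_.back());
      free_.pop_back();
      pkt->length = 0;
      return pkt;
    }
    return std::make_unique<PooledPacket>();
  }

  // Called once the UDP send completes; the buffer goes back on the free list.
  void Release(std::unique_ptr<PooledPacket> pkt) {
    if (free_.size() < kMaxPooled)
      free_.push_back(std::move(pkt));
    // Otherwise let it be destroyed so the pool's memory stays bounded.
  }

 private:
  static constexpr size_t kMaxPooled = 16;
  std::vector<std::unique_ptr<PooledPacket>> free_;
};
```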

@addaleax
Member

Grab one, serialize into it, send it, and put it back on the queue when it's done to be reused, so that we can avoid the extra allocations almost entirely.

To be clear, my concern wasn’t so much the time spent on allocations, but more the fact that these packets now keep memory usage up for a potentially long time.

@jasnell
Member Author

jasnell commented Jun 19, 2020

QuicPacket objects should only exist long enough to receive the serialized frames from ngtcp2 and be sent by the UDP wrap. They should not be long lived at all, and if they are, there's another problem to be fixed, as they should be freed as soon as libuv signals that the UDP packet has been sent. Before making further changes here, let's do some profiling to see how long these are kept alive.
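
For context, a hedged sketch of that lifetime using plain libuv; the real code routes sends through the UDP wrap, so the structure and names here are assumptions rather than the actual implementation. The packet owns its serialized bytes and is deleted in the send callback, i.e. as soon as libuv reports the send completed.

```cpp
#include <cstddef>
#include <uv.h>

// Hypothetical wrapper: the send request, the buffer descriptor, and the
// serialized bytes live and die together.
struct OutboundPacket {
  uv_udp_send_t req{};   // carries a back-pointer to this object via req.data
  uv_buf_t buf{};        // points at the serialized bytes below
  char data[1280];       // serialized frames, valid only until the send completes
};

void OnSend(uv_udp_send_t* req, int /*status*/) {
  // Freed as soon as libuv signals that the UDP packet has been sent.
  delete static_cast<OutboundPacket*>(req->data);
}

void SendPacket(uv_udp_t* handle, const struct sockaddr* addr, size_t len) {
  auto* pkt = new OutboundPacket();
  // ... serialize frames into pkt->data and set len accordingly ...
  pkt->buf = uv_buf_init(pkt->data, static_cast<unsigned int>(len));
  pkt->req.data = pkt;
  uv_udp_send(&pkt->req, handle, &pkt->buf, 1, addr, OnSend);
}
```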

@addaleax
Member

Ah yeah, if they aren’t kept alive until an ACK from the receiving side I think that should be okay :)

@jasnell
Member Author

jasnell commented Jun 19, 2020

Yeah, the ACKs only affect the buffer in its pre-serialized state. The QuicPacket only ever contains serialized frames, which are always transient. That's why I'm thinking a ring buffer may be beneficial here.

Labels
c++ Issues and PRs that require attention from people who are familiar with C++. experimental Issues and PRs related to experimental features. quic Issues and PRs related to the QUIC implementation / HTTP/3.