Fix close_write() being called too eagerly #2840
Merged
Fix #2831
The problem turned out to be very stupid:

Calling `close_write()` is just a shortcut for `outgoing_buffer = None`. So you're supposed to call `close_write()` when you have nothing more to write.

The problem is that we're calling `write_from_vec_deque(data)`, which transfers data from `data` to `outgoing_buffer`. Then we realize that `data` is empty (because it's been entirely copied to `outgoing_buffer`) and call `close_write()`. This sets `outgoing_buffer` to `None` and thus erases the data that we've just copied.
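To make the sequence concrete, here is a minimal self-contained sketch of the bug. The `Substream` type and the method bodies are simplified stand-ins for illustration, not the actual definitions from this crate:

```rust
use std::collections::VecDeque;

struct Substream {
    /// `None` means the writing side is closed.
    outgoing_buffer: Option<Vec<u8>>,
}

impl Substream {
    /// Moves the content of `data` into `outgoing_buffer`.
    fn write_from_vec_deque(&mut self, data: &mut VecDeque<u8>) {
        if let Some(buffer) = self.outgoing_buffer.as_mut() {
            buffer.extend(data.drain(..));
        }
    }

    /// Shortcut for `outgoing_buffer = None`.
    fn close_write(&mut self) {
        self.outgoing_buffer = None;
    }
}

fn main() {
    let mut substream = Substream { outgoing_buffer: Some(Vec::new()) };
    let mut data: VecDeque<u8> = VecDeque::from(vec![1, 2, 3]);

    // Transfer everything that is queued for writing.
    substream.write_from_vec_deque(&mut data);

    // Bug: `data` is now empty *because* its content was just moved into
    // `outgoing_buffer`, yet "nothing left in `data`" was treated as
    // "nothing left to write"...
    if data.is_empty() {
        substream.close_write(); // ...erasing the bytes we just queued.
    }

    assert!(substream.outgoing_buffer.is_none()); // the 3 bytes are lost
}
```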
Instead, we now call `close_write` at the next iteration (when the data will have been extracted out), which is what you're supposed to do.
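A sketch of the corrected ordering, continuing the types from the block above (`extract_outgoing` and `drive` are hypothetical helpers standing in for the real write loop, not code from this crate):

```rust
/// Hypothetical helper standing in for flushing `outgoing_buffer` to the
/// socket; here it simply takes the queued bytes out of the buffer.
fn extract_outgoing(substream: &mut Substream) -> Vec<u8> {
    substream
        .outgoing_buffer
        .as_mut()
        .map(std::mem::take)
        .unwrap_or_default()
}

fn drive(substream: &mut Substream, data: &mut VecDeque<u8>) {
    loop {
        // 1. Extract whatever was queued on the *previous* iteration.
        let _bytes_for_socket = extract_outgoing(substream);

        // 2. Queue up any new data.
        substream.write_from_vec_deque(data);

        // 3. Close only once both `data` and `outgoing_buffer` are empty,
        //    i.e. one iteration *after* the last copy, when the copied
        //    bytes have already been extracted.
        if data.is_empty()
            && substream.outgoing_buffer.as_ref().map_or(true, |b| b.is_empty())
        {
            substream.close_write();
            break;
        }
    }
}
```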
Additionally, as a safety measure, I've replaced `close_write` with `close_write_if_empty`. I think that `close_write` is too dangerous: there's never a situation where you would want to cancel sending data that you've already queued for writing.
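One plausible shape for such a guard, again continuing the simplified `Substream` from the sketches above (the real `close_write_if_empty` may differ):

```rust
impl Substream {
    /// Closes the writing side only when nothing is queued, so queued
    /// data can never be silently discarded. Here a non-empty buffer is
    /// treated as a logic error caught by `debug_assert!`.
    fn close_write_if_empty(&mut self) {
        match &self.outgoing_buffer {
            Some(buffer) if buffer.is_empty() => self.outgoing_buffer = None,
            Some(_) => debug_assert!(false, "data still queued for writing"),
            None => {} // already closed; nothing to do
        }
    }
}
```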