Feature Request: Option to configure `highWaterMark` of HTTP response (#30107)
It looks like passing this option through will need a patch like the following:

```diff
diff --git a/lib/_http_incoming.js b/lib/_http_incoming.js
index ad3699cc44..4f9c501377 100644
--- a/lib/_http_incoming.js
+++ b/lib/_http_incoming.js
@@ -37,7 +37,9 @@ function readStop(socket) {
 /* Abstract base class for ServerRequest and ClientResponse. */
 function IncomingMessage(socket) {
-  Stream.Readable.call(this);
+  let streamOptions;
+
+  if (socket) {
+    streamOptions = {
+      highWaterMark: socket.readableHighWaterMark
+    };
+  }
+
+  Stream.Readable.call(this, streamOptions);
   this._readableState.readingMore = true;
```
@cjihrig thanks for looking into this! To clarify, does that mean the patch above will need to land for this to work as desired? The end result being you'd use it like so:

```javascript
http.request('https://example.com', { readableHighWaterMark: 1000000 }, (response) => {
  console.log(response.readableHighWaterMark); // 1000000
});
```
So, thinking about this a bit more, I don't think we'd want to officially support passing `readableHighWaterMark` through like that. Unfortunately, I think we'd still need something like the patch above to set the `highWaterMark`. I've opened #30135 as a possible solution.
Thanks for getting this implemented so quickly! I took a look at the PR and your solution makes sense and works for our use case 💯
This commit causes http.IncomingMessage instances to set their readableHighWaterMark value to the same value used in the underlying socket.

PR-URL: #30135
Fixes: #30107
Reviewed-By: Anna Henningsen <[email protected]>
Reviewed-By: Luigi Pinca <[email protected]>
Reviewed-By: Yongsheng Zhang <[email protected]>
Reviewed-By: Trivikram Kamat <[email protected]>
Is there a way to do this on the server side?
Is your feature request related to a problem? Please describe.
We have a Node service that loads a bunch of data over an HTTP connection using Server-Sent Events. We noticed, when comparing to implementations in other languages, that the data we receive winds up being in many more "chunks" in Node which leads to some performance issues.
After some digging, it seems that the issue is that the `http.ServerResponse` received when using `http.request()` uses the default `highWaterMark` value of 16kb inherited from the default `ReadableStream` class. This is much smaller than some of the messages we're processing, which we believe is the cause of the perf issues, since we must reconstruct the messages before parsing them fully.

Describe the solution you'd like
Ideally, an option to configure the `highWaterMark` size of a `response` when calling `http.request`. It might be that there is a way to do what we need here, but I was not able to find it if so. So I'm open to feedback on that front.
Describe alternatives you've considered
Some way to override the default `highWaterMark` size of a stream (e.g., a CLI option), but that feels more expansive than needed.

Thanks in advance for your help/consideration 😃