Blocked IO Thread not woken #5605
Comments
You send a request with content, but the request is not authorized and it is responded to with a 401. If it's your application that sends the 401, then your application should read the request content before sending the 401. If you have set up some Jetty Authenticator and it's Jetty that replies with the 401, there is not much that can be done aside from making sure beforehand that your request can be accepted by the server. Jetty cannot read the content, as that would be an attack vector (imagine uploading a 16 GiB movie whenever an attacker discovers a URI that is replied to with a 401 -- the server would waste resources reading the request content just to discard it). If you have an authenticator that times out on the server, you have to make sure you renew the credentials before they expire.
I believe this 401 is normal, as I see the same response in other calls which do not error.
In the first dump, look at the point where the abort occurred.
The content is XML. Yes, I agree, that is the same conclusion I arrived at. What I don't understand is why that happened, and whether there's something I can do/configure to avoid this issue. And to confirm, this is just Jetty-side HTTP parsing, correct? I am trying to work out whether the issue is in Jetty, the web application, or potentially a 3rd party library.
As I said, if your application is generating the 401, it must read the request content up to -1. In the logs above no content is read, the response is generated, and a few lines later you can see
What happens next is that Jetty tries to consume the content that has arrived so far, exactly to be as nice as possible. It so happens that there are 13140 bytes already arrived that can be read; after that, Jetty has to close the connection. It may not happen every time because TCP may split the content differently, and there may be other reasons such as chunking, etc. If all the content has arrived, the request content will all be consumed and no error happens. It's not a Jetty issue. It's an application (or a library) not reading the content before the 401. If it's Jetty producing the 401 because you have set up a Jetty Authenticator, then, as said above, there is not much that can be done.
Isn't Jetty reading the content? The weird thing is that it is happening intermittently; for several hours it may show up only once, and all the other requests are processed fine. We never came across this issue when running the same application with Jetty 9.2.x. Thanks for your feedback.
Jetty is reading the content on behalf of the application. It is the application that calls read(). If the application does not call read() until -1 is returned, the content remains unconsumed. As I said, the fact that you get this intermittently may be due to the fact that the whole content has already arrived at the server when the 401 is produced. It may depend on the size of the content, how it is split by TCP, on the HTTP transfer encoding, and on the content encoding too.
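To make the application-side option above concrete, here is a minimal sketch, assuming the standard javax.servlet API; the servlet class and the `isAuthorized()` helper are invented for illustration. It reads the request body until -1 before committing its own 401. As noted elsewhere in this thread, draining unconditionally re-opens the resource-exhaustion concern for very large bodies, so a real application would likely cap how much it is willing to read.

```java
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative servlet: the application itself drains the request body to -1
// before sending the 401, so Jetty is not left with unconsumed input.
public class AuthCheckingServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
        if (!isAuthorized(request)) {
            InputStream in = request.getInputStream();
            byte[] buffer = new byte[8192];
            while (in.read(buffer) != -1) {
                // discard the body so the connection can stay persistent
            }
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        // ... normal processing of the authorized request ...
    }

    // Placeholder for the application's own authorization check.
    private boolean isAuthorized(HttpServletRequest request) {
        return request.getHeader("Authorization") != null;
    }
}
```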
The interesting thing to consider here is that it is Jetty, not the application, that is sending the 401. So perhaps we do have a responsibility to consume the input... or at least to make an attempt at it. I'm wondering if internally we should be doing the following before committing the 401:
@sbordet how would that work for HTTP/2? Would it be that we send the 401 HEADERS frame and then a reset? Alternatively, we could asynchronously consume all the input on 401 responses, thus reducing the attack vector as we'd not block a thread.
@sbordet I have verified that we don't normally add a `Connection: close` in this case. So in summary, I think we are doing the right thing by not reading the content. However, we could be clearer by sending a `Connection: close` when the content cannot be consumed.
@gregw +1 on adding `Connection: close`. For HTTP/2, since it's multiplexed, if the client sends the whole content, the server will read it and discard it.
Please make the
@joakime well, I'd say that sending content that may need to be authenticated is wasteful :) But yes, at the very least we will only add a `Connection: close` if there is content, so most GETs will be OK. If it is simple, then we might try a consumeAll prior to sending the 401, so we can know whether all the content can be consumed or not. @tchgitav In your use case, where you are sending content that might be refused every so often, it may be better to send
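To visualize the direction discussed above, here is a rough sketch only: it is not Jetty's actual implementation, and `sendUnauthorized` is an invented helper. The idea is to drain only the request content that is already buffered, so no thread ever blocks on an attacker-controlled body, and if the body still isn't fully consumed, to mark the response `Connection: close` before sending the 401. It assumes `ServletInputStream.isFinished()` from Servlet 3.1.

```java
import java.io.IOException;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

final class ErrorResponses {
    // Illustration only -- not Jetty's code.
    static void sendUnauthorized(HttpServletRequest request, HttpServletResponse response) throws IOException {
        ServletInputStream in = request.getInputStream();
        byte[] buffer = new byte[8192];
        // Read only bytes that are already available locally; never block
        // waiting for more content from the client.
        while (in.available() > 0 && in.read(buffer) != -1) {
            // discard
        }
        if (!in.isFinished()) {
            // Body not fully consumed: the connection cannot be safely reused,
            // so tell the client it will be closed.
            response.setHeader("Connection", "close");
        }
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    }
}
```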
Thanks for the discussion. I am currently looking into the source code of Apache CXF and the Woodstox UTF8Reader to see what might have happened. Regarding the 401, I will focus on that afterwards.
Add Connection:close if content can't be consumed during a sendError. Processed after the request has returned to the container. Signed-off-by: Greg Wilkins <[email protected]>
I am testing the results of upgrading the woodstox-core dependency to 6.2.1. For the 401 response above, I think this is normal Basic authentication behavior. In my initial debug log snippet, I don't see logs for HttpInput.read() like the ones below
@tchgitav The 401 is indeed a standard part of the auth conversation. So other than the warning you see, is this actually causing any problems for your client? Is it failing the next request? If so, any chance you can try building/testing #5637, which will give your client a `Connection: close` warning in those cases?
I believe my issue is during the authenticated request and not the initial unauthenticated request. I think if you change the 401 response to a connection close, it might not play well with Basic auth. Update: I tried running with the updated Woodstox dependencies and still observe this exception.
@tchgitav The request can't be authenticated, else we would not be sending a 401. So can you say whether you are just seeing the exception, or whether you are also seeing bad behaviour (e.g. a missed request/response)?
Yes, we are seeing the missed request on the application side. No exceptions in the server logs.
@tchgitav So the missed request on the application side is the one that the 401 is sent for? Do you know why it is getting a 401?
The missed request is the one with the credentials; it was being parsed, but for some reason the connection was aborted due to the IOException: unconsumed input.
The client does not resend the request if there was an error in the connection.
I thought we agreed that was the normal behavior with Basic authentication. We are looking to test pre-emptive authentication to see if that helps.
@tchgitav Consider any server that uses BASIC authentication. An attacker can send an unauthenticated POST request with a 4GB body. Because it does not go to the application, there is no chance for the app to look at it and say: whoa, that's too big, I'm not going to consume that. Instead Jetty is sending the 401. Now if Jetty just tried to read that 4GB to keep the connection open, then a thread would easily be consumed for a long time. The attacker could pretty simply consume the thread pool of the server whilst never passing authentication. So instead, Jetty tries to consume the request to keep the connection open, but if it cannot consume it without blocking, it will instead abort the connection... which a server is always entitled to do. The PR we are working on tries to give a bit more notice by including a `Connection: close` header. The problem for you, however, is that the request is a POST, so the client is not going to retry sending it when it sees a closed connection... not unless told to do so. The fundamental problem is not the closing connection, but that the client will not retry a POST if it has already sent it. Preemptive auth should help. It may also help to use
@gregw Enabling pre-emptive authentication did seem to help us. I understand your point regarding the possibility of the server closing the connection due to the 401.
I can pass that on to the dev team. They are using the C# libraries, so they might not have the capability to do so. Regarding Jetty closing the connection due to the 401, why do we see it intermittently? Also, why did we not have issues with Jetty 9.2.x; was it more relaxed in this connection-close area?
GETs are safe and idempotent, so clients are allowed to retry them automatically. So your client should be able to retry on a new connection when it sees the 401 and closed connection.
It depends on the speed of your network, the size of your OS buffers, and the dispatching of threads. Sometimes the server is able to consume all the content without blocking because it has all arrived and is sitting in local buffers. But sometimes it may not be able to read it all without blocking, so it closes the connection instead of blocking.
I think 9.2 just blocked, reading and discarding the entire body. So it worked for you all the time, but the server was vulnerable to the DOS attack I described.
Thanks for pinning down the cause of the issue we are seeing. Besides pre-emptive authentication, is the only other client-side recourse retry logic?
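For reference, a minimal sketch of what preemptive Basic authentication looks like on the client side. The client in this thread is C#, and the URL and credentials below are placeholders; the sketch uses `java.net.http.HttpClient` (Java 11+) purely for illustration. The point is that the `Authorization` header accompanies the very first request, so the server never has to answer a body-carrying request with a 401 challenge and then discard (or abort) the body.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PreemptiveBasicAuthExample {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials, Base64-encoded as "user:password" per RFC 7617.
        String credentials = Base64.getEncoder()
                .encodeToString("user:secret".getBytes(StandardCharsets.UTF_8));

        // Send the Authorization header preemptively with the POST, instead of
        // waiting for a 401 challenge after the body has already been sent.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/api")) // placeholder URI
                .header("Authorization", "Basic " + credentials)
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString("<payload/>"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```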
* Issue #5605 unconsumed input on sendError
  Add Connection:close if content can't be consumed during a sendError. Processed after the request has returned to the container.
  Signed-off-by: Greg Wilkins <[email protected]>
* Update from review
  + Add close on all uncommitted requests when content cannot be consumed.
* Update from review
  + fixed comment
  + space comma
* Only consume input in COMPLETE if response is >=200 (ie not an upgrade or similar)
* Updated to be less adventurous
  I do not think it was valid to always consumeAll in COMPLETE as this could break upgrades with both 101s and 200s. Instead I have reverted to having this consumeAll logic only:
  + in sendError once control has passed back to the container and we are about to generate an error page.
  + in front of all the sendRedirection that we do without calling the application first.
  Extra tests also added
* Updated to be less adventurous
  reverted test
* Testcase for odd sendError(400) issue.
  Signed-off-by: Joakim Erdfelt <[email protected]>
* Fix for odd sendError(400) issue.
  Signed-off-by: Simone Bordet <[email protected]>
* Testcase for odd sendError(400) issue.
  Signed-off-by: Joakim Erdfelt <[email protected]>
* Always try to consumeAll on all requests
* Refinements after testing in 10
* Refinements after testing in 10
  Fixed test
* Fixed comment from review
* Updates from review
  + added redirect methods that consumeAll
  + ensureContentConsumedOrConnectionClose renamed to ensureConsumeAllOrNotPersistent
  + ensureConsumeAllOrNotPersistent now handles HTTP/1.0 and HTTP/1.1 differently
* better consumeAll implementation
* update from review
  + better javadoc
  + filter out keep-alive
  + added more tests
* update from review
  + better javadoc
* update from review
  + fixed form redirection test for http 1.0 and 1.1
* update from review
  + HttpGenerator removes keep-alive if close present
  + Use isRedirection

Co-authored-by: Joakim Erdfelt <[email protected]>
Co-authored-by: Simone Bordet <[email protected]>
Signed-off-by: Joakim Erdfelt <[email protected]>
…st parsing. Writing content in separate writes may result in the server only reading partial content, producing a response with `Connection: close` that would cause the client socket to stop receiving data for the next response, failing the test. Signed-off-by: Simone Bordet <[email protected]>
@lorban Small update: while we can now run Jetty 10.x for hours, which was not possible earlier, some threads still get stuck, either like this:
or like this:
This is with Jetty at https://github.com/eclipse/jetty.project/tree/bf9318f2b8a53c8d8c1d572ad7f3e982e2613c82, without the latest commit 4ec51fb. I will retest the newest version...
@jaroslawr I just pushed what should be the final version in my PR with some cleanups.
@lorban Thanks, we are testing this version. It has been stable so far, but this last issue is sporadic enough that it will take a few days before it can be crossed off for good.
…-threads Fix #5605 Unblock non container Threads
…d-threads Jetty 10.0.x Fix #5605 Unblock non container Threads
@lorban Everything has been working great since the last fix, thank you!
…02 - CVE-2020-27218 Bump jetty.version to 9.4.35.v20201120. The [release notes](https://github.com/eclipse/jetty.project/releases/tag/jetty-9.4.35.v20201120) mention [issue 5605](jetty/jetty.project#5605): > java.io.IOException: unconsumed input during http request parsing which seems to match the description of [CVE-2020-27218](http://cve.circl.lu/cve/CVE-2020-27218) Author: Damien Diederen <[email protected]> Reviewers: Enrico Olivelli <[email protected]>, Norbert Kalmar <[email protected]>, Andor Molnar <[email protected]>, Patrick D. Hunt <[email protected]> Closes apache#1552 from ztzg/jetty-upgrade-CVE-2020-27218
Jetty version: 9.4.33.v20201020
Java version: 1.8
OS type/version: Red Hat 7.4
Description
I am seeing intermittent occurrences of connection loss from the server.
It is not frequent, but we never saw this issue when using the 9.2.x version before the upgrade.
From the client we see
KeepAliveFailure The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.
With the Jetty debug logs enabled, it looks like Jetty is aborting the connection due to an IOException: unconsumed input.
It is not always reproducible.
We wanted insight from the Jetty team to help determine the cause and how to avoid it.
Below is a snippet of the debug logs that captures the failing request.
Any help is appreciated.