possible bug (deadlock) when posting batch request content over a certain length #1687
Comments
Also experiencing this issue. Any advice would be appreciated. |
Also hitting this with BatchRequestContent.getBatchRequestContent(), same issue with the PipedInputStream. v3 was very much unaffected. JavaDoc advises against using a PipedInputStream and PipedOutputStream on the same thread for this very reason. https://docs.oracle.com/javase/8/docs/api/java/io/PipedInputStream.html For my purposes, this was sufficient, as I'm writing the content of the BatchRequestContent to my own request container and emitting it through our own IO.
|
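For context, a minimal standalone JDK sketch (nothing Graph-specific; the class name PipeDeadlockDemo is made up) of the pitfall that JavaDoc describes: once more than the 1024-byte pipe buffer has been written, a write on the thread that is also supposed to read blocks forever, while moving the writer to a second thread works fine.
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
public class PipeDeadlockDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // default buffer: 1024 bytes
        byte[] payload = new byte[2048]; // larger than the pipe buffer
        // out.write(payload); // same thread: blocks forever once 1024 bytes are buffered
        // Writing from a second thread lets this thread drain the pipe as data arrives.
        Thread writer = new Thread(() -> {
            try {
                out.write(payload);
                out.close();
            } catch (Exception ignored) {
            }
        });
        writer.start();
        int total = 0;
        while (in.read() != -1) {
            total++;
        }
        writer.join();
        System.out.println("read " + total + " bytes"); // prints "read 2048 bytes"
    }
}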
@Ndiritu BatchRequestContent should not be using PipedInputStream; could you please check? |
This part in BatchRequestContent
will always block forever when the output stream holds more than 1024 bytes. A quick fix (at the cost of some additional memory usage, because toByteArray() copies the internal buffer) is to write the batch body to a ByteArrayOutputStream and wrap the result in a ByteArrayInputStream instead of using the piped streams.
This passes all the existing unit tests, including one I added that triggers the issue described here. To avoid using extra memory, you could subclass ByteArrayOutputStream and add a method that exposes the internal buffer without copying. |
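For illustration, a minimal sketch of that non-copying variant; the class and method names are hypothetical, not part of the SDK:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
// Hypothetical subclass: hands out the internal buffer directly, avoiding the extra
// copy that toByteArray() would make when turning the batch body into an InputStream.
final class ExposedByteArrayOutputStream extends ByteArrayOutputStream {
    InputStream toInputStream() {
        return new ByteArrayInputStream(this.buf, 0, this.count);
    }
}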
Just got the same issue today. I was trying to batch requests for a set of users. When I chunk the users into, say, 5 per batch POST request it works, however this really defeats the purpose of batching requests. |
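As a rough sketch of that stopgap (the postBatch consumer and the chunk size of 5 are purely illustrative, not an SDK API):
import java.util.List;
import java.util.function.Consumer;
// Illustrative only: split the items into chunks of at most chunkSize and hand each
// chunk to a caller-supplied postBatch action that builds and posts one batch request.
static <T> void postInChunks(List<T> items, int chunkSize, Consumer<List<T>> postBatch) {
    for (int i = 0; i < items.size(); i += chunkSize) {
        postBatch.accept(items.subList(i, Math.min(i + chunkSize, items.size())));
    }
}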
Also encountering this bug. Used this documentation for reference during implementation, but I'm performing list item updates with batches of 20. Here's a snippet of my implementation for reference:
// Create batch request content
final BatchRequestContent batchRequestContent = new BatchRequestContent(graphServiceClient);
final Map<String, String> requestIds = new HashMap<>();
docFieldValueSetMap.forEach((sharePointId, fieldValueSet) -> {
final RequestInformation patchRequestInformation = graphServiceClient
.sites()
.bySiteId(siteId)
.lists()
.byListId(documentLibraryListId)
.items()
.byListItemId(sharePointId)
.fields()
.toPatchRequestInformation(fieldValueSet);
final String requestId = batchRequestContent.addBatchRequestStep(patchRequestInformation);
requestIds.put(sharePointId, requestId);
});
// Send the batch request content to the /$batch endpoint (this is where it hangs)
final BatchResponseContent batchResponseContent = Objects.requireNonNull(graphServiceClient
.getBatchRequestBuilder()
.post(batchRequestContent, null));
Actually, it won't even complete with a batch size of 1. 😕 |
Also encountering this bug. Any update on when a fix will be released or simple workaround? |
Same here. This seems to have been an issue for months. I downgraded to v5 and it seems to work. This is clearly not the ideal situation, but I really don't know what was changed between v5 and v6 to cause this bug. |
I just ran into this issue, glad to see recent activity reporting it. Commenting to bump this issue <3 |
Hey everyone, |
There seems to be a bug with posting batch request content over a certain length (greater than DEFAULT_PIPE_SIZE = 1024 of PipedInputStream). Calling graphClient.getBatchRequestBuilder().post(batchRequestContent, null) goes into a deadlock. Could you please check this?
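For reference, a hypothetical snippet along these lines triggers the hang once the serialized batch body exceeds 1024 bytes; the userIds collection and the use of toGetRequestInformation() are only illustrative:
final BatchRequestContent batchRequestContent = new BatchRequestContent(graphServiceClient);
for (final String userId : userIds) { // enough steps that the serialized body exceeds DEFAULT_PIPE_SIZE
    batchRequestContent.addBatchRequestStep(
            graphServiceClient.users().byUserId(userId).toGetRequestInformation());
}
// Hangs while BatchRequestContent.getBatchRequestContent() serializes the body:
final BatchResponseContent batchResponseContent =
        graphServiceClient.getBatchRequestBuilder().post(batchRequestContent, null);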
Expected behavior
Graph API executes the batch request and returns some kind of response.
Actual behavior
Deadlock in
PipedInputStream.awaitSpace() line: 273
PipedInputStream.receive(byte[], int, int) line: 231
PipedOutputStream.write(byte[], int, int) line: 149
ByteArrayOutputStream.writeTo(OutputStream) line: 167
BatchRequestContent.getBatchRequestContent() line: 177
CustomBatchRequestBuilder(BatchRequestBuilder).toPostRequestInformation(BatchRequestContent) line: 84
CustomBatchRequestBuilder(BatchRequestBuilder).post(BatchRequestContent, Map<String,ParsableFactory>) line: 49
CustomBatchRequestBuilder.post(BatchRequestContent, Map<String,ParsableFactory>) line: 41
...
when calling graphClient.getBatchRequestBuilder().post(batchRequestContent, null).
Steps to reproduce the behavior
Maven pom.xml
Code: