
possible bug (deadlock) when posting batch request content over a certain length #1687

Closed
dan-drewes opened this issue May 10, 2024 · 10 comments · Fixed by #1688
Comments

dan-drewes commented May 10, 2024

There seems to be a bug when posting batch request content over a certain length (greater than the DEFAULT_PIPE_SIZE of 1024 bytes used by PipedInputStream).

Calling graphClient.getBatchRequestBuilder().post(batchRequestContent, null) results in a deadlock.

Could you please check this?

Expected behavior

Graph API executes the batch request and returns some kind of response.

Actual behavior

Deadlock in

PipedInputStream.awaitSpace() line: 273
PipedInputStream.receive(byte[], int, int) line: 231
PipedOutputStream.write(byte[], int, int) line: 149
ByteArrayOutputStream.writeTo(OutputStream) line: 167
BatchRequestContent.getBatchRequestContent() line: 177
CustomBatchRequestBuilder(BatchRequestBuilder).toPostRequestInformation(BatchRequestContent) line: 84
CustomBatchRequestBuilder(BatchRequestBuilder).post(BatchRequestContent, Map<String,ParsableFactory>) line: 49
CustomBatchRequestBuilder.post(BatchRequestContent, Map<String,ParsableFactory>) line: 41
...

when calling

final BatchResponseContent batchResponseContent = Objects.requireNonNull(
            graphClient.getBatchRequestBuilder().post(batchRequestContent, null));

Steps to reproduce the behavior

Maven pom.xml

<dependency>
    <groupId>com.microsoft.graph</groupId>
    <artifactId>microsoft-graph</artifactId>
    <version>6.8.0</version>
</dependency>
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-identity</artifactId>
    <version>1.12.1</version>
</dependency>

Code:

// Create the batch request content with the steps
final BatchRequestContent batchRequestContent = new BatchRequestContent(
		graphClient);

//add some request steps to increase content length
for (int i = 1; i <= 9; i++)
{
	// Use the Graph client to generate the requestInformation object for GET /me
	final RequestInformation meRequestInformation = graphClient.me()
			.toGetRequestInformation();

	// Add the requestInformation objects to the batch request content
	batchRequestContent
			.addBatchRequestStep(meRequestInformation);
}

try
{
	// Send the batch request content to the /$batch endpoint
	final BatchResponseContent batchResponseContent = Objects.requireNonNull(
			graphClient.getBatchRequestBuilder().post(batchRequestContent, null));

	System.out.println("response received");

}
catch (IOException e)
{
	e.printStackTrace();
}
@kbullock-re

Also experiencing this issue. Any advice would be appreciated.

cschenkeltherolf commented May 22, 2024

Also hitting this with BatchRequestContent.getBatchRequestContent(); same issue with the PipedInputStream. v3 was unaffected.

The JavaDoc advises against using a PipedInputStream and a PipedOutputStream on the same thread for this very reason.

https://docs.oracle.com/javase/8/docs/api/java/io/PipedInputStream.html

For my purposes, this was sufficient, as I'm writing the content of the BatchRequestContent to my own request container and emitting it through our own IO.

return outputStream.toString(StandardCharsets.UTF_8);

@petrhollayms

@Ndiritu BatchRequestContent should not be using PipedInputStream; could you please check?

debuhr commented Jun 5, 2024

This part of BatchRequestContent

PipedInputStream in = new PipedInputStream();
try(final PipedOutputStream out = new PipedOutputStream(in)) {
    outputStream.writeTo(out);
    return in;
}

will always block when the output stream holds more than 1024 bytes (the default pipe buffer size), because the same thread that writes into the pipe is the one that would later read from it, so nothing drains the buffer during the write.
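
To see the underlying JDK behaviour in isolation, here is a minimal standalone sketch (not SDK code) that reproduces the same single-threaded pipe block:

import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeBlockDemo {
    public static void main(String[] args) throws Exception {
        // The default pipe buffer is 1024 bytes.
        final PipedInputStream in = new PipedInputStream();
        try (PipedOutputStream out = new PipedOutputStream(in)) {
            // Writing more than 1024 bytes from the same thread blocks in
            // PipedInputStream.awaitSpace(), because no other thread is
            // reading from `in` to free buffer space.
            out.write(new byte[2048]);
        }
        System.out.println("never reached");
    }
}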

A quick fix (at the cost of some additional memory usage, because toByteArray() copies the internal array) would be to replace the above code with:

return new ByteArrayInputStream(outputStream.toByteArray());

This passes all the existing unit tests, including one I added that triggers the issue described here.

To avoid using extra memory, you could subclass ByteArrayOutputStream and add a toInputStream() method that uses the internal array directly instead of copying it.
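
A minimal sketch of that subclass idea (the class and method names here are illustrative, not part of the SDK):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// ByteArrayOutputStream exposes its buffer through the protected fields `buf`
// and `count`, so a subclass can wrap it in an InputStream without copying.
final class ExposedByteArrayOutputStream extends ByteArrayOutputStream {
    synchronized InputStream toInputStream() {
        return new ByteArrayInputStream(buf, 0, count);
    }
}

getBatchRequestContent() could then write its JSON into such a stream and return toInputStream(), avoiding both the pipe and the extra copy.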

petarov commented Jul 6, 2024

microsoft-graph:6.13.0

Just got the same issue today. I was trying to batch graphClient.users().byUserId(id).memberOf().toGetRequestInformation() for 15 users, but the .post() call blocks indefinitely.

When I chunk the users into, say, 5 per batch post request, it works; however, this really defeats the purpose of batching requests.
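
Roughly what the chunked version looks like (the postInChunks shape is just for illustration; Graph SDK imports for GraphServiceClient, BatchRequestContent and BatchResponseContent are omitted and may differ by SDK version):

import java.io.IOException;
import java.util.List;
import java.util.Objects;

// Splits the user IDs into groups of `chunkSize` and posts one batch per group.
static void postInChunks(GraphServiceClient graphClient, List<String> userIds, int chunkSize)
        throws IOException {
    for (int start = 0; start < userIds.size(); start += chunkSize) {
        final List<String> chunk = userIds.subList(start, Math.min(start + chunkSize, userIds.size()));
        final BatchRequestContent batch = new BatchRequestContent(graphClient);
        for (final String id : chunk) {
            batch.addBatchRequestStep(
                    graphClient.users().byUserId(id).memberOf().toGetRequestInformation());
        }
        final BatchResponseContent response = Objects.requireNonNull(
                graphClient.getBatchRequestBuilder().post(batch, null));
        // Handle each chunk's `response` here.
    }
}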

Ndiritu self-assigned this Jul 9, 2024
KrashLeviathan commented Jul 9, 2024

Also encountering this bug. I used this documentation for reference during implementation, but I'm performing list-item updates in batches of 20. Here is a snippet of my implementation:

// Create batch request content
final BatchRequestContent batchRequestContent = new BatchRequestContent(graphServiceClient);
final Map<String, String> requestIds = new HashMap<>();
docFieldValueSetMap.forEach((sharePointId, fieldValueSet) -> {
    final RequestInformation patchRequestInformation = graphServiceClient
            .sites()
            .bySiteId(siteId)
            .lists()
            .byListId(documentLibraryListId)
            .items()
            .byListItemId(sharePointId)
            .fields()
            .toPatchRequestInformation(fieldValueSet);
    final String requestId = batchRequestContent.addBatchRequestStep(patchRequestInformation);
    requestIds.put(sharePointId, requestId);
});

// Send the batch request content to the /$batch endpoint (this is where it hangs)
final BatchResponseContent batchResponseContent = Objects.requireNonNull(graphServiceClient
        .getBatchRequestBuilder()
        .post(batchRequestContent, null));

Actually, it won't even complete with a batch size of 1. 😕

AFrieze commented Jul 31, 2024

Also encountering this bug. Any update on when a fix will be released, or a simple workaround?

@kbullock-re

> Actually, it won't even complete with a batch size of 1. 😕

Same here. This seems to have been an issue for months. I downgraded to v5 and it seems to work. This is clearly not ideal, but I don't know what changed between v5 and v6 to cause this bug.

baywet transferred this issue from microsoftgraph/msgraph-sdk-java Jul 31, 2024
@azatecas

I just ran into this issue; glad to see recent activity reporting it. Commenting to bump it. <3

baywet commented Jul 31, 2024

Hey everyone,
Thank you for your patience with the delay; this had been buried under other higher-priority items for the past couple of months.
I authored #1688 to address the issue.
