
HDDS-3009. Create Container should retry other volumes if writes fail #5300

Merged 1 commit into apache:master on Sep 19, 2023

Conversation

sodonnel (Contributor):

What changes were proposed in this pull request?

The reported issue had a datanode with 2 disks:

/dev/nvme2n1    985G  935G     0 100% /ozone-data
/dev/nvme4n1    985G  453G  482G  49% /ozone-data1

One disk is full, yet the create container call still failed with the following stack trace:

2020-02-13 10:58:01,097 ERROR org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil: Unable to create directory for metadata storage. Path: /ozone-data/hdds/cf793a84-8529-4897-8bf9-f18ec79e8ad6/current/containerDir1/536/metadata
2020-02-13 10:58:01,097 INFO org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler: Operation: CreateContainer , Trace ID:  , Message: Container creation failed. Unable to create directory for metadata storage. Path: /ozone-data/hdds/cf793a84-8529-4897-8bf9-f18ec79e8ad6/current/containerDir1/536/metadata , Result: CONTAINER_INTERNAL_ERROR , StorageContainerException Occurred.
org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: Container creation failed. Unable to create directory for metadata storage. Path: /ozone-data/hdds/cf793a84-8529-4897-8bf9-f18ec79e8ad6/current/containerDir1/536/metadata
        at org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:177)
        at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handleCreateContainer(KeyValueHandler.java:244)
        at org.apache.hadoop.ozone.container.keyvalue.KeyValueHandler.handle(KeyValueHandler.java:164)
        at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.createContainer(HddsDispatcher.java:412)
        at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:248)
        at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:162)
        at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:396)
        at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.runCommand(ContainerStateMachine.java:406)
        at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$handleWriteChunk$2(ContainerStateMachine.java:441)
        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Unable to create directory for metadata storage. Path: /ozone-data/hdds/cf793a84-8529-4897-8bf9-f18ec79e8ad6/current/containerDir1/536/metadata
        at org.apache.hadoop.ozone.container.keyvalue.helpers.KeyValueContainerUtil.createContainerMetaData(KeyValueContainerUtil.java:73)
        at org.apache.hadoop.ozone.container.keyvalue.KeyValueContainer.create(KeyValueContainer.java:142)
        ... 12 more

The reason is that the volume choosing policy filters out disks it knows are full, but if there is lag or a miscalculation in the free-space accounting, a disk may already be full without the policy knowing it yet. A disk could also fill up in the window between being chosen and being written to.

Further, if a disk is bad, any IO error it produces is not caught, and the container creation fails outright.

This change retries other disks: when a create fails, the failed disk is removed from the candidate set and another volume is chosen, repeating until the create succeeds or no disks remain.
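The retry loop described above can be sketched roughly as follows. This is a hypothetical, simplified illustration (all names here are invented; the real fix lives in the Ozone container-creation path and volume choosing policy, and the real code throws StorageContainerException rather than an unchecked exception):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of the retry-on-failure idea: keep choosing a volume,
// and when a create fails, exclude the failed volume and try the remaining ones.
public class VolumeRetrySketch {

    // Tries candidate volumes in turn; a volume whose create attempt fails is
    // removed from the candidate list before the next attempt. Returns the
    // volume that succeeded, or throws if every volume has been exhausted.
    static String createOnAnyVolume(List<String> volumes,
                                    Function<String, Boolean> createAttempt) {
        List<String> candidates = new ArrayList<>(volumes);
        while (!candidates.isEmpty()) {
            String chosen = candidates.get(0); // stand-in for the choosing policy
            if (createAttempt.apply(chosen)) {
                return chosen;                 // container created successfully
            }
            candidates.remove(chosen);         // exclude the failed volume, retry
        }
        // The real code surfaces a StorageContainerException here.
        throw new IllegalStateException("No usable volumes left to create container");
    }

    public static void main(String[] args) {
        // "/data0" simulates the full disk from the report; "/data1" succeeds.
        String used = createOnAnyVolume(
            List.of("/data0", "/data1"),
            vol -> !vol.equals("/data0"));
        System.out.println("created on " + used);
    }
}
```

The key design point is that the exclusion happens per request: the failed volume is only dropped from this attempt's candidate set, so a transient free-space miscalculation does not permanently remove the disk.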

What is the link to the Apache JIRA?

https://issues.apache.org/jira/browse/HDDS-3009

How was this patch tested?

A new unit test was added to reproduce the failure and validate the fix.

adoroszlai (Contributor) left a comment:

Thanks @sodonnel for the patch, LGTM.

sodonnel merged commit 65e1c9b into apache:master on Sep 19, 2023 (31 checks passed)
sodonnel (Contributor, Author):

Thanks for the review @adoroszlai

sokui mentioned this pull request on Dec 11, 2024
swamirishi mentioned this pull request on Dec 11, 2024