
The default value of the property client.rm.tableMetaCheckEnable should be changed to false to align with the behavior of Seata Client 2.1.0 and earlier #7042

Closed
linghengqian opened this issue Dec 2, 2024 · 3 comments
Labels: good first issue (Good for newcomers), task: help-wanted (Extra attention is needed), type: bug (Category issues or PRs related to bug)

linghengqian (Member) commented Dec 2, 2024

Why you need it?

Is your feature request related to a problem? Please describe it in detail

  • The default value of the property client.rm.tableMetaCheckEnable is always true, but the property had no effect in Seata Client 2.1.0 and earlier; it only started taking effect once bugfix: fix cache scheduled refresh issue. #6661 was merged for the Seata 2.2.0 milestone.
  • With client.rm.tableMetaCheckEnable still defaulting to true, unit tests now hit surprising behavior: the Seata Client keeps sending table-meta refresh requests even after its TM and RM have been destroyed, which looks like a connection leak.
  • I created a unit test at https://github.com/linghengqian/seata-table-meta-check-enable-test . Note that the default value of client.rm.tableMetaCheckerInterval is 60000L, i.e. 60 seconds, so the unit test deliberately keeps the JVM alive for an extra 2 minutes via Awaitility.await().timeout(Duration.ofMinutes(5L)).pollDelay(Duration.ofMinutes(2L)).until(() -> true);. The test was verified under Ubuntu 22.04.4 LTS with SDKMAN! and Docker CE:
sdk install java 23-open

git clone [email protected]:linghengqian/seata-table-meta-check-enable-test.git
cd ./seata-table-meta-check-enable-test/
sdk use java 23-open
./mvnw -T 1C clean test
  • The core logic is as follows:
@SuppressWarnings("resource")
public class SimpleTest {
    @Test
    void test() {
        assertThat(System.getProperty("service.default.grouplist"), is(nullValue()));
        try (GenericContainer<?> seataContainer = new GenericContainer<>("apache/seata-server:2.2.0")
                .withExposedPorts(7091, 8091)
                .waitingFor(Wait.forHttp("/health").forPort(7091).forStatusCode(200).forResponsePredicate("ok"::equals))
        ) {
            seataContainer.start();
            // Point the Seata Client at the containerized Seata Server.
            System.setProperty("service.default.grouplist", "127.0.0.1:" + seataContainer.getMappedPort(8091));
            TMClient.init("test-first", "default_tx_group");
            RMClient.init("test-first", "default_tx_group");
            HikariConfig config = new HikariConfig();
            config.setDriverClassName("org.testcontainers.jdbc.ContainerDatabaseDriver");
            config.setJdbcUrl("jdbc:tc:postgresql:17.1-bookworm://test/demo_ds_0?TC_INITSCRIPT=init.sql");
            try (HikariDataSource hikariDataSource = new HikariDataSource(config)) {
                DataSourceProxy seataDataSource = new DataSourceProxy(hikariDataSource);
                // Opening a connection through the proxy triggers table-meta caching.
                Awaitility.await().atMost(Duration.ofSeconds(15L)).ignoreExceptions().until(() -> {
                    seataDataSource.getConnection().close();
                    return true;
                });
            }
            // Destroy TM and RM; after this point nothing should keep talking to the server.
            RmNettyRemotingClient.getInstance().destroy();
            TmNettyRemotingClient.getInstance().destroy();
            System.clearProperty("service.default.grouplist");
        }
        // Keep the JVM alive for 2 minutes to observe the 60-second refresh errors.
        Awaitility.await().timeout(Duration.ofMinutes(5L)).pollDelay(Duration.ofMinutes(2L)).until(() -> true);
    }
}
  • The log is as follows.
[ERROR] 2024-12-02 23:09:56.314 [tableMetaRefresh_1_1] o.a.s.r.d.s.s.TableMetaCacheFactory - table refresh error:HikariDataSource HikariDataSource (HikariPool-1) has been closed.
java.sql.SQLException: HikariDataSource HikariDataSource (HikariPool-1) has been closed.
        at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:95)
        at org.apache.seata.rm.datasource.DataSourceProxy.getConnection(DataSourceProxy.java:212)
        at org.apache.seata.rm.datasource.sql.struct.TableMetaCacheFactory$TableMetaRefreshHolder.lambda$new$0(TableMetaCacheFactory.java:129)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:1575)
[ERROR] 2024-12-02 23:10:56.323 [tableMetaRefresh_1_1] o.a.s.r.d.s.s.TableMetaCacheFactory - table refresh error:HikariDataSource HikariDataSource (HikariPool-1) has been closed.
java.sql.SQLException: HikariDataSource HikariDataSource (HikariPool-1) has been closed.
        at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:95)
        at org.apache.seata.rm.datasource.DataSourceProxy.getConnection(DataSourceProxy.java:212)
        at org.apache.seata.rm.datasource.sql.struct.TableMetaCacheFactory$TableMetaRefreshHolder.lambda$new$0(TableMetaCacheFactory.java:129)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.base/java.lang.Thread.run(Thread.java:1575)
  • There is practically no way to stop the Seata Client from printing these error logs every 60 seconds, unless I configure it explicitly:
client {
    rm {
        tableMetaCheckEnable = "false"
    }
}
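For reference, the same override can presumably also be applied through other configuration channels. A sketch, assuming the seata-spring-boot-starter's relaxed property binding (the exact key name should be checked against the starter's documentation):

```yaml
# application.yml (Spring Boot with seata-spring-boot-starter) -- assumed key name
seata:
  client:
    rm:
      table-meta-check-enable: false
```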

How it could be?


  • The default value of the property client.rm.tableMetaCheckEnable should be changed to false to align with the behavior of Seata Client 2.1.0 and earlier.


funky-eyes (Contributor) commented:

I believe that after the corresponding datasource is closed, the associated tableMeta refresh task should also be stopped. We should submit a PR to resolve this issue.

funky-eyes (Contributor) commented:

A simple solution would be to check the exception: if it indicates that the datasource has already been closed, the task can simply exit.
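The simple solution could be sketched as follows. This is a minimal, hypothetical stand-in for the refresh loop body (the real loop lives in TableMetaCacheFactory$TableMetaRefreshHolder); names and the message check are assumptions for illustration, not Seata's actual code.

```java
import java.sql.SQLException;

public class RefreshTaskSketch {
    // Hypothetical stand-in for whatever hands the refresh task a connection.
    interface ConnectionSource {
        void open() throws SQLException;
    }

    /**
     * One iteration of the refresh loop.
     * Returns false when the task should stop because the pool reported itself closed.
     */
    static boolean refreshOnce(ConnectionSource source) {
        try {
            source.open();
            return true; // refreshed successfully; keep the task alive
        } catch (SQLException e) {
            // If the pool says it has been closed, exit instead of retrying forever.
            if (String.valueOf(e.getMessage()).contains("has been closed")) {
                return false;
            }
            return true; // transient error: keep polling
        }
    }

    public static void main(String[] args) {
        ConnectionSource closedPool = () -> {
            throw new SQLException("HikariDataSource (HikariPool-1) has been closed.");
        };
        System.out.println(refreshOnce(closedPool)); // prints false
    }
}
```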

A more complex solution would be to maintain a table-to-datasource association. Closing the datasource actually goes through the close method of the datasource proxy; at that point we can look up the refresh tasks registered for that datasource's tables and cancel them.
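The more complex solution could be sketched as a registry keyed by resource, with the proxy's close method cancelling the matching scheduled task. All names here are hypothetical; this only illustrates the cancellation mechanics, not Seata's actual classes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class RefreshRegistrySketch {
    private static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();
    // resourceId -> its scheduled table-meta refresh task
    private static final Map<String, ScheduledFuture<?>> TASKS = new ConcurrentHashMap<>();

    /** Called when a datasource proxy is created: schedule its refresh task. */
    static void register(String resourceId, Runnable refresh, long intervalMillis) {
        TASKS.computeIfAbsent(resourceId, id -> SCHEDULER.scheduleAtFixedRate(
                refresh, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS));
    }

    /** Called from the datasource proxy's close(): cancel the matching task. */
    static void unregister(String resourceId) {
        ScheduledFuture<?> task = TASKS.remove(resourceId);
        if (task != null) {
            task.cancel(false);
        }
    }

    static int taskCount() {
        return TASKS.size();
    }

    public static void main(String[] args) {
        register("jdbc:demo", () -> System.out.println("refresh"), 60_000L);
        unregister("jdbc:demo");
        System.out.println(taskCount()); // prints 0
        SCHEDULER.shutdown();
    }
}
```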

funky-eyes added the labels task: help-wanted, type: bug, and good first issue on Dec 3, 2024.
LegGasai (Contributor) commented Dec 3, 2024

I'd like to claim this.
