feat: automatically detect database dialect #1677
Conversation
```diff
@@ -324,7 +329,7 @@ LeakedConnectionException getLeakedException() {

   @Override
   public Dialect getDialect() {
-    return options.getDialect();
+    return dbClient.getDialect();
```
I remember reading that `getDialect` would be supported through options. Just putting a note here in case my understanding is not correct.
The options will not support a `getDialect` method, but there is a `SessionPoolOptions#setAutoDetectDialect(boolean)`. Setting that option to true will make sure that the dialect detection query is executed automatically at startup in the background. That will normally make the `DatabaseClient#getDialect` method non-blocking, unless it is called directly after creating a database client and the execution of the dialect query is still going on in the background. So to summarize:

- `getDialect()` can always be called.
- The result of `getDialect` will always be cached.
- The first call to `getDialect` can be blocking, unless `SessionPoolOptions#setAutoDetectDialect(true)` has been called.
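The detect-once, cache-forever behavior described above can be sketched in isolation. This is a minimal, hypothetical stand-in, not the library's implementation: `CachedDialect` and the `Supplier`-based detection query are made up for illustration, and the `Dialect` enum here merely mimics `com.google.cloud.spanner.Dialect`.

```java
import java.util.function.Supplier;

/** Hypothetical dialect enum; the real library has com.google.cloud.spanner.Dialect. */
enum Dialect {
  GOOGLE_STANDARD_SQL,
  POSTGRESQL
}

/** Minimal sketch of a detect-once, cache-forever dialect lookup. */
final class CachedDialect {
  private final Supplier<Dialect> detectionQuery; // stands in for the real detection query
  private volatile Dialect cached;

  CachedDialect(Supplier<Dialect> detectionQuery) {
    this.detectionQuery = detectionQuery;
  }

  /** The first call may block while the detection query runs; later calls return the cached value. */
  Dialect getDialect() {
    Dialect result = cached;
    if (result == null) {
      synchronized (this) {
        if (cached == null) {
          cached = detectionQuery.get(); // executed at most once
        }
        result = cached;
      }
    }
    return result;
  }
}
```

The double-checked locking on a `volatile` field is one common way to get the "always cached, at most one blocking call" property the comment describes.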
IIRC from earlier planning, there were some integration tests that may have additionally needed changes to support automatic dialect detection.
I have left two minor comments; otherwise, looks good to me.
@olavloite Have added a
We have some integration tests that are parameterized for all supported dialects. See for example ITDmlTest. These tests still need the dialect as a parameter, as it needs to be specified specifically when creating a test database, and we also need the parameter to determine which of the two databases to use for the various tests, so we cannot use the automatic dialect detection feature in those tests. So those tests still work the same as before adding automatic dialect detection.
@thiagotnunes Do you want to take a quick look before merging?
Do we need to change the integration tests that were parameterizing the dialect?
```diff
@@ -106,6 +106,17 @@ public static void startStaticServer() throws Exception {
         "This test is only supported on JDK11 and lower",
         JavaVersionUtil.getJavaMajorVersion() < 12);

     // Use a little reflection to set the test tracer.
```
Why is this change included in this PR? Was it causing flakiness / test failures?
It caused a test failure and needed to be moved up because:

- If automatic dialect detection on startup is enabled, the session pool will execute the detection query in the background.
- That background query could be executed before the `failOnOverkillTraceComponent` was set, which would then cause a NullPointerException, as it would trigger the trace feature.
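The race described above is about ordering: the static test component must be installed via reflection before any background work can dereference it. A small, hypothetical sketch of the reflection pattern (the `TraceComponent` class and `tracer` field are invented for illustration; only the "set a static test hook early via reflection" pattern mirrors the change):

```java
import java.lang.reflect.Field;

/** Hypothetical stand-in for a component that background work dereferences. */
class TraceComponent {
  static Object tracer; // must be set before any background query triggers tracing
}

final class ReflectionSetup {
  /** Sets a static field by name, mirroring how a test tracer can be injected early. */
  static void setStaticField(Class<?> target, String fieldName, Object value)
      throws ReflectiveOperationException {
    Field field = target.getDeclaredField(fieldName);
    field.setAccessible(true);
    field.set(null, value); // null receiver because the field is static
  }
}
```

Doing this in static test setup, before any client (and therefore any background detection query) is created, removes the window in which the field could still be null.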
```diff
@@ -577,62 +576,6 @@ public void testSetCredentialsAndEncodedCredentials() throws Exception {
         "Cannot specify both a credentials URL and encoded credentials. Only set one of the properties.");
   }

   @Test
```
Is there an option to set automatic dialect detection to `false`, or do you think we will not need that?
`ConnectionOptions` does not have that option, and I also don't think we need it, as a connection must always know the dialect. So I think that if we were to add that option, then we would probably also need to add a `setDialect` method (or the `setAutomaticDialectDetection(false)` documentation would have to state something like 'this will make the connection always assume that the dialect is Google Standard SQL'). As this is a class and not an interface, we can add those methods later, without introducing a breaking change, if we find that it would be beneficial.
No, those integration tests still use the dialect parameter to determine which type of database to create.
The dialect of the database that a `DatabaseClient` is connected to can be automatically detected:

- `DatabaseClient#getDialect()` has been added. This method always returns the dialect of the underlying database. It does so by executing a query that detects the dialect. This query can also be executed and cached automatically in the background during startup (see below).
- `SessionPoolOptions#setAutoDetectDialect(true)` will cause the dialect detection query to be executed in the background automatically when a new client is created. This is disabled by default, except when a Connection API connection (or anything that depends on it, such as JDBC) is opened. The reason for this default is that a normal Spanner client does not usually need to know the dialect of the underlying database, while the Connection API does. This reduces the number of times the detection query is executed in production use.
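The difference between eager (background) and lazy detection can be modeled with a small self-contained sketch. Everything here is hypothetical illustration, not the real client: `AutoDetectingClient` and `DbDialect` stand in for the library's `DatabaseClient` and `Dialect`, and the boolean flag plays the role of `SessionPoolOptions#setAutoDetectDialect`.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

/** Hypothetical dialect enum; the real library has com.google.cloud.spanner.Dialect. */
enum DbDialect {
  GOOGLE_STANDARD_SQL,
  POSTGRESQL
}

/** Sketch of eager (background) vs lazy dialect detection. */
final class AutoDetectingClient {
  private final Supplier<DbDialect> detectionQuery;
  private volatile CompletableFuture<DbDialect> detected;

  AutoDetectingClient(Supplier<DbDialect> detectionQuery, boolean autoDetect) {
    this.detectionQuery = detectionQuery;
    if (autoDetect) {
      // Eager mode: start the detection query in the background at startup,
      // so a later getDialect() call usually does not block.
      detected = CompletableFuture.supplyAsync(detectionQuery);
    }
  }

  DbDialect getDialect() {
    CompletableFuture<DbDialect> f = detected;
    if (f == null) {
      synchronized (this) {
        if (detected == null) {
          // Lazy mode: the first call blocks while the query runs.
          detected = CompletableFuture.completedFuture(detectionQuery.get());
        }
        f = detected;
      }
    }
    return f.join(); // cached after the first completion
  }
}
```

In the real library, the eager path is enabled per the PR via `SessionPoolOptions#setAutoDetectDialect(true)`; either way, callers simply invoke `getDialect()` and get the cached result once detection has completed.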