Excessive dependencies of the Java libraries are causing cascading failures when attempting to upgrade #1921
Comments
Thanks for the report, @turneand. Let me raise this with our Cloud Java team to see what we can do here. |
@meltsufin Do you know of any related issues covering dependency sizes? |
I think it would be helpful to get into the specifics. Yes, we hope that users can adopt the Libraries BOM, which ensures dependency compatibility. Which dependencies are preventing you from adopting the BOM? |
So there are a few different examples here, but I'll try to cover them...
|
@turneand Thanks for the explanation. It seems like the root issue is that the Cloud SQL connectors use GAX, but are not in the Libraries BOM. I believe you can just exclude GAX from Cloud SQL dependencies because it seems to be only there for GraalVM support. cc: @suztomo @mpeddada1 |
The GAX dependency scope to |
So that is going to help with one of the issues, but the same underlying issue around dependencies still remains. For example, we've now found some show-stopper bugs for us in the pubsub libraries, which means I think we are going to have to downgrade them. However, due to the complex dependencies between these cloudsql libraries and the pubsub libraries, we are going to have a bit of a problem finding something that is compatible. This is also problematic because even if we used the libraries-bom we'd have issues, as I still cannot find anything about compatibility between these cloudsql drivers and the libraries-bom. As far as I understand it, the only option we've really got now is to go back to using the cloud-sql-proxy? But even that seems overkill for a service running in GCE/GKE. |
I agree -- it seems unnecessary to have to use the Proxy when the Java Connector would otherwise work just as well. Is IAM authentication the primary motivation for using the Java Connector? I could show you how to do IAM authentication with a plain HikariCP data source if there's interest. |
@enocom - would definitely be interested in a more "native" option for IAM authentication for when we don't need the full capabilities of the proxy options. All the examples I found regarding IAM explicitly stated to use these connectors, but a lighter option would be good. |
Here's how you get IAM authentication with token refresh without the Connectors. First, subclass `HikariDataSource`:

```java
package dev.enocom.dbaccess;

import com.google.auth.oauth2.AccessToken;
import com.google.auth.oauth2.GoogleCredentials;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.io.IOException;

public class CloudSqlAutoIamAuthnDataSource extends HikariDataSource {

  public CloudSqlAutoIamAuthnDataSource(HikariConfig configuration) {
    super(configuration);
  }

  // HikariCP calls getPassword() whenever it opens a new physical connection,
  // so returning a freshly refreshed OAuth token here gives every connection
  // a valid IAM login.
  @Override
  public String getPassword() {
    GoogleCredentials credentials;
    try {
      credentials = GoogleCredentials.getApplicationDefault();
    } catch (IOException err) {
      throw new RuntimeException(
          "Unable to obtain credentials to communicate with the Cloud SQL API", err);
    }
    // Scope the token to ensure it's scoped to logins only.
    GoogleCredentials scoped = credentials.createScoped(
        "https://www.googleapis.com/auth/sqlservice.login");
    try {
      scoped.refresh();
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
    AccessToken accessToken = scoped.getAccessToken();
    return accessToken.getTokenValue();
  }
}
```

Then use the data source like this:

```java
package dev.enocom.dbaccess;

import com.zaxxer.hikari.HikariConfig;
import javax.sql.DataSource;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Application {

  public static void main(String[] args) {
    SpringApplication.run(Application.class, args);
  }

  @Bean
  DataSource getDataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("jdbc:postgresql://10.0.0.2/postgres");
    config.setUsername("[email protected]");
    config.addDataSourceProperty("ssl", "true");
    // You can enforce this on the server too (without needing client certs)
    config.addDataSourceProperty("sslmode", "require");
    return new CloudSqlAutoIamAuthnDataSource(config);
  }
}
```
|
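A note on the sketch above: `getPassword()` re-reads application default credentials and mints a new token for every physical connection HikariCP opens. A lighter variant, offered here only as a sketch (the class name `CachingIamAuthnDataSource` is hypothetical, not from the thread), caches the scoped credentials and uses `refreshIfExpired()` from `com.google.auth.oauth2.OAuth2Credentials` so a network round trip only happens when the token is stale:

```java
package dev.enocom.dbaccess;

import com.google.auth.oauth2.GoogleCredentials;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.io.IOException;

public class CachingIamAuthnDataSource extends HikariDataSource {

  // Lazily initialized: HikariDataSource's constructor may open connections
  // (and therefore call getPassword()) before any field assignment runs.
  private volatile GoogleCredentials scoped;

  public CachingIamAuthnDataSource(HikariConfig configuration) {
    super(configuration);
  }

  @Override
  public String getPassword() {
    try {
      GoogleCredentials creds = scoped;
      if (creds == null) {
        synchronized (this) {
          if (scoped == null) {
            scoped = GoogleCredentials.getApplicationDefault()
                .createScoped("https://www.googleapis.com/auth/sqlservice.login");
          }
          creds = scoped;
        }
      }
      // Only refreshes when the cached token is expired or about to expire.
      creds.refreshIfExpired();
      return creds.getAccessToken().getTokenValue();
    } catch (IOException e) {
      throw new RuntimeException("Unable to refresh Cloud SQL IAM token", e);
    }
  }
}
```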
Thanks @enocom, is there a recommended implementation for r2dbc? |
The R2DBC version would look like this:

```java
import com.google.auth.oauth2.AccessToken;
import com.google.auth.oauth2.GoogleCredentials;
import io.r2dbc.pool.ConnectionPool;
import io.r2dbc.pool.ConnectionPoolConfiguration;
import io.r2dbc.spi.Connection;
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import io.r2dbc.spi.ConnectionFactoryMetadata;
import io.r2dbc.spi.ConnectionFactoryOptions;
import java.io.IOException;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;

ConnectionFactoryOptions options =
    ConnectionFactoryOptions.parse("r2dbc:postgresql://host/database");
ConnectionFactory connectionFactoryStub = ConnectionFactories.get(options);

// Defer the credential lookup so a fresh token is fetched per subscription,
// i.e. each time the pool asks for a new connection.
Mono<? extends Connection> connectionPublisher = Mono.defer(() -> {
  GoogleCredentials credentials;
  try {
    credentials = GoogleCredentials.getApplicationDefault();
  } catch (IOException err) {
    throw new RuntimeException(
        "Unable to obtain credentials to communicate with the Cloud SQL API", err);
  }
  // Scope the token to ensure it's scoped to logins only.
  GoogleCredentials scoped = credentials.createScoped(
      "https://www.googleapis.com/auth/sqlservice.login");
  try {
    scoped.refresh();
  } catch (IOException e) {
    throw new RuntimeException(e);
  }
  AccessToken accessToken = scoped.getAccessToken();
  ConnectionFactoryOptions optionsToUse = options.mutate()
      // provide a new password each time we see a connect request
      .option(ConnectionFactoryOptions.PASSWORD, accessToken.getTokenValue())
      .build();
  return Mono.from(ConnectionFactories.get(optionsToUse).create());
});

ConnectionFactory myCustomConnectionFactory = new ConnectionFactory() {
  @Override
  public Publisher<? extends Connection> create() {
    return connectionPublisher;
  }

  @Override
  public ConnectionFactoryMetadata getMetadata() {
    return connectionFactoryStub.getMetadata();
  }
};

ConnectionPoolConfiguration poolConfiguration =
    ConnectionPoolConfiguration.builder()
        .connectionFactory(myCustomConnectionFactory)
        .build();
ConnectionPool pool = new ConnectionPool(poolConfiguration);
```
|
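To show how the pieces fit together, here is a minimal usage sketch (my addition; it assumes only the reactor and r2dbc-pool APIs already used above). `Mono.usingWhen` borrows a connection from the pool, runs a query, and guarantees the connection is released:

```java
// Borrow a connection from the pool, run a trivial query, then release it.
Mono<String> greeting = Mono.usingWhen(
    pool.create(),
    conn -> Mono.from(conn.createStatement("SELECT 'hello'").execute())
        .flatMap(result ->
            Mono.from(result.map((row, meta) -> row.get(0, String.class)))),
    Connection::close);

greeting.subscribe(System.out::println);
```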
And as for providing these as a lightweight library, yes, we've been thinking about that but haven't made a decision. cc @jackwotherspoon as FYI |
Unfortunately, we cannot commit to this effort right now. We will keep this in mind for future work. |
I am reopening this as a feature request. We will continue to consider this as we plan our future work. |
Bug Description
Not sure if this is the best place to raise this issue, as it's a general concern we keep facing with all of the Google-provided Java APIs. But we are specifically trying to pick up the fix in version 1.17.0 of this library for r2dbc health checks when using REMOTE validation, and are currently unable to.
Our web applications use dependency management frameworks (such as spring-boot or micronaut) that expose netty servers, and are complex applications in their own right that need version management, upgrades, etc. As a result, we find ourselves having to manually tweak all of the google client libraries every time an upgrade comes through, and hope there are no breaking changes. For the r2dbc issue we have gax and netty incompatibilities with the versions provided by other google libraries (pubsub, otel, etc.). We've also got places where the guava version has been overridden from the "jre" variant to the "android" variant, because cloud-sql-connector-r2dbc-postgres itself requires the android variant while its own transitive dependencies require the jre variant (this causes compatibility issues with the google otel libraries, which require the jre variant and crash at runtime if only the android variant is installed).
I understand the intention is that we should use the bom, but this is incompatible with other management frameworks, as it overrides "standard" libraries (such as netty). We also cannot really use the uber-jars: if we did that for each library, our releases would end up being several hundred MB (even with just a single google client dependency, a trivial app is already around the 100MB mark).
So the ask is: is it possible to reduce the dependencies of these libraries, or to have more parts defined as optional? For example, when deploying a simple web application in GKE using the r2dbc libraries, we hardly use any of them.
Example code (or command)
No response
Stacktrace
No response
Steps to reproduce?
Create a new project with just the "cloud-sql-connector-r2dbc-postgres" dependency added, and look at the dependency tree.
Environment
all
Additional Details
No response