Kafka Connect: Sink connector eventually gets stuck in retry loop #120
* Handle a null schema in the sink connector.
* Fix concurrent access on iteration of a synchronized set.
* Reduce the ackIds iteration synchronization to the shortest period needed.
* Clear the list of ackIds as soon as acks are sent. This makes acks best effort, but also prevents us from forgetting to send some acks if a poll happens while there is an outstanding ack request.
* Better error message when verifySubscription fails.
* When an exception happens on pull, just return an empty list to poll.
* Ensure the partition number is non-negative in the CPS source connector.
* Shade out the guava library so the connector can work with a newer version of Kafka that depends on a newer version of guava.
* Update the versions of the gRPC libraries used by the Kafka connector. This should hopefully fix issue #120.
I have merged a change that I hope will fix the issue. The connector was using some fairly old gRPC versions, and I'm fairly certain this issue was fixed at some point in those libraries. Can you please try with a newly built version of the library and see if it works for you? Thanks!
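A minimal sketch of building the connector from source, assuming the standard Maven layout of this repository; the exact commands and artifact name are an assumption, not instructions from this thread:

```sh
# Assumed build steps (standard Maven project layout):
git clone https://github.com/GoogleCloudPlatform/pubsub.git
cd pubsub/kafka-connector
mvn clean package
# Then place the resulting jar (e.g. target/*-with-dependencies.jar; the name
# may differ) on the Kafka Connect plugin path and restart the workers.
```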
* Fixing POM file to get rid of maven warnings.
* Changed formatting checks to check for the Google formatting rules and adapted the code to these rules. See https://google.github.io/styleguide/javaguide.html
* Changed to an older version of checkstyle to be compatible with JDK 1.7, then reverted that change (this reverts commit 7376b6a).
* Check Google code style only at the end and only using JDK 1.8.
* For the actual tests, stop at the integration-test phase rather than also running verify. Run verify once, separately, using JDK 8 only.
* Reverting some changes in jms-light/pom.xml that might have somehow influenced the global behavior of maven.
* Added a timeout while waiting for the fake server to terminate.
* Put tests back, but disable them.
* Add Go subscriber loadtest (#74): add the go subscriber and automatically compile the go loadtest binary.
* Add PubSubTemporaryTopic class.
* Improve error messages, enable Go message tracking.
* README updates.
* Fix null key and an exception about a read-only ByteBuffer that happens when using a JsonConverter in a source connector (#78).
* Assorted small cleanups (nits, added files, access modifiers).
* Moved mockito and junit to the test scope. Removed the dependency on the kafka client, which is provided by connect-api, and changed the scope of connect-api to provided. Updated the kafka version to 0.10.2.0 (#84).
* Initial version of PubSubMessageConsumer.
* Added -Xlint:all to all compile targets. With this configuration, warnings are reported for an unchecked cast in PubSubTextMessage, import ordering in PubSubTopicSession, and line length in PubSubTopicConnectionFactory.
* Resolved the unchecked cast.
* Cleaned up code style, especially for test classes. This addresses JMS issue #92.
* Handle a null schema in the sink connector (#94).
* Adding ${os.detected.classifier} to the POM (#96) so the right netty-tcnative-boring-ssl-static library is selected. This ensures the connector can work on any OS (e.g. OSX).
* Fix concurrent access on iteration of a synchronized set (#98).
* Make acks best effort (#99): clear the list of ackIds as soon as acks are sent. This also prevents us from forgetting to send some acks if a poll happens while there is an outstanding ack request.
* Better error message when verifySubscription fails (#100).
* When an exception happens on pull, just return an empty list to poll (#102).
* Don't depend on gRPC internals (#103): don't reference gRPC internal classes.
* Create README.md.
* Ensure that partition numbers are non-negative in the CPS source connector (#105).
* Update client library dependencies.
* Update framework with the Java Beta client (#108). Updates the framework with the Java Beta client and removes the experimental client now that the changes have been merged into the google-cloud-java repository. Remove the previous "spi" from import paths; minor bugfixes; removed VTK since it no longer exists.
* Remove unused startup scripts (#110).
* Remove alpha client libraries and redirect to google-cloud-java (#109); removed the client from .travis.yml; readme and lint fixes.
* Added Ruby publisher and subscriber.
* Fixes for the CPS subscriber, updates with the new Python publisher and subscriber; fix firewall.
* Update run.py: fix for python3.
* Shade the guava library to avoid a NoMethodFoundError (#117), so the connector can work with a newer version of Kafka that depends on a newer version of guava.
* Fix some broken markdown in the README (#119): some newlines snuck their way into these links.
* Update run.py: uncomment Go.
* java: don't sync on shutdown (#125). Shutting down needs to wait for messages to be processed, and processing the messages needs to lock the Task object; so, if the thread calling shutdown locks the Task object, we'll deadlock (see the sketch after this list).
* Increase parallel puller count (#124). The new default number of parallel pullers is NumCPU; to keep up with publishers, this commit increases the number to 5*NumCPU, the same as the old default. Bump version.
* Move away from very old versions of gRPC libraries (#130); edit travis script.
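The deadlock described in the "don't sync on shutdown" item is a general pattern worth spelling out. The sketch below is purely illustrative and not the project's actual code; the class and method names are assumptions.

```java
// Illustrative deadlock sketch: if shutdown() is synchronized, the shutdown
// thread holds the Task lock while waiting for workers that themselves need
// the Task lock to finish processing messages.
public class Task {
  private boolean running = true;

  // Worker threads call this for every message; it needs the Task lock.
  public synchronized void recordMessage(String messageId) {
    if (running) {
      // ... update counters, latencies, etc.
    }
  }

  // BAD: a synchronized shutdown holds the lock while waiting for workers,
  // so recordMessage() can never complete and the wait never returns.
  public synchronized void shutdownDeadlockProne() throws InterruptedException {
    running = false;
    // awaitWorkersDrained();  // hypothetical helper; would block forever here
  }

  // Better: flip the flag under the lock, then wait without holding it.
  public void shutdown() throws InterruptedException {
    synchronized (this) {
      running = false;
    }
    // awaitWorkersDrained();  // wait outside the lock so workers can finish
  }
}
```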
* Remove bad oracle jdk7 environment from Travis runs.
* Minor formatting fixes.
* Add a new config property to the sink connector, metadata.publish. When this property is true, the following attributes are added to each message published to Cloud Pub/Sub via the sink connector (see the example configuration after this list): kafka.topic, the Kafka topic on which the message was originally published; kafka.partition, the Kafka partition to which the message was originally published; kafka.offset, the offset of the message in the Kafka topic/partition; and kafka.timestamp, the timestamp that Kafka associated with the message.
* Calculate the message size at the end rather than along the way.
* Remove the temporary variables for Kafka attributes.
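For reference, metadata.publish would be set in the sink connector's configuration. A minimal sketch follows; only metadata.publish itself is taken from the commit description above, and the connector class, connector name, and topic are illustrative assumptions.

```properties
# Hypothetical sink connector configuration enabling Kafka metadata attributes.
# Only metadata.publish comes from this thread; the other values are placeholders.
name=cps-sink-connector
connector.class=com.google.pubsub.kafka.sink.CloudPubSubSinkConnector
tasks.max=1
topics=my-kafka-topic
metadata.publish=true
```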
I am running the Pub/Sub connector after building the latest kafka-connector and am hitting the same issue described in this thread. I opened googleapis/google-cloud-java#3071. Is anyone else facing the same issue? Are any workarounds available?
I have the same issue.
* Periodically recreate gRPC publishers and subscribers in order to avoid GOAWAY errors.
* Formatting/syntactic fixes.
* Switch the sink connector to the client library.
* Remove commented-out code.
* Fix Java 7 compatibility.
* Add parameters for timeouts for publishes to Cloud Pub/Sub.
* Fix handling of optional struct fields and the error message for missing fields, which would show up as a message about unsupported nested types. Add a test case for an optional field where the value is present.
* Set the maximum inbound message size on the source connector to ensure it is possible to receive a message up to the largest Pub/Sub message size (10MB).
* Added instructions to the Kafka Connector about downloading from a release.
* Clear the list of futures whenever we flush; if we do not do this, then when an error occurs, all subsequent flush attempts will fail (see the sketch after this list).
* Update Google dependencies to newer versions; back to the older netty library, since the new one results in an inability to connect.
* Remove jms-light code.
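The "clear the list of futures on flush" change is worth illustrating. Below is a minimal sketch of that pattern for a Kafka Connect sink task, assuming the task tracks its outstanding Cloud Pub/Sub publish futures in a list; the class and field names are illustrative, not the connector's actual code.

```java
import com.google.api.core.ApiFuture;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkTask;

// Illustrative sketch only: shows why outstanding publish futures must be
// cleared on every flush, even when one of them has failed.
public abstract class FlushSketchSinkTask extends SinkTask {
  private final List<ApiFuture<String>> outstandingFutures = new ArrayList<>();

  @Override
  public void flush(Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
    try {
      for (ApiFuture<String> future : outstandingFutures) {
        future.get(); // surface any publish failure so Connect retries the batch
      }
    } catch (Exception e) {
      throw new RuntimeException(e);
    } finally {
      // Clear the list whether or not the waits succeeded. If a failed future
      // stayed in the list, every later flush would keep re-throwing the same
      // error and the task could never make progress again.
      outstandingFutures.clear();
    }
  }
}
```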
I'm running this pubsub connector from master as of opening this ticket, within a docker container based on confluentinc/cp-kafka-connect:3.3.0-1, which features Kafka 0.11.0.

Eventually, while piping messages from Kafka to Pub/Sub, the task runs into an HTTP/2 error and is never able to recover, repeating the error message below. This results in an increasing backlog and a constantly repeated batch of messages.
The only thing that comes to mind is some sort of network address cache causing the task to try to connect to a Pub/Sub node that has been shut down. I found that the Java runtime included in the Confluent container does not define a value for Java's networkaddress.cache.ttl setting, so it defaults to caching lookups forever. I then changed the value to 30 seconds (by forking the container to change the file itself) and restarted my workers, but the problem came back after a few hours.
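For reference, the workaround described above amounts to overriding a JVM security property. A minimal sketch, assuming the property is set in the JRE's java.security file (the exact path depends on the base image):

```properties
# Fragment of the JRE's java.security file (path varies by image, e.g.
# $JAVA_HOME/jre/lib/security/java.security): cache successful DNS
# lookups for 30 seconds instead of indefinitely.
networkaddress.cache.ttl=30
```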