@rmetzger
Created May 19, 2015 14:11
Kafka Producer fails (no leader?)
Travis build: https://travis-ci.org/StephanEwen/incubator-flink/jobs/63147661
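The "(no leader?)" guess in the title is that the producer starts writing to the freshly created topic before the controller has finished leader election for all partitions. Purely as an illustration of that race (not the test's actual code), here is a minimal wait-for-leader guard, assuming the kafka-clients 0.8.2 producer API that is on the classpath above; the class name, timeout, and port are placeholders taken loosely from this log.

// Illustrative sketch only: block until every partition of a newly created topic
// has an elected leader before producing. Assumes kafka-clients 0.8.2.
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

public class WaitForLeaderSketch {

    static void waitForLeaders(KafkaProducer<byte[], byte[]> producer,
                               String topic, long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            List<PartitionInfo> partitions = producer.partitionsFor(topic);
            boolean allHaveLeader = !partitions.isEmpty();
            for (PartitionInfo p : partitions) {
                if (p.leader() == null) {       // no leader elected yet for this partition
                    allHaveLeader = false;
                }
            }
            if (allHaveLeader) {
                return;
            }
            Thread.sleep(100);                  // retry until the controller finishes election
        }
        throw new RuntimeException("No leader for all partitions of " + topic
                + " within " + timeoutMs + " ms");
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:41293");  // broker 0 port from the log above
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");

        KafkaProducer<byte[], byte[]> producer = new KafkaProducer<byte[], byte[]>(props);
        waitForLeaders(producer, "testOffsetHacking", 30000);
        producer.send(new ProducerRecord<byte[], byte[]>("testOffsetHacking", "0".getBytes()));
        producer.close();
    }
}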
10:38:15,603 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Starting KafkaITCase.prepare()
10:38:15,611 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Starting Zookeeper
10:38:15,816 INFO org.apache.zookeeper.server.ZooKeeperServerMain - Starting server
10:38:15,828 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=testing-worker-linux-docker-e8c0cfc7-5418-linux-2.prod.travis-ci.org
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=1.7.0_76
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=Oracle Corporation
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=/usr/lib/jvm/java-7-oracle/jre
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.class.path=/home/travis/build/StephanEwen/incubator-flink/flink-staging/flink-streaming/flink-streaming-connectors/target/test-classes:/home/travis/build/StephanEwen/incubator-flink/flink-staging/flink-streaming/flink-streaming-connectors/target/classes:/home/travis/build/StephanEwen/incubator-flink/flink-staging/flink-streaming/flink-streaming-core/target/flink-streaming-core-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/org/apache/commons/commons-math/2.2/commons-math-2.2.jar:/home/travis/.m2/repository/org/apache/kafka/kafka_2.11/0.8.2.0/kafka_2.11-0.8.2.0.jar:/home/travis/.m2/repository/org/apache/kafka/kafka-clients/0.8.2.0/kafka-clients-0.8.2.0.jar:/home/travis/.m2/repository/net/jpountz/lz4/lz4/1.2.0/lz4-1.2.0.jar:/home/travis/.m2/repository/org/scala-lang/modules/scala-xml_2.11/1.0.2/scala-xml_2.11-1.0.2.jar:/home/travis/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/travis/.m2/repository/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.2/scala-parser-combinators_2.11-1.0.2.jar:/home/travis/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/travis/.m2/repository/org/scala-lang/scala-library/2.11.4/scala-library-2.11.4.jar:/home/travis/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/home/travis/.m2/repository/jline/jline/0.9.94/jline-0.9.94.jar:/home/travis/.m2/repository/org/apache/curator/curator-test/2.7.1/curator-test-2.7.1.jar:/home/travis/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/travis/.m2/repository/com/google/guava/guava/16.0.1/guava-16.0.1.jar:/home/travis/.m2/repository/com/rabbitmq/amqp-client/3.3.1/amqp-client-3.3.1.jar:/home/travis/.m2/repository/org/apache/flume/flume-ng-core/1.5.0/flume-ng-core-1.5.0.jar:/home/travis/.m2/repository/org/apache/flume/flume-ng-sdk/1.5.0/flume-ng-sdk-1.5.0.jar:/home/travis/.m2/repository/org/apache/flume/flume-ng-configuration/1.5.0/flume-ng-configuration-1.5.0.jar:/home/travis/.m2/repository/org/apache/avro/avro-ipc/1.7.6/avro-ipc-1.7.6.jar:/home/travis/.m2/repository/io/netty/netty/3.5.12.Final/netty-3.5.12.Final.jar:/home/travis/.m2/repository/joda-time/joda-time/2.5/joda-time-2.5.jar:/home/travis/.m2/repository/org/apache/mina/mina-core/2.0.4/mina-core-2.0.4.jar:/home/travis/.m2/repository/com/twitter/hbc-core/2.2.0/hbc-core-2.2.0.jar:/home/travis/.m2/repository/org/apache/httpcomponents/httpclient/4.2.5/httpclient-4.2.5.jar:/home/travis/.m2/repository/org/apache/httpcomponents/httpcore/4.2.4/httpcore-4.2.4.jar:/home/travis/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/travis/.m2/repository/commons-codec/commons-codec/1.6/commons-codec-1.6.jar:/home/travis/.m2/repository/com/twitter/joauth/6.0.2/joauth-6.0.2.jar:/home/travis/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/home/travis/.m2/repository/org/apache/sling/org.apache.sling.commons.json/2.0.6/org.apache.sling.commons.json-2.0.6.jar:/home/travis/build/StephanEwen/incubator-flink/flink-core/target/flink-core-0.9-SNAPSHOT.jar:/home/travis/build/StephanEwen/incubator-flink/flink-shaded-hadoop/flink-shaded-hadoop1/target/flink-shaded-hadoop1-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/travis/.m2/repository/com/esotericsoftware/kryo/kryo/2.24.0/kryo-2.24.0.jar:/home/travis/.m2/repository/com/esotericsoftware/minlog/minlog/1.2/minlog-1.2.jar:
/home/travis/.m2/repository/org/objenesis/objenesis/2.1/objenesis-2.1.jar:/home/travis/.m2/repository/com/twitter/chill_2.11/0.5.2/chill_2.11-0.5.2.jar:/home/travis/.m2/repository/com/twitter/chill-java/0.5.2/chill-java-0.5.2.jar:/home/travis/build/StephanEwen/incubator-flink/flink-runtime/target/flink-runtime-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/travis/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-server/8.0.0.M1/jetty-server-8.0.0.M1.jar:/home/travis/.m2/repository/org/mortbay/jetty/servlet-api/3.0.20100224/servlet-api-3.0.20100224.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-continuation/8.0.0.M1/jetty-continuation-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-http/8.0.0.M1/jetty-http-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-io/8.0.0.M1/jetty-io-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-util/8.0.0.M1/jetty-util-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-security/8.0.0.M1/jetty-security-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-servlet/8.0.0.M1/jetty-servlet-8.0.0.M1.jar:/home/travis/.m2/repository/com/amazonaws/aws-java-sdk/1.8.1/aws-java-sdk-1.8.1.jar:/home/travis/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.1.1/jackson-core-2.1.1.jar:/home/travis/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.1.1/jackson-databind-2.1.1.jar:/home/travis/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.1.1/jackson-annotations-2.1.1.jar:/home/travis/.m2/repository/io/netty/netty-all/4.0.27.Final/netty-all-4.0.27.Final.jar:/home/travis/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/travis/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/travis/.m2/repository/com/typesafe/akka/akka-actor_2.11/2.3.7/akka-actor_2.11-2.3.7.jar:/home/travis/.m2/repository/com/typesafe/config/1.2.1/config-1.2.1.jar:/home/travis/.m2/repository/com/typesafe/akka/akka-remote_2.11/2.3.7/akka-remote_2.11-2.3.7.jar:/home/travis/.m2/repository/org/uncommons/maths/uncommons-maths/1.2.2a/uncommons-maths-1.2.2a.jar:/home/travis/.m2/repository/com/typesafe/akka/akka-slf4j_2.11/2.3.7/akka-slf4j_2.11-2.3.7.jar:/home/travis/.m2/repository/org/clapper/grizzled-slf4j_2.11/1.0.2/grizzled-slf4j_2.11-1.0.2.jar:/home/travis/.m2/repository/com/github/scopt/scopt_2.11/3.2.0/scopt_2.11-3.2.0.jar:/home/travis/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.0/metrics-core-3.1.0.jar:/home/travis/.m2/repository/io/dropwizard/metrics/metrics-jvm/3.1.0/metrics-jvm-3.1.0.jar:/home/travis/.m2/repository/io/dropwizard/metrics/metrics-json/3.1.0/metrics-json-3.1.0.jar:/home/travis/build/StephanEwen/incubator-flink/flink-clients/target/flink-clients-0.9-SNAPSHOT.jar:/home/travis/build/StephanEwen/incubator-flink/flink-optimizer/target/flink-optimizer-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/commons-fileupload/commons-fileupload/1.3.1/commons-fileupload-1.3.1.jar:/home/travis/build/StephanEwen/incubator-flink/flink-java/target/flink-java-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/org/apache/avro/avro/1.7.6/avro-1.7.6.jar:/home/travis/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/home/travis/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar:/home/travis/.m2/repository/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/travis/.m2/repo
sitory/org/xerial/snappy/snappy-java/1.0.5/snappy-java-1.0.5.jar:/home/travis/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/travis/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/travis/.m2/repository/com/twitter/chill-avro_2.11/0.5.2/chill-avro_2.11-0.5.2.jar:/home/travis/.m2/repository/com/twitter/chill-bijection_2.11/0.5.2/chill-bijection_2.11-0.5.2.jar:/home/travis/.m2/repository/com/twitter/bijection-core_2.11/0.7.2/bijection-core_2.11-0.7.2.jar:/home/travis/.m2/repository/com/twitter/bijection-avro_2.11/0.7.2/bijection-avro_2.11-0.7.2.jar:/home/travis/.m2/repository/com/twitter/chill-protobuf/0.5.2/chill-protobuf-0.5.2.jar:/home/travis/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/travis/.m2/repository/com/twitter/chill-thrift/0.5.2/chill-thrift-0.5.2.jar:/home/travis/.m2/repository/org/apache/thrift/libthrift/0.6.1/libthrift-0.6.1.jar:/home/travis/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/travis/.m2/repository/de/javakaffee/kryo-serializers/0.27/kryo-serializers-0.27.jar:/home/travis/.m2/repository/org/apache/commons/commons-lang3/3.3.2/commons-lang3-3.3.2.jar:/home/travis/.m2/repository/org/slf4j/slf4j-api/1.7.7/slf4j-api-1.7.7.jar:/home/travis/.m2/repository/org/slf4j/slf4j-log4j12/1.7.7/slf4j-log4j12-1.7.7.jar:/home/travis/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/travis/.m2/repository/junit/junit/4.11/junit-4.11.jar:/home/travis/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/travis/.m2/repository/org/mockito/mockito-all/1.9.5/mockito-all-1.9.5.jar:/home/travis/.m2/repository/org/powermock/powermock-module-junit4/1.5.5/powermock-module-junit4-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-module-junit4-common/1.5.5/powermock-module-junit4-common-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-core/1.5.5/powermock-core-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-reflect/1.5.5/powermock-reflect-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-api-mockito/1.5.5/powermock-api-mockito-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-api-support/1.5.5/powermock-api-support-1.5.5.jar:/home/travis/.m2/repository/org/hamcrest/hamcrest-all/1.3/hamcrest-all-1.3.jar:
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler=<NA>
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=3.13.0-40-generic
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=travis
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/travis
10:38:15,829 INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/home/travis/build/StephanEwen/incubator-flink/flink-staging/flink-streaming/flink-streaming-connectors/target
10:38:15,845 INFO org.apache.zookeeper.server.ZooKeeperServer - tickTime set to 3000
10:38:15,845 INFO org.apache.zookeeper.server.ZooKeeperServer - minSessionTimeout set to -1
10:38:15,845 INFO org.apache.zookeeper.server.ZooKeeperServer - maxSessionTimeout set to -1
10:38:15,864 INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:56890
10:38:16,945 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Starting KafkaServer
10:38:17,336 INFO kafka.utils.VerifiableProperties - Verifying properties
10:38:17,370 INFO kafka.utils.VerifiableProperties - Property advertised.host.name is overridden to testing-worker-linux-docker-e8c0cfc7-5418-linux-2
10:38:17,370 INFO kafka.utils.VerifiableProperties - Property broker.id is overridden to 0
10:38:17,370 INFO kafka.utils.VerifiableProperties - Property log.dir is overridden to /tmp/junit6916562036045478963/junit7940965830033948962
10:38:17,370 INFO kafka.utils.VerifiableProperties - Property message.max.bytes is overridden to 36700160
10:38:17,371 INFO kafka.utils.VerifiableProperties - Property port is overridden to 41293
10:38:17,371 INFO kafka.utils.VerifiableProperties - Property replica.fetch.max.bytes is overridden to 36700160
10:38:17,371 INFO kafka.utils.VerifiableProperties - Property zookeeper.connect is overridden to localhost:56890
10:38:17,408 INFO kafka.server.KafkaServer - [Kafka Server 0], starting
10:38:17,414 INFO kafka.server.KafkaServer - [Kafka Server 0], Connecting to zookeeper on localhost:56890
10:38:17,424 INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread.
10:38:17,430 INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
10:38:17,430 INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=testing-worker-linux-docker-e8c0cfc7-5418-linux-2.prod.travis-ci.org
10:38:17,431 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.7.0_76
10:38:17,431 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
10:38:17,431 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
10:38:17,431 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/home/travis/build/StephanEwen/incubator-flink/flink-staging/flink-streaming/flink-streaming-connectors/target/test-classes:/home/travis/build/StephanEwen/incubator-flink/flink-staging/flink-streaming/flink-streaming-connectors/target/classes:/home/travis/build/StephanEwen/incubator-flink/flink-staging/flink-streaming/flink-streaming-core/target/flink-streaming-core-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/org/apache/commons/commons-math/2.2/commons-math-2.2.jar:/home/travis/.m2/repository/org/apache/kafka/kafka_2.11/0.8.2.0/kafka_2.11-0.8.2.0.jar:/home/travis/.m2/repository/org/apache/kafka/kafka-clients/0.8.2.0/kafka-clients-0.8.2.0.jar:/home/travis/.m2/repository/net/jpountz/lz4/lz4/1.2.0/lz4-1.2.0.jar:/home/travis/.m2/repository/org/scala-lang/modules/scala-xml_2.11/1.0.2/scala-xml_2.11-1.0.2.jar:/home/travis/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/travis/.m2/repository/org/scala-lang/modules/scala-parser-combinators_2.11/1.0.2/scala-parser-combinators_2.11-1.0.2.jar:/home/travis/.m2/repository/com/101tec/zkclient/0.3/zkclient-0.3.jar:/home/travis/.m2/repository/org/scala-lang/scala-library/2.11.4/scala-library-2.11.4.jar:/home/travis/.m2/repository/org/apache/zookeeper/zookeeper/3.4.6/zookeeper-3.4.6.jar:/home/travis/.m2/repository/jline/jline/0.9.94/jline-0.9.94.jar:/home/travis/.m2/repository/org/apache/curator/curator-test/2.7.1/curator-test-2.7.1.jar:/home/travis/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/travis/.m2/repository/com/google/guava/guava/16.0.1/guava-16.0.1.jar:/home/travis/.m2/repository/com/rabbitmq/amqp-client/3.3.1/amqp-client-3.3.1.jar:/home/travis/.m2/repository/org/apache/flume/flume-ng-core/1.5.0/flume-ng-core-1.5.0.jar:/home/travis/.m2/repository/org/apache/flume/flume-ng-sdk/1.5.0/flume-ng-sdk-1.5.0.jar:/home/travis/.m2/repository/org/apache/flume/flume-ng-configuration/1.5.0/flume-ng-configuration-1.5.0.jar:/home/travis/.m2/repository/org/apache/avro/avro-ipc/1.7.6/avro-ipc-1.7.6.jar:/home/travis/.m2/repository/io/netty/netty/3.5.12.Final/netty-3.5.12.Final.jar:/home/travis/.m2/repository/joda-time/joda-time/2.5/joda-time-2.5.jar:/home/travis/.m2/repository/org/apache/mina/mina-core/2.0.4/mina-core-2.0.4.jar:/home/travis/.m2/repository/com/twitter/hbc-core/2.2.0/hbc-core-2.2.0.jar:/home/travis/.m2/repository/org/apache/httpcomponents/httpclient/4.2.5/httpclient-4.2.5.jar:/home/travis/.m2/repository/org/apache/httpcomponents/httpcore/4.2.4/httpcore-4.2.4.jar:/home/travis/.m2/repository/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar:/home/travis/.m2/repository/commons-codec/commons-codec/1.6/commons-codec-1.6.jar:/home/travis/.m2/repository/com/twitter/joauth/6.0.2/joauth-6.0.2.jar:/home/travis/.m2/repository/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar:/home/travis/.m2/repository/org/apache/sling/org.apache.sling.commons.json/2.0.6/org.apache.sling.commons.json-2.0.6.jar:/home/travis/build/StephanEwen/incubator-flink/flink-core/target/flink-core-0.9-SNAPSHOT.jar:/home/travis/build/StephanEwen/incubator-flink/flink-shaded-hadoop/flink-shaded-hadoop1/target/flink-shaded-hadoop1-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar:/home/travis/.m2/repository/com/esotericsoftware/kryo/kryo/2.24.0/kryo-2.24.0.jar:/home/travis/.m2/repository/com/esotericsoftware/minlog/minlog/1.2/minlog-1.2.jar:/home/travis/
.m2/repository/org/objenesis/objenesis/2.1/objenesis-2.1.jar:/home/travis/.m2/repository/com/twitter/chill_2.11/0.5.2/chill_2.11-0.5.2.jar:/home/travis/.m2/repository/com/twitter/chill-java/0.5.2/chill-java-0.5.2.jar:/home/travis/build/StephanEwen/incubator-flink/flink-runtime/target/flink-runtime-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/commons-cli/commons-cli/1.2/commons-cli-1.2.jar:/home/travis/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-server/8.0.0.M1/jetty-server-8.0.0.M1.jar:/home/travis/.m2/repository/org/mortbay/jetty/servlet-api/3.0.20100224/servlet-api-3.0.20100224.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-continuation/8.0.0.M1/jetty-continuation-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-http/8.0.0.M1/jetty-http-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-io/8.0.0.M1/jetty-io-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-util/8.0.0.M1/jetty-util-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-security/8.0.0.M1/jetty-security-8.0.0.M1.jar:/home/travis/.m2/repository/org/eclipse/jetty/jetty-servlet/8.0.0.M1/jetty-servlet-8.0.0.M1.jar:/home/travis/.m2/repository/com/amazonaws/aws-java-sdk/1.8.1/aws-java-sdk-1.8.1.jar:/home/travis/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.1.1/jackson-core-2.1.1.jar:/home/travis/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.1.1/jackson-databind-2.1.1.jar:/home/travis/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.1.1/jackson-annotations-2.1.1.jar:/home/travis/.m2/repository/io/netty/netty-all/4.0.27.Final/netty-all-4.0.27.Final.jar:/home/travis/.m2/repository/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar:/home/travis/.m2/repository/stax/stax-api/1.0.1/stax-api-1.0.1.jar:/home/travis/.m2/repository/com/typesafe/akka/akka-actor_2.11/2.3.7/akka-actor_2.11-2.3.7.jar:/home/travis/.m2/repository/com/typesafe/config/1.2.1/config-1.2.1.jar:/home/travis/.m2/repository/com/typesafe/akka/akka-remote_2.11/2.3.7/akka-remote_2.11-2.3.7.jar:/home/travis/.m2/repository/org/uncommons/maths/uncommons-maths/1.2.2a/uncommons-maths-1.2.2a.jar:/home/travis/.m2/repository/com/typesafe/akka/akka-slf4j_2.11/2.3.7/akka-slf4j_2.11-2.3.7.jar:/home/travis/.m2/repository/org/clapper/grizzled-slf4j_2.11/1.0.2/grizzled-slf4j_2.11-1.0.2.jar:/home/travis/.m2/repository/com/github/scopt/scopt_2.11/3.2.0/scopt_2.11-3.2.0.jar:/home/travis/.m2/repository/io/dropwizard/metrics/metrics-core/3.1.0/metrics-core-3.1.0.jar:/home/travis/.m2/repository/io/dropwizard/metrics/metrics-jvm/3.1.0/metrics-jvm-3.1.0.jar:/home/travis/.m2/repository/io/dropwizard/metrics/metrics-json/3.1.0/metrics-json-3.1.0.jar:/home/travis/build/StephanEwen/incubator-flink/flink-clients/target/flink-clients-0.9-SNAPSHOT.jar:/home/travis/build/StephanEwen/incubator-flink/flink-optimizer/target/flink-optimizer-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/commons-fileupload/commons-fileupload/1.3.1/commons-fileupload-1.3.1.jar:/home/travis/build/StephanEwen/incubator-flink/flink-java/target/flink-java-0.9-SNAPSHOT.jar:/home/travis/.m2/repository/org/apache/avro/avro/1.7.6/avro-1.7.6.jar:/home/travis/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/home/travis/.m2/repository/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar:/home/travis/.m2/repository/com/thoughtworks/paranamer/paranamer/2.3/paranamer-2.3.jar:/home/travis/.m2/repository/org/xe
rial/snappy/snappy-java/1.0.5/snappy-java-1.0.5.jar:/home/travis/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar:/home/travis/.m2/repository/org/tukaani/xz/1.0/xz-1.0.jar:/home/travis/.m2/repository/com/twitter/chill-avro_2.11/0.5.2/chill-avro_2.11-0.5.2.jar:/home/travis/.m2/repository/com/twitter/chill-bijection_2.11/0.5.2/chill-bijection_2.11-0.5.2.jar:/home/travis/.m2/repository/com/twitter/bijection-core_2.11/0.7.2/bijection-core_2.11-0.7.2.jar:/home/travis/.m2/repository/com/twitter/bijection-avro_2.11/0.7.2/bijection-avro_2.11-0.7.2.jar:/home/travis/.m2/repository/com/twitter/chill-protobuf/0.5.2/chill-protobuf-0.5.2.jar:/home/travis/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/travis/.m2/repository/com/twitter/chill-thrift/0.5.2/chill-thrift-0.5.2.jar:/home/travis/.m2/repository/org/apache/thrift/libthrift/0.6.1/libthrift-0.6.1.jar:/home/travis/.m2/repository/commons-lang/commons-lang/2.5/commons-lang-2.5.jar:/home/travis/.m2/repository/de/javakaffee/kryo-serializers/0.27/kryo-serializers-0.27.jar:/home/travis/.m2/repository/org/apache/commons/commons-lang3/3.3.2/commons-lang3-3.3.2.jar:/home/travis/.m2/repository/org/slf4j/slf4j-api/1.7.7/slf4j-api-1.7.7.jar:/home/travis/.m2/repository/org/slf4j/slf4j-log4j12/1.7.7/slf4j-log4j12-1.7.7.jar:/home/travis/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/travis/.m2/repository/junit/junit/4.11/junit-4.11.jar:/home/travis/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/travis/.m2/repository/org/mockito/mockito-all/1.9.5/mockito-all-1.9.5.jar:/home/travis/.m2/repository/org/powermock/powermock-module-junit4/1.5.5/powermock-module-junit4-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-module-junit4-common/1.5.5/powermock-module-junit4-common-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-core/1.5.5/powermock-core-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-reflect/1.5.5/powermock-reflect-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-api-mockito/1.5.5/powermock-api-mockito-1.5.5.jar:/home/travis/.m2/repository/org/powermock/powermock-api-support/1.5.5/powermock-api-support-1.5.5.jar:/home/travis/.m2/repository/org/hamcrest/hamcrest-all/1.3/hamcrest-all-1.3.jar:
10:38:17,431 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
10:38:17,431 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
10:38:17,431 INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
10:38:17,431 INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
10:38:17,432 INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
10:38:17,432 INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=3.13.0-40-generic
10:38:17,432 INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=travis
10:38:17,432 INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/travis
10:38:17,432 INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/home/travis/build/StephanEwen/incubator-flink/flink-staging/flink-streaming/flink-streaming-connectors/target
10:38:17,433 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:56890 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@34e1819
10:38:17,453 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:56890. Will not attempt to authenticate using SASL (unknown error)
10:38:17,455 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:56890, initiating session
10:38:17,455 INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:44525
10:38:17,462 INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:44525
10:38:17,466 INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1
10:38:17,483 INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x14d6bc0a1600000 with negotiated timeout 6000 for client /127.0.0.1:44525
10:38:17,484 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:56890, sessionid = 0x14d6bc0a1600000, negotiated timeout = 6000
10:38:17,486 INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
10:38:17,508 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
10:38:17,517 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:create cxid:0xa zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
10:38:17,535 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:create cxid:0x10 zxid:0xb txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
10:38:17,594 INFO kafka.log.LogManager - Loading logs.
10:38:17,603 INFO kafka.log.LogManager - Logs loading complete.
10:38:17,603 INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms.
10:38:17,607 INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms.
10:38:17,635 INFO kafka.network.Acceptor - Awaiting socket connections on 0.0.0.0:41293.
10:38:17,636 INFO kafka.network.SocketServer - [Socket Server on Broker 0], Started
10:38:17,805 INFO kafka.utils.Mx4jLoader$ - Will not load MX4J, mx4j-tools.jar is not in the classpath
10:38:17,809 INFO kafka.controller.KafkaController - [Controller 0]: Controller starting up
10:38:17,848 INFO kafka.server.ZookeeperLeaderElector - 0 successfully elected as leader
10:38:17,849 INFO kafka.controller.KafkaController - [Controller 0]: Broker 0 starting become controller state transition
10:38:17,851 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:setData cxid:0x1a zxid:0xf txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
10:38:17,856 INFO kafka.controller.KafkaController - [Controller 0]: Controller 0 incremented epoch to 1
10:38:17,872 INFO kafka.controller.KafkaController - [Controller 0]: Partitions undergoing preferred replica election:
10:38:17,872 INFO kafka.controller.KafkaController - [Controller 0]: Partitions that completed preferred replica election:
10:38:17,873 INFO kafka.controller.KafkaController - [Controller 0]: Resuming preferred replica election for partitions:
10:38:17,876 INFO kafka.controller.KafkaController - [Controller 0]: Partitions being reassigned: Map()
10:38:17,876 INFO kafka.controller.KafkaController - [Controller 0]: Partitions already reassigned: List()
10:38:17,877 INFO kafka.controller.KafkaController - [Controller 0]: Resuming reassignment of partitions: Map()
10:38:17,881 INFO kafka.controller.KafkaController - [Controller 0]: List of topics to be deleted:
10:38:17,881 INFO kafka.controller.KafkaController - [Controller 0]: List of topics ineligible for deletion:
10:38:17,886 INFO kafka.controller.KafkaController - [Controller 0]: Currently active brokers in the cluster: Set()
10:38:17,886 INFO kafka.controller.KafkaController - [Controller 0]: Currently shutting brokers in the cluster: Set()
10:38:17,887 INFO kafka.controller.KafkaController - [Controller 0]: Current list of topics in the cluster: Set()
10:38:17,890 INFO kafka.controller.ReplicaStateMachine - [Replica state machine on controller 0]: Started replica state machine with initial state -> Map()
10:38:17,898 INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Started partition state machine with initial state -> Map()
10:38:17,899 INFO kafka.controller.KafkaController - [Controller 0]: Broker 0 is ready to serve as the new controller with epoch 1
10:38:17,901 INFO kafka.controller.KafkaController - [Controller 0]: Starting preferred replica leader election for partitions
10:38:17,903 INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Invoking state change to OnlinePartition for partitions
10:38:17,905 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:delete cxid:0x27 zxid:0x11 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
10:38:17,908 INFO kafka.controller.KafkaController - [Controller 0]: starting the partition rebalance scheduler
10:38:17,909 INFO kafka.controller.KafkaController - [Controller 0]: Controller startup complete
10:38:17,936 INFO kafka.utils.ZkUtils$ - Registered broker 0 at path /brokers/ids/0 with address testing-worker-linux-docker-e8c0cfc7-5418-linux-2:41293.
10:38:17,949 INFO kafka.server.KafkaServer - [Kafka Server 0], started
10:38:17,950 INFO kafka.utils.VerifiableProperties - Verifying properties
10:38:17,950 INFO kafka.utils.VerifiableProperties - Property advertised.host.name is overridden to testing-worker-linux-docker-e8c0cfc7-5418-linux-2
10:38:17,950 INFO kafka.utils.VerifiableProperties - Property broker.id is overridden to 1
10:38:17,950 INFO kafka.utils.VerifiableProperties - Property log.dir is overridden to /tmp/junit6916562036045478963/junit3647141965421373810
10:38:17,950 INFO kafka.utils.VerifiableProperties - Property message.max.bytes is overridden to 36700160
10:38:17,950 INFO kafka.utils.VerifiableProperties - Property port is overridden to 52102
10:38:17,951 INFO kafka.utils.VerifiableProperties - Property replica.fetch.max.bytes is overridden to 36700160
10:38:17,951 INFO kafka.utils.VerifiableProperties - Property zookeeper.connect is overridden to localhost:56890
10:38:17,951 INFO kafka.server.KafkaServer - [Kafka Server 1], starting
10:38:17,951 INFO kafka.server.KafkaServer - [Kafka Server 1], Connecting to zookeeper on localhost:56890
10:38:17,951 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:56890 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@656d471b
10:38:17,951 INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread.
10:38:17,952 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:56890. Will not attempt to authenticate using SASL (unknown error)
10:38:17,952 INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:44526
10:38:17,952 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:56890, initiating session
10:38:17,953 INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:44526
10:38:17,954 INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x14d6bc0a1600001 with negotiated timeout 6000 for client /127.0.0.1:44526
10:38:17,954 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:56890, sessionid = 0x14d6bc0a1600001, negotiated timeout = 6000
10:38:17,954 INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
10:38:17,959 INFO kafka.log.LogManager - Loading logs.
10:38:17,959 INFO kafka.log.LogManager - Logs loading complete.
10:38:17,959 INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms.
10:38:17,959 INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms.
10:38:17,963 INFO kafka.network.Acceptor - Awaiting socket connections on 0.0.0.0:52102.
10:38:17,963 INFO kafka.network.SocketServer - [Socket Server on Broker 1], Started
10:38:17,985 INFO kafka.utils.Mx4jLoader$ - Will not load MX4J, mx4j-tools.jar is not in the classpath
10:38:17,986 INFO kafka.controller.KafkaController - [Controller 1]: Controller starting up
10:38:18,056 INFO kafka.server.ZookeeperLeaderElector$LeaderChangeListener - New leader is 0
10:38:18,065 INFO kafka.controller.ReplicaStateMachine$BrokerChangeListener - [BrokerChangeListener on Controller 0]: Broker change listener fired for path /brokers/ids with children 0
10:38:18,074 INFO kafka.controller.KafkaController - [Controller 1]: Controller startup complete
10:38:18,080 INFO kafka.utils.ZkUtils$ - Registered broker 1 at path /brokers/ids/1 with address testing-worker-linux-docker-e8c0cfc7-5418-linux-2:52102.
10:38:18,080 INFO kafka.server.KafkaServer - [Kafka Server 1], started
10:38:18,081 INFO kafka.utils.VerifiableProperties - Verifying properties
10:38:18,081 INFO kafka.utils.VerifiableProperties - Property advertised.host.name is overridden to testing-worker-linux-docker-e8c0cfc7-5418-linux-2
10:38:18,081 INFO kafka.utils.VerifiableProperties - Property broker.id is overridden to 2
10:38:18,081 INFO kafka.utils.VerifiableProperties - Property log.dir is overridden to /tmp/junit6916562036045478963/junit4642240598388163910
10:38:18,082 INFO kafka.utils.VerifiableProperties - Property message.max.bytes is overridden to 36700160
10:38:18,082 INFO kafka.utils.VerifiableProperties - Property port is overridden to 44901
10:38:18,082 INFO kafka.utils.VerifiableProperties - Property replica.fetch.max.bytes is overridden to 36700160
10:38:18,082 INFO kafka.utils.VerifiableProperties - Property zookeeper.connect is overridden to localhost:56890
10:38:18,082 INFO kafka.server.KafkaServer - [Kafka Server 2], starting
10:38:18,082 INFO kafka.server.KafkaServer - [Kafka Server 2], Connecting to zookeeper on localhost:56890
10:38:18,082 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:56890 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@7561f05
10:38:18,083 INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread.
10:38:18,083 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:56890. Will not attempt to authenticate using SASL (unknown error)
10:38:18,083 INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /0:0:0:0:0:0:0:1:55939
10:38:18,084 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/0:0:0:0:0:0:0:1:56890, initiating session
10:38:18,085 INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /0:0:0:0:0:0:0:1:55939
10:38:18,092 INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x14d6bc0a1600002 with negotiated timeout 6000 for client /0:0:0:0:0:0:0:1:55939
10:38:18,092 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:56890, sessionid = 0x14d6bc0a1600002, negotiated timeout = 6000
10:38:18,092 INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
10:38:18,097 INFO kafka.log.LogManager - Loading logs.
10:38:18,097 INFO kafka.log.LogManager - Logs loading complete.
10:38:18,097 INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms.
10:38:18,097 INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms.
10:38:18,101 INFO kafka.network.Acceptor - Awaiting socket connections on 0.0.0.0:44901.
10:38:18,101 INFO kafka.network.SocketServer - [Socket Server on Broker 2], Started
10:38:18,106 INFO kafka.utils.Mx4jLoader$ - Will not load MX4J, mx4j-tools.jar is not in the classpath
10:38:18,106 INFO kafka.controller.KafkaController - [Controller 2]: Controller starting up
10:38:18,127 INFO kafka.controller.KafkaController - [Controller 2]: Controller startup complete
10:38:18,135 INFO kafka.utils.ZkUtils$ - Registered broker 2 at path /brokers/ids/2 with address testing-worker-linux-docker-e8c0cfc7-5418-linux-2:44901.
10:38:18,135 INFO kafka.server.KafkaServer - [Kafka Server 2], started
10:38:18,135 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - ZK and KafkaServer started.
10:38:18,139 INFO kafka.utils.VerifiableProperties - Verifying properties
10:38:18,139 INFO kafka.utils.VerifiableProperties - Property auto.commit.enable is overridden to false
10:38:18,139 INFO kafka.utils.VerifiableProperties - Property auto.offset.reset is overridden to smallest
10:38:18,139 INFO kafka.utils.VerifiableProperties - Property group.id is overridden to flink-tests
10:38:18,140 INFO kafka.utils.VerifiableProperties - Property zookeeper.connect is overridden to localhost:56890
10:38:18,157 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:56890 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@67543094
10:38:18,160 INFO kafka.controller.ReplicaStateMachine$BrokerChangeListener - [BrokerChangeListener on Controller 0]: Newly added brokers: 0, deleted brokers: , all live brokers: 0
10:38:18,163 INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread.
10:38:18,168 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:56890. Will not attempt to authenticate using SASL (unknown error)
10:38:18,168 INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:44528
10:38:18,168 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:56890, initiating session
10:38:18,169 INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /127.0.0.1:44528
10:38:18,173 INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x14d6bc0a1600003 with negotiated timeout 6000 for client /127.0.0.1:44528
10:38:18,174 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:56890, sessionid = 0x14d6bc0a1600003, negotiated timeout = 6000
10:38:18,174 INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
10:38:18,175 INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-0-send-thread], Controller 0 connected to id:0,host:testing-worker-linux-docker-e8c0cfc7-5418-linux-2,port:41293 for sending state change requests
10:38:18,178 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Starting testPersistentSourceWithOffsetUpdates()
10:38:18,179 INFO kafka.controller.KafkaController - [Controller 0]: New broker startup callback for 0
10:38:18,183 INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-0-send-thread], Starting
10:38:18,183 INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost:56890 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@35060a12
10:38:18,183 INFO org.I0Itec.zkclient.ZkEventThread - Starting ZkClient event thread.
10:38:18,184 INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/0:0:0:0:0:0:0:1:56890. Will not attempt to authenticate using SASL (unknown error)
10:38:18,184 INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/0:0:0:0:0:0:0:1:56890, initiating session
10:38:18,184 INFO kafka.controller.ReplicaStateMachine$BrokerChangeListener - [BrokerChangeListener on Controller 0]: Broker change listener fired for path /brokers/ids with children 2,1,0
10:38:18,184 INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /0:0:0:0:0:0:0:1:55942
10:38:18,185 INFO org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session at /0:0:0:0:0:0:0:1:55942
10:38:18,186 INFO org.apache.zookeeper.server.ZooKeeperServer - Established session 0x14d6bc0a1600004 with negotiated timeout 6000 for client /0:0:0:0:0:0:0:1:55942
10:38:18,186 INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/0:0:0:0:0:0:0:1:56890, sessionid = 0x14d6bc0a1600004, negotiated timeout = 6000
10:38:18,186 INFO org.I0Itec.zkclient.ZkClient - zookeeper state changed (SyncConnected)
10:38:18,199 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Creating topic testOffsetHacking
10:38:18,225 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600004 type:setData cxid:0x3 zxid:0x19 txntype:-1 reqpath:n/a Error Path:/config/topics/testOffsetHacking Error:KeeperErrorCode = NoNode for /config/topics/testOffsetHacking
10:38:18,228 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600004 type:create cxid:0x4 zxid:0x1a txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics
10:38:18,237 INFO kafka.admin.AdminUtils$ - Topic creation {"version":1,"partitions":{"2":[1,2],"1":[0,1],"0":[2,0]}}
10:38:18,240 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Writing sequence from 0 to 99 to topic testOffsetHacking
10:38:18,330 INFO kafka.controller.ReplicaStateMachine$BrokerChangeListener - [BrokerChangeListener on Controller 0]: Newly added brokers: 2,1, deleted brokers: , all live brokers: 2,1,0
10:38:18,348 INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-2-send-thread], Controller 0 connected to id:2,host:testing-worker-linux-docker-e8c0cfc7-5418-linux-2,port:44901 for sending state change requests
10:38:18,370 INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-1-send-thread], Controller 0 connected to id:1,host:testing-worker-linux-docker-e8c0cfc7-5418-linux-2,port:52102 for sending state change requests
10:38:18,371 INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-2-send-thread], Starting
10:38:18,371 INFO kafka.controller.KafkaController - [Controller 0]: New broker startup callback for 2,1
10:38:18,384 INFO kafka.controller.RequestSendThread - [Controller-0-to-broker-1-send-thread], Starting
10:38:18,391 INFO kafka.controller.PartitionStateMachine$TopicChangeListener - [TopicChangeListener on Controller 0]: New topics: [Set(testOffsetHacking)], deleted topics: [Set()], new partition replica assignment [Map([testOffsetHacking,1] -> List(0, 1), [testOffsetHacking,0] -> List(2, 0), [testOffsetHacking,2] -> List(1, 2))]
10:38:18,392 INFO kafka.controller.KafkaController - [Controller 0]: New topic creation callback for [testOffsetHacking,1],[testOffsetHacking,0],[testOffsetHacking,2]
10:38:18,401 INFO kafka.controller.KafkaController - [Controller 0]: New partition creation callback for [testOffsetHacking,1],[testOffsetHacking,0],[testOffsetHacking,2]
10:38:18,402 INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Invoking state change to NewPartition for partitions [testOffsetHacking,1],[testOffsetHacking,0],[testOffsetHacking,2]
10:38:18,445 INFO kafka.controller.ReplicaStateMachine - [Replica state machine on controller 0]: Invoking state change to NewReplica for replicas [Topic=testOffsetHacking,Partition=1,Replica=1],[Topic=testOffsetHacking,Partition=0,Replica=2],[Topic=testOffsetHacking,Partition=2,Replica=2],[Topic=testOffsetHacking,Partition=0,Replica=0],[Topic=testOffsetHacking,Partition=1,Replica=0],[Topic=testOffsetHacking,Partition=2,Replica=1]
10:38:18,455 INFO kafka.controller.PartitionStateMachine - [Partition state machine on Controller 0]: Invoking state change to OnlinePartition for partitions [testOffsetHacking,1],[testOffsetHacking,0],[testOffsetHacking,2]
10:38:18,461 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:create cxid:0x45 zxid:0x1d txntype:-1 reqpath:n/a Error Path:/brokers/topics/testOffsetHacking/partitions/1 Error:KeeperErrorCode = NoNode for /brokers/topics/testOffsetHacking/partitions/1
10:38:18,463 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:create cxid:0x46 zxid:0x1e txntype:-1 reqpath:n/a Error Path:/brokers/topics/testOffsetHacking/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/testOffsetHacking/partitions
10:38:18,509 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:create cxid:0x4a zxid:0x22 txntype:-1 reqpath:n/a Error Path:/brokers/topics/testOffsetHacking/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/testOffsetHacking/partitions/0
10:38:18,517 INFO org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x14d6bc0a1600000 type:create cxid:0x4d zxid:0x25 txntype:-1 reqpath:n/a Error Path:/brokers/topics/testOffsetHacking/partitions/2 Error:KeeperErrorCode = NoNode for /brokers/topics/testOffsetHacking/partitions/2
10:38:18,539 INFO org.apache.flink.streaming.util.ClusterUtil - Running on mini cluster
10:38:18,554 INFO kafka.controller.ReplicaStateMachine - [Replica state machine on controller 0]: Invoking state change to OnlineReplica for replicas [Topic=testOffsetHacking,Partition=1,Replica=1],[Topic=testOffsetHacking,Partition=0,Replica=2],[Topic=testOffsetHacking,Partition=2,Replica=2],[Topic=testOffsetHacking,Partition=0,Replica=0],[Topic=testOffsetHacking,Partition=1,Replica=0],[Topic=testOffsetHacking,Partition=2,Replica=1]
10:38:18,603 INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions [testOffsetHacking,2]
10:38:18,606 INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [testOffsetHacking,1]
10:38:18,607 INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 2] Removed fetcher for partitions [testOffsetHacking,0]
10:38:18,660 INFO kafka.log.Log - Completed load of log testOffsetHacking-0 with log end offset 0
10:38:18,664 INFO kafka.log.LogManager - Created log for partition [testOffsetHacking,0] in /tmp/junit6916562036045478963/junit4642240598388163910 with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete, unclean.leader.election.enable -> true, segment.ms -> 604800000, max.message.bytes -> 36700160, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000, segment.jitter.ms -> 0}.
10:38:18,664 INFO kafka.log.Log - Completed load of log testOffsetHacking-2 with log end offset 0
10:38:18,667 INFO kafka.log.LogManager - Created log for partition [testOffsetHacking,2] in /tmp/junit6916562036045478963/junit3647141965421373810 with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete, unclean.leader.election.enable -> true, segment.ms -> 604800000, max.message.bytes -> 36700160, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000, segment.jitter.ms -> 0}.
10:38:18,667 INFO kafka.log.Log - Completed load of log testOffsetHacking-1 with log end offset 0
10:38:18,670 INFO kafka.log.LogManager - Created log for partition [testOffsetHacking,1] in /tmp/junit6916562036045478963/junit7940965830033948962 with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete, unclean.leader.election.enable -> true, segment.ms -> 604800000, max.message.bytes -> 36700160, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000, segment.jitter.ms -> 0}.
10:38:18,674 WARN kafka.cluster.Partition - Partition [testOffsetHacking,2] on broker 1: No checkpointed highwatermark is found for partition [testOffsetHacking,2]
10:38:18,680 WARN kafka.cluster.Partition - Partition [testOffsetHacking,1] on broker 0: No checkpointed highwatermark is found for partition [testOffsetHacking,1]
10:38:18,681 WARN kafka.cluster.Partition - Partition [testOffsetHacking,0] on broker 2: No checkpointed highwatermark is found for partition [testOffsetHacking,0]
10:38:18,697 INFO kafka.log.Log - Completed load of log testOffsetHacking-2 with log end offset 0
10:38:18,700 INFO kafka.log.Log - Completed load of log testOffsetHacking-1 with log end offset 0
10:38:18,702 INFO kafka.log.LogManager - Created log for partition [testOffsetHacking,1] in /tmp/junit6916562036045478963/junit3647141965421373810 with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete, unclean.leader.election.enable -> true, segment.ms -> 604800000, max.message.bytes -> 36700160, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000, segment.jitter.ms -> 0}.
10:38:18,702 WARN kafka.cluster.Partition - Partition [testOffsetHacking,1] on broker 1: No checkpointed highwatermark is found for partition [testOffsetHacking,1]
10:38:18,705 INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions [testOffsetHacking,1]
10:38:18,713 INFO kafka.log.Log - Completed load of log testOffsetHacking-0 with log end offset 0
10:38:18,715 INFO kafka.log.LogManager - Created log for partition [testOffsetHacking,0] in /tmp/junit6916562036045478963/junit7940965830033948962 with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete, unclean.leader.election.enable -> true, segment.ms -> 604800000, max.message.bytes -> 36700160, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000, segment.jitter.ms -> 0}.
10:38:18,715 WARN kafka.cluster.Partition - Partition [testOffsetHacking,0] on broker 0: No checkpointed highwatermark is found for partition [testOffsetHacking,0]
10:38:18,924 INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 0] Removed fetcher for partitions [testOffsetHacking,0]
10:38:18,929 INFO kafka.log.Log - Truncating log testOffsetHacking-1 to offset 0.
10:38:18,725 INFO kafka.log.LogManager - Created log for partition [testOffsetHacking,2] in /tmp/junit6916562036045478963/junit4642240598388163910 with properties {segment.index.bytes -> 10485760, file.delete.delay.ms -> 60000, segment.bytes -> 1073741824, flush.ms -> 9223372036854775807, delete.retention.ms -> 86400000, index.interval.bytes -> 4096, retention.bytes -> -1, min.insync.replicas -> 1, cleanup.policy -> delete, unclean.leader.election.enable -> true, segment.ms -> 604800000, max.message.bytes -> 36700160, flush.messages -> 9223372036854775807, min.cleanable.dirty.ratio -> 0.5, retention.ms -> 604800000, segment.jitter.ms -> 0}.
10:38:18,932 WARN kafka.cluster.Partition - Partition [testOffsetHacking,2] on broker 2: No checkpointed highwatermark is found for partition [testOffsetHacking,2]
10:38:18,933 INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 2] Removed fetcher for partitions [testOffsetHacking,2]
10:38:18,933 INFO kafka.log.Log - Truncating log testOffsetHacking-2 to offset 0.
10:38:18,944 INFO kafka.log.Log - Truncating log testOffsetHacking-0 to offset 0.
10:38:19,394 INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
10:38:19,415 INFO org.apache.flink.runtime.blob.BlobServer - Created BLOB server storage directory /tmp/blobStore-d6f6d057-2a66-40e6-8915-69c0c22b62a8
10:38:19,416 INFO org.apache.flink.runtime.blob.BlobServer - Started BLOB server at 0.0.0.0:46220 - max concurrent requests: 50 - max backlog: 1000
10:38:19,436 INFO org.apache.flink.runtime.jobmanager.JobManager - Starting JobManager at akka://flink/user/jobmanager#1973411457.
10:38:19,442 INFO org.apache.flink.runtime.taskmanager.TaskManager - Messages between TaskManager and JobManager have a max timeout of 100000 milliseconds
10:38:19,463 INFO org.apache.flink.runtime.taskmanager.TaskManager - Temporary file directory '/tmp': total 15 GB, usable 9 GB (60.00% usable)
10:38:19,505 INFO org.apache.flink.runtime.io.network.buffer.NetworkBufferPool - Allocated 64 MB for network buffer pool (number of memory segments: 2048, bytes per segment: 32768).
10:38:19,506 INFO org.apache.flink.runtime.taskmanager.TaskManager - Using 200 MB for Flink managed memory.
10:38:20,344 INFO org.apache.flink.runtime.io.disk.iomanager.IOManager - I/O manager uses directory /tmp/flink-io-b3d946fc-97b2-4a91-8dd6-d90d56992643 for spill files.
10:38:20,372 INFO org.apache.flink.runtime.filecache.FileCache - User file cache uses directory /tmp/flink-dist-cache-22ce7c11-fed8-469d-bd7b-7fc9a2f0c040
10:38:20,550 INFO org.apache.flink.runtime.taskmanager.TaskManager - Starting TaskManager actor at akka://flink/user/taskmanager_1#306060796.
10:38:20,551 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager data connection information: localhost (dataPort=37546)
10:38:20,552 INFO org.apache.flink.runtime.taskmanager.TaskManager - TaskManager has 3 task slot(s).
10:38:20,552 INFO org.apache.flink.runtime.taskmanager.TaskManager - Memory usage stats: [HEAP: 295/714/714 MB, NON HEAP: 28/53/130 MB (used/committed/max)]
10:38:20,557 INFO org.apache.flink.runtime.taskmanager.TaskManager - Trying to register at JobManager akka://flink/user/jobmanager (attempt 1, timeout: 500 milliseconds)
10:38:20,569 INFO org.apache.flink.runtime.instance.InstanceManager - Registered TaskManager at localhost (akka://flink/user/taskmanager_1) as 12fef6451d3abdef5ceda5e54079c8c0. Current number of registered hosts is 1.
10:38:20,575 INFO org.apache.flink.runtime.taskmanager.TaskManager - Successful registration at JobManager (akka://flink/user/jobmanager), starting network stack and library cache.
10:38:20,586 INFO org.apache.flink.runtime.taskmanager.TaskManager - Determined BLOB server address to be localhost/127.0.0.1:46220. Starting BLOB cache.
10:38:20,587 INFO org.apache.flink.runtime.blob.BlobCache - Created BLOB cache storage directory /tmp/blobStore-a75b3ff8-4edb-48b9-a7c2-fadaf3edc75f
10:38:20,606 INFO org.apache.flink.runtime.client.JobClient - Sending message to JobManager akka://flink/user/jobmanager to submit job Write sequence from 0 to 99 to topic testOffsetHacking (de7d3f8b724ef06f7b1c6c901e163c24) and wait for progress
10:38:20,609 INFO org.apache.flink.runtime.jobmanager.JobManager - Received job de7d3f8b724ef06f7b1c6c901e163c24 (Write sequence from 0 to 99 to topic testOffsetHacking).
10:38:20,627 INFO org.apache.flink.runtime.jobmanager.JobManager - Scheduling job Write sequence from 0 to 99 to topic testOffsetHacking.
10:38:20,627 INFO org.apache.flink.runtime.client.JobClient - Job was successfully submitted to the JobManager
10:38:20,628 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Job execution switched to status RUNNING.
10:38:20,630 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (1/3) (eb239f4299e9bdc71f552a2c3241c056) switched from CREATED to SCHEDULED
10:38:20,631 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(1/3) switched to SCHEDULED
10:38:20,636 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (1/3) (eb239f4299e9bdc71f552a2c3241c056) switched from SCHEDULED to DEPLOYING
10:38:20,637 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(1/3) switched to DEPLOYING
10:38:20,637 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Custom source -> Stream Sink (1/3) (attempt #0) to localhost
10:38:20,640 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (2/3) (49fb290b3795470b31f19929895e470f) switched from CREATED to SCHEDULED
10:38:20,641 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (2/3) (49fb290b3795470b31f19929895e470f) switched from SCHEDULED to DEPLOYING
10:38:20,641 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Custom source -> Stream Sink (2/3) (attempt #0) to localhost
10:38:20,642 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (3/3) (6e38b372ae659c492e8eec1e6213b0c5) switched from CREATED to SCHEDULED
10:38:20,642 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (3/3) (6e38b372ae659c492e8eec1e6213b0c5) switched from SCHEDULED to DEPLOYING
10:38:20,642 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Deploying Custom source -> Stream Sink (3/3) (attempt #0) to localhost
10:38:20,644 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(2/3) switched to SCHEDULED
10:38:20,644 INFO org.apache.flink.runtime.jobmanager.JobManager - Status of job de7d3f8b724ef06f7b1c6c901e163c24 (Write sequence from 0 to 99 to topic testOffsetHacking) changed to RUNNING .
10:38:20,644 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(2/3) switched to DEPLOYING
10:38:20,644 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(3/3) switched to SCHEDULED
10:38:20,645 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(3/3) switched to DEPLOYING
10:38:20,649 INFO org.apache.flink.runtime.taskmanager.TaskManager - Received task Custom source -> Stream Sink (1/3)
10:38:20,649 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Custom source -> Stream Sink (1/3)
10:38:20,651 INFO org.apache.flink.runtime.taskmanager.TaskManager - Received task Custom source -> Stream Sink (2/3)
10:38:20,652 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Custom source -> Stream Sink (1/3) [DEPLOYING]
10:38:20,653 INFO org.apache.flink.runtime.taskmanager.TaskManager - Received task Custom source -> Stream Sink (3/3)
10:38:20,655 WARN org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable - Environment did not contain an ExecutionConfig - using a default config.
10:38:20,666 INFO org.apache.flink.runtime.taskmanager.Task - Custom source -> Stream Sink (1/3) switched to RUNNING
10:38:20,671 INFO kafka.utils.VerifiableProperties - Verifying properties
10:38:20,671 WARN kafka.utils.VerifiableProperties - Property flink.kafka.wrapper.serialized is not valid
10:38:20,671 INFO kafka.utils.VerifiableProperties - Property key.serializer.class is overridden to kafka.serializer.DefaultEncoder
10:38:20,671 INFO kafka.utils.VerifiableProperties - Property message.send.max.retries is overridden to 10
10:38:20,671 INFO kafka.utils.VerifiableProperties - Property metadata.broker.list is overridden to localhost:41293,localhost:52102,localhost:44901,
10:38:20,672 INFO kafka.utils.VerifiableProperties - Property partitioner.class is overridden to org.apache.flink.streaming.connectors.kafka.api.config.PartitionerWrapper
10:38:20,672 INFO kafka.utils.VerifiableProperties - Property request.required.acks is overridden to -1
10:38:20,672 INFO kafka.utils.VerifiableProperties - Property serializer.class is overridden to kafka.serializer.DefaultEncoder
10:38:20,681 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 1 @ 1432031900681
10:38:20,652 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Custom source -> Stream Sink (2/3)
10:38:20,690 INFO org.apache.flink.runtime.taskmanager.Task - Loading JAR files for task Custom source -> Stream Sink (3/3)
10:38:20,696 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Custom source -> Stream Sink (3/3) [DEPLOYING]
10:38:20,697 INFO org.apache.flink.runtime.taskmanager.Task - Registering task at network: Custom source -> Stream Sink (2/3) [DEPLOYING]
10:38:20,697 WARN org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable - Environment did not contain an ExecutionConfig - using a default config.
10:38:20,697 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Starting source.
10:38:20,697 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Writing 0 to partition 0
10:38:20,696 WARN org.apache.flink.runtime.jobgraph.tasks.AbstractInvokable - Environment did not contain an ExecutionConfig - using a default config.
10:38:20,710 INFO org.apache.flink.runtime.taskmanager.Task - Custom source -> Stream Sink (3/3) switched to RUNNING
10:38:20,710 INFO kafka.utils.VerifiableProperties - Verifying properties
10:38:20,710 WARN kafka.utils.VerifiableProperties - Property flink.kafka.wrapper.serialized is not valid
10:38:20,711 INFO kafka.utils.VerifiableProperties - Property key.serializer.class is overridden to kafka.serializer.DefaultEncoder
10:38:20,711 INFO kafka.utils.VerifiableProperties - Property message.send.max.retries is overridden to 10
10:38:20,711 INFO kafka.utils.VerifiableProperties - Property metadata.broker.list is overridden to localhost:41293,localhost:52102,localhost:44901,
10:38:20,711 INFO kafka.utils.VerifiableProperties - Property partitioner.class is overridden to org.apache.flink.streaming.connectors.kafka.api.config.PartitionerWrapper
10:38:20,711 INFO kafka.utils.VerifiableProperties - Property request.required.acks is overridden to -1
10:38:20,713 INFO kafka.utils.VerifiableProperties - Property serializer.class is overridden to kafka.serializer.DefaultEncoder
10:38:20,714 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Starting source.
10:38:20,714 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Writing 0 to partition 2
10:38:20,715 INFO org.apache.flink.runtime.taskmanager.Task - Custom source -> Stream Sink (2/3) switched to RUNNING
10:38:20,715 INFO kafka.utils.VerifiableProperties - Verifying properties
10:38:20,715 WARN kafka.utils.VerifiableProperties - Property flink.kafka.wrapper.serialized is not valid
10:38:20,715 INFO kafka.utils.VerifiableProperties - Property key.serializer.class is overridden to kafka.serializer.DefaultEncoder
10:38:20,716 INFO kafka.utils.VerifiableProperties - Property message.send.max.retries is overridden to 10
10:38:20,716 INFO kafka.utils.VerifiableProperties - Property metadata.broker.list is overridden to localhost:41293,localhost:52102,localhost:44901,
10:38:20,716 INFO kafka.utils.VerifiableProperties - Property partitioner.class is overridden to org.apache.flink.streaming.connectors.kafka.api.config.PartitionerWrapper
10:38:20,716 INFO kafka.utils.VerifiableProperties - Property request.required.acks is overridden to -1
10:38:20,716 INFO kafka.utils.VerifiableProperties - Property serializer.class is overridden to kafka.serializer.DefaultEncoder
10:38:20,717 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Starting source.
10:38:20,717 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Writing 0 to partition 1
10:38:20,731 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 2 @ 1432031900731
10:38:20,782 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 3 @ 1432031900782
10:38:20,791 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 0 for 1 topic(s) Set(testOffsetHacking)
10:38:20,792 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:20,779 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (1/3) (eb239f4299e9bdc71f552a2c3241c056) switched from DEPLOYING to RUNNING
10:38:20,794 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 1 on task Custom source -> Stream Sink
10:38:20,807 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (3/3) (6e38b372ae659c492e8eec1e6213b0c5) switched from DEPLOYING to RUNNING
10:38:20,808 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (2/3) (49fb290b3795470b31f19929895e470f) switched from DEPLOYING to RUNNING
10:38:20,794 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 1 on task Custom source -> Stream Sink
10:38:20,796 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(1/3) switched to RUNNING
10:38:20,814 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(3/3) switched to RUNNING
10:38:20,814 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:20 Custom source -> Stream Sink(2/3) switched to RUNNING
10:38:20,796 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 0 for 1 topic(s) Set(testOffsetHacking)
10:38:20,796 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 0 for 1 topic(s) Set(testOffsetHacking)
10:38:20,795 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 1 on task Custom source -> Stream Sink
10:38:20,815 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:20,816 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 3 on task Custom source -> Stream Sink
10:38:20,816 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 2 on task Custom source -> Stream Sink
10:38:20,814 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:20,815 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 3 on task Custom source -> Stream Sink
10:38:20,815 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 3 on task Custom source -> Stream Sink
10:38:20,817 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 2 on task Custom source -> Stream Sink
10:38:20,816 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 1
10:38:20,817 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 2 on task Custom source -> Stream Sink
10:38:20,831 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 4 @ 1432031900831
10:38:20,866 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 3
10:38:20,867 WARN org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Received late message for now expired checkpoint attempt 2
10:38:20,867 WARN org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Received late message for now expired checkpoint attempt 2
10:38:20,869 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 4 on task Custom source -> Stream Sink
10:38:20,878 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 4 on task Custom source -> Stream Sink
10:38:20,884 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 5 @ 1432031900884
10:38:20,884 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 4 on task Custom source -> Stream Sink
10:38:20,886 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 4
10:38:20,931 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 6 @ 1432031900931
10:38:20,953 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 5 on task Custom source -> Stream Sink
10:38:20,953 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 5 on task Custom source -> Stream Sink
10:38:20,953 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 5 on task Custom source -> Stream Sink
10:38:20,963 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 5
10:38:20,963 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 6 on task Custom source -> Stream Sink
10:38:20,964 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 6 on task Custom source -> Stream Sink
10:38:20,964 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 6 on task Custom source -> Stream Sink
10:38:20,965 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 6
10:38:20,981 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 7 @ 1432031900981
10:38:20,981 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 7 on task Custom source -> Stream Sink
10:38:20,982 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 7 on task Custom source -> Stream Sink
10:38:20,982 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 7 on task Custom source -> Stream Sink
10:38:20,982 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 7
10:38:21,006 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,009 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,009 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,010 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,012 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 1 for 1 topic(s) Set(testOffsetHacking)
10:38:21,009 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,010 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,016 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 1 for 1 topic(s) Set(testOffsetHacking)
10:38:21,016 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 1 for 1 topic(s) Set(testOffsetHacking)
10:38:21,016 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,017 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,026 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,031 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 8 @ 1432031901031
10:38:21,049 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 8 on task Custom source -> Stream Sink
10:38:21,084 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 9 @ 1432031901084
10:38:21,084 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 8 on task Custom source -> Stream Sink
10:38:21,085 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,088 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,068 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,053 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,092 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,093 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,093 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 10
10:38:21,053 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,093 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,094 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,094 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,094 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,094 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,094 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,094 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,094 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,105 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 10
10:38:21,105 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 10
10:38:21,105 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 8 on task Custom source -> Stream Sink
10:38:21,105 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 8
10:38:21,108 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 9 on task Custom source -> Stream Sink
10:38:21,108 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 9 on task Custom source -> Stream Sink
10:38:21,108 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 9 on task Custom source -> Stream Sink
10:38:21,108 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 9
10:38:21,131 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 10 @ 1432031901131
10:38:21,132 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 10 on task Custom source -> Stream Sink
10:38:21,132 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 10 on task Custom source -> Stream Sink
10:38:21,132 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 10 on task Custom source -> Stream Sink
10:38:21,132 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 10
10:38:21,181 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 11 @ 1432031901181
10:38:21,182 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 11 on task Custom source -> Stream Sink
10:38:21,182 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 11 on task Custom source -> Stream Sink
10:38:21,183 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 11 on task Custom source -> Stream Sink
10:38:21,183 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 11
10:38:21,195 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 2 for 1 topic(s) Set(testOffsetHacking)
10:38:21,196 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,199 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,199 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,199 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,200 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 3 for 1 topic(s) Set(testOffsetHacking)
10:38:21,200 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,203 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,203 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,203 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,203 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,204 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 9
10:38:21,205 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 2 for 1 topic(s) Set(testOffsetHacking)
10:38:21,205 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 2 for 1 topic(s) Set(testOffsetHacking)
10:38:21,205 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,205 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,210 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,210 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,211 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,211 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 3 for 1 topic(s) Set(testOffsetHacking)
10:38:21,212 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,212 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,213 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,213 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 3 for 1 topic(s) Set(testOffsetHacking)
10:38:21,213 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,215 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,218 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,219 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,219 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,219 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,219 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 9
10:38:21,219 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,220 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,220 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,220 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,220 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 9
10:38:21,231 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 12 @ 1432031901231
10:38:21,231 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 12 on task Custom source -> Stream Sink
10:38:21,232 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 12 on task Custom source -> Stream Sink
10:38:21,232 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 12 on task Custom source -> Stream Sink
10:38:21,232 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 12
10:38:21,281 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 13 @ 1432031901281
10:38:21,282 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 13 on task Custom source -> Stream Sink
10:38:21,282 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 13 on task Custom source -> Stream Sink
10:38:21,282 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 13 on task Custom source -> Stream Sink
10:38:21,282 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 13
10:38:21,304 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 4 for 1 topic(s) Set(testOffsetHacking)
10:38:21,305 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,308 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,308 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,309 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 5 for 1 topic(s) Set(testOffsetHacking)
10:38:21,309 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,309 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,312 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,312 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,312 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,312 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,312 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 8
10:38:21,319 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 4 for 1 topic(s) Set(testOffsetHacking)
10:38:21,320 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,321 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 4 for 1 topic(s) Set(testOffsetHacking)
10:38:21,321 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,324 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,324 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,324 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,325 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 5 for 1 topic(s) Set(testOffsetHacking)
10:38:21,325 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,325 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,326 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,326 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 5 for 1 topic(s) Set(testOffsetHacking)
10:38:21,326 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,327 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,331 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,331 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 14 @ 1432031901331
10:38:21,331 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,331 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,331 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 8
10:38:21,331 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,332 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,332 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,332 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,332 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 8
10:38:21,332 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,333 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 14 on task Custom source -> Stream Sink
10:38:21,333 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 14 on task Custom source -> Stream Sink
10:38:21,333 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 14 on task Custom source -> Stream Sink
10:38:21,333 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 14
10:38:21,381 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 15 @ 1432031901381
10:38:21,382 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 15 on task Custom source -> Stream Sink
10:38:21,382 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 15 on task Custom source -> Stream Sink
10:38:21,382 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 15 on task Custom source -> Stream Sink
10:38:21,383 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 15
10:38:21,413 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 6 for 1 topic(s) Set(testOffsetHacking)
10:38:21,413 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,416 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,417 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,417 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 7 for 1 topic(s) Set(testOffsetHacking)
10:38:21,417 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,423 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,427 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,427 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,427 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,427 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,427 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 7
10:38:21,431 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 16 @ 1432031901431
10:38:21,432 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 6 for 1 topic(s) Set(testOffsetHacking)
10:38:21,432 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,433 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 6 for 1 topic(s) Set(testOffsetHacking)
10:38:21,433 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,434 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 16 on task Custom source -> Stream Sink
10:38:21,437 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,437 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,437 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,437 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,437 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 7 for 1 topic(s) Set(testOffsetHacking)
10:38:21,438 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,438 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,438 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 7 for 1 topic(s) Set(testOffsetHacking)
10:38:21,438 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,439 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,441 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 16 on task Custom source -> Stream Sink
10:38:21,441 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 16 on task Custom source -> Stream Sink
10:38:21,442 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 16
10:38:21,443 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,443 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,443 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,443 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,443 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 7
10:38:21,444 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,444 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,444 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,444 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 7
10:38:21,444 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,481 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 17 @ 1432031901481
10:38:21,481 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 17 on task Custom source -> Stream Sink
10:38:21,482 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 17 on task Custom source -> Stream Sink
10:38:21,482 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 17 on task Custom source -> Stream Sink
10:38:21,482 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 17
10:38:21,528 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 8 for 1 topic(s) Set(testOffsetHacking)
10:38:21,528 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,531 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,531 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,531 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 18 @ 1432031901531
10:38:21,531 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,532 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 9 for 1 topic(s) Set(testOffsetHacking)
10:38:21,532 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,533 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 18 on task Custom source -> Stream Sink
10:38:21,533 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 18 on task Custom source -> Stream Sink
10:38:21,534 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 18 on task Custom source -> Stream Sink
10:38:21,534 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 18
10:38:21,537 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,537 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,537 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,537 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 6
10:38:21,540 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,544 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 8 for 1 topic(s) Set(testOffsetHacking)
10:38:21,545 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,547 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 8 for 1 topic(s) Set(testOffsetHacking)
10:38:21,547 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,551 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,551 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,551 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,551 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,552 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 9 for 1 topic(s) Set(testOffsetHacking)
10:38:21,552 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,552 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,553 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,552 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 9 for 1 topic(s) Set(testOffsetHacking)
10:38:21,553 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,557 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,557 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,557 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,557 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 6
10:38:21,558 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,558 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,558 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,558 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,558 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 6
10:38:21,559 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,581 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 19 @ 1432031901581
10:38:21,581 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 19 on task Custom source -> Stream Sink
10:38:21,582 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 19 on task Custom source -> Stream Sink
10:38:21,582 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 19 on task Custom source -> Stream Sink
10:38:21,582 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 19
10:38:21,631 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 20 @ 1432031901631
10:38:21,632 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 20 on task Custom source -> Stream Sink
10:38:21,632 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 20 on task Custom source -> Stream Sink
10:38:21,632 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 20 on task Custom source -> Stream Sink
10:38:21,633 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 20
10:38:21,638 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 10 for 1 topic(s) Set(testOffsetHacking)
10:38:21,638 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,642 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,643 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,643 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 11 for 1 topic(s) Set(testOffsetHacking)
10:38:21,643 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,644 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,648 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,648 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,648 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,648 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,648 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 5
10:38:21,658 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 10 for 1 topic(s) Set(testOffsetHacking)
10:38:21,658 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,659 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 10 for 1 topic(s) Set(testOffsetHacking)
10:38:21,659 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,663 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,663 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,663 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,664 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,664 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 11 for 1 topic(s) Set(testOffsetHacking)
10:38:21,664 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,665 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,663 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,665 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 11 for 1 topic(s) Set(testOffsetHacking)
10:38:21,666 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,668 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,668 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,668 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,669 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 5
10:38:21,669 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,670 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,670 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,670 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,671 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 5
10:38:21,671 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,682 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 21 @ 1432031901682
10:38:21,682 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 21 on task Custom source -> Stream Sink
10:38:21,682 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 21 on task Custom source -> Stream Sink
10:38:21,683 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 21 on task Custom source -> Stream Sink
10:38:21,683 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 21
10:38:21,731 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 22 @ 1432031901731
10:38:21,732 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 22 on task Custom source -> Stream Sink
10:38:21,732 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 22 on task Custom source -> Stream Sink
10:38:21,732 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 22 on task Custom source -> Stream Sink
10:38:21,732 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 22
10:38:21,749 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 12 for 1 topic(s) Set(testOffsetHacking)
10:38:21,749 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,753 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,753 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,753 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 13 for 1 topic(s) Set(testOffsetHacking)
10:38:21,754 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,754 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,757 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,757 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,757 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,757 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 4
10:38:21,757 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,769 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 12 for 1 topic(s) Set(testOffsetHacking)
10:38:21,770 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,771 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 12 for 1 topic(s) Set(testOffsetHacking)
10:38:21,771 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,774 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,775 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,775 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 13 for 1 topic(s) Set(testOffsetHacking)
10:38:21,775 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,776 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,776 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,776 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,777 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,777 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 13 for 1 topic(s) Set(testOffsetHacking)
10:38:21,777 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,780 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,780 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,780 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,780 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 4
10:38:21,780 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,781 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,781 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,781 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,781 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,781 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 4
10:38:21,781 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 23 @ 1432031901781
10:38:21,782 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 23 on task Custom source -> Stream Sink
10:38:21,782 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 23 on task Custom source -> Stream Sink
10:38:21,782 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 23 on task Custom source -> Stream Sink
10:38:21,783 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 23
10:38:21,831 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 24 @ 1432031901831
10:38:21,831 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 24 on task Custom source -> Stream Sink
10:38:21,832 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 24 on task Custom source -> Stream Sink
10:38:21,832 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 24 on task Custom source -> Stream Sink
10:38:21,832 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 24
10:38:21,858 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 14 for 1 topic(s) Set(testOffsetHacking)
10:38:21,858 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,861 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,861 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,861 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,862 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 15 for 1 topic(s) Set(testOffsetHacking)
10:38:21,862 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,865 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,865 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,865 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,866 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 3
10:38:21,866 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,881 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 14 for 1 topic(s) Set(testOffsetHacking)
10:38:21,881 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 25 @ 1432031901881
10:38:21,881 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,882 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 14 for 1 topic(s) Set(testOffsetHacking)
10:38:21,882 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,883 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 25 on task Custom source -> Stream Sink
10:38:21,884 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 25 on task Custom source -> Stream Sink
10:38:21,886 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,886 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,886 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,887 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 15 for 1 topic(s) Set(testOffsetHacking)
10:38:21,887 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,887 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,888 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,888 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,888 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 15 for 1 topic(s) Set(testOffsetHacking)
10:38:21,889 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,892 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,892 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,892 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,892 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 3
10:38:21,892 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,895 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 25 on task Custom source -> Stream Sink
10:38:21,895 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:21,895 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 25
10:38:21,896 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,896 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,896 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 3
10:38:21,896 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,932 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 26 @ 1432031901932
10:38:21,932 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 26 on task Custom source -> Stream Sink
10:38:21,932 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 26 on task Custom source -> Stream Sink
10:38:21,933 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 26 on task Custom source -> Stream Sink
10:38:21,933 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 26
10:38:21,966 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 16 for 1 topic(s) Set(testOffsetHacking)
10:38:21,966 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,970 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,970 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,970 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 17 for 1 topic(s) Set(testOffsetHacking)
10:38:21,971 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:21,971 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,973 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:21,974 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:21,974 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,974 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:21,974 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 2
10:38:21,981 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 27 @ 1432031901981
10:38:21,981 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 27 on task Custom source -> Stream Sink
10:38:21,982 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 27 on task Custom source -> Stream Sink
10:38:21,982 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 27 on task Custom source -> Stream Sink
10:38:21,982 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 27
10:38:21,993 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 16 for 1 topic(s) Set(testOffsetHacking)
10:38:21,993 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:21,996 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 16 for 1 topic(s) Set(testOffsetHacking)
10:38:21,997 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:21,998 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:21,998 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:21,999 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 17 for 1 topic(s) Set(testOffsetHacking)
10:38:21,999 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:22,000 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,002 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:22,003 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:22,003 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,003 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,004 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 17 for 1 topic(s) Set(testOffsetHacking)
10:38:22,004 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,003 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,004 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:22,004 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 2
10:38:22,004 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,006 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,007 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,007 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,007 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:22,007 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 2
10:38:22,031 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 28 @ 1432031902031
10:38:22,032 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 28 on task Custom source -> Stream Sink
10:38:22,032 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 28 on task Custom source -> Stream Sink
10:38:22,035 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 28 on task Custom source -> Stream Sink
10:38:22,035 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 28
10:38:22,075 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 18 for 1 topic(s) Set(testOffsetHacking)
10:38:22,075 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,081 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,081 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,081 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,082 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 19 for 1 topic(s) Set(testOffsetHacking)
10:38:22,082 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,082 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 29 @ 1432031902082
10:38:22,083 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 29 on task Custom source -> Stream Sink
10:38:22,083 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 29 on task Custom source -> Stream Sink
10:38:22,084 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 29 on task Custom source -> Stream Sink
10:38:22,084 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 29
10:38:22,089 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,089 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,089 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,089 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:22,090 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 1
10:38:22,105 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 18 for 1 topic(s) Set(testOffsetHacking)
10:38:22,105 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:22,108 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 18 for 1 topic(s) Set(testOffsetHacking)
10:38:22,108 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,111 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:22,111 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,112 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,112 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 19 for 1 topic(s) Set(testOffsetHacking)
10:38:22,112 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,113 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,113 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,113 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 19 for 1 topic(s) Set(testOffsetHacking)
10:38:22,113 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,114 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,116 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,116 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,117 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,117 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:22,117 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 1
10:38:22,120 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,120 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,121 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,121 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:22,121 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 1
10:38:22,131 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 30 @ 1432031902131
10:38:22,132 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 30 on task Custom source -> Stream Sink
10:38:22,132 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 30 on task Custom source -> Stream Sink
10:38:22,132 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 30 on task Custom source -> Stream Sink
10:38:22,133 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 30
10:38:22,181 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 31 @ 1432031902181
10:38:22,181 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 31 on task Custom source -> Stream Sink
10:38:22,181 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 31 on task Custom source -> Stream Sink
10:38:22,182 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 31 on task Custom source -> Stream Sink
10:38:22,182 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 31
10:38:22,190 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 20 for 1 topic(s) Set(testOffsetHacking)
10:38:22,191 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:22,193 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:22,194 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,194 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,194 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 21 for 1 topic(s) Set(testOffsetHacking)
10:38:22,194 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,197 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,198 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,198 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,198 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:22,198 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 0
10:38:22,218 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 20 for 1 topic(s) Set(testOffsetHacking)
10:38:22,218 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,221 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,221 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 20 for 1 topic(s) Set(testOffsetHacking)
10:38:22,222 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,222 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,222 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:1,host:localhost,port:52102 with correlation id 21 for 1 topic(s) Set(testOffsetHacking)
10:38:22,222 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,222 INFO kafka.producer.SyncProducer - Connected to localhost:52102 for producing
10:38:22,227 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,228 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,228 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 21 for 1 topic(s) Set(testOffsetHacking)
10:38:22,228 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:22,229 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,230 INFO kafka.producer.SyncProducer - Disconnecting from localhost:52102
10:38:22,230 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,230 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,230 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:22,230 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 0
10:38:22,231 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 32 @ 1432031902231
10:38:22,232 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 32 on task Custom source -> Stream Sink
10:38:22,232 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:22,233 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,233 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,233 ERROR kafka.producer.async.DefaultEventHandler - Failed to collate messages by topic, partition due to: Failed to fetch topic metadata for topic: testOffsetHacking
10:38:22,233 INFO kafka.producer.async.DefaultEventHandler - Back off for 100 ms before retrying send. Remaining retries = 0
10:38:22,233 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 32 on task Custom source -> Stream Sink
10:38:22,233 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 32 on task Custom source -> Stream Sink
10:38:22,233 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 32
10:38:22,281 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 33 @ 1432031902281
10:38:22,282 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 33 on task Custom source -> Stream Sink
10:38:22,282 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 33 on task Custom source -> Stream Sink
10:38:22,282 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 33 on task Custom source -> Stream Sink
10:38:22,283 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Completed checkpoint 33
10:38:22,298 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 22 for 1 topic(s) Set(testOffsetHacking)
10:38:22,299 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:22,302 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:22,302 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,302 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,303 ERROR kafka.producer.async.DefaultEventHandler - Failed to send requests for topics testOffsetHacking with correlation ids in [0,22]
10:38:22,304 ERROR org.apache.flink.streaming.api.operators.StreamOperator - Calling user function failed
kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
10:38:22,306 ERROR org.apache.flink.streaming.api.operators.StreamOperator - Calling user function failed
java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
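Note: the trace above is the old Scala producer (kafka.javaapi.producer.Producer, called from KafkaSink.invoke) giving up. Every metadata fetch for testOffsetHacking returns LeaderNotAvailableException because no partition leader has been elected yet, and the retry budget burns quickly: the visible countdown from "Remaining retries = 4" (10:38:21,757) to "= 0" (10:38:22,198) spans only about 440 ms before "Failed to send messages after 10 tries" is thrown. The two producer settings that drive this loop in Kafka 0.8.2 are message.send.max.retries and retry.backoff.ms. The test's actual producer configuration is not visible in this excerpt, so the snippet below is only a sketch of how such a producer is typically wired up; the broker ports and topic name are taken from the log, everything else is assumed:

    // Sketch only: a standalone Kafka 0.8 "old" producer configured the way the
    // retry loop in this log suggests. Broker ports and topic are copied from the
    // log above; the property values and class names here are illustrative and
    // not taken from KafkaITCase or KafkaSink.
    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class LeaderRetrySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "localhost:41293,localhost:52102,localhost:44901");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("request.required.acks", "1");
            // The two settings behind "Back off for 100 ms before retrying send" and
            // "Failed to send messages after 10 tries":
            props.put("message.send.max.retries", "10");
            props.put("retry.backoff.ms", "100");

            Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
            try {
                // Throws kafka.common.FailedToSendMessageException if the retry budget
                // is exhausted while the topic still has no partition leader.
                producer.send(new KeyedMessage<String, String>("testOffsetHacking", "probe"));
            } finally {
                producer.close();
            }
        }
    }

Raising message.send.max.retries and/or retry.backoff.ms, or waiting for the topic's leader election to finish before the sink starts producing, would be the usual ways to ride out this window.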
10:38:22,307 INFO kafka.producer.Producer - Shutting down producer
10:38:22,331 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 34 @ 1432031902331
10:38:22,331 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:2,host:localhost,port:44901 with correlation id 22 for 1 topic(s) Set(testOffsetHacking)
10:38:22,332 INFO kafka.producer.SyncProducer - Connected to localhost:44901 for producing
10:38:22,333 INFO kafka.client.ClientUtils$ - Fetching metadata from broker id:0,host:localhost,port:41293 with correlation id 22 for 1 topic(s) Set(testOffsetHacking)
10:38:22,334 INFO kafka.producer.SyncProducer - Connected to localhost:41293 for producing
10:38:22,338 INFO kafka.producer.SyncProducer - Disconnecting from localhost:41293
10:38:22,339 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,339 ERROR kafka.producer.async.DefaultEventHandler - Failed to send requests for topics testOffsetHacking with correlation ids in [0,22]
10:38:22,339 ERROR org.apache.flink.streaming.api.operators.StreamOperator - Calling user function failed
kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
10:38:22,340 ERROR org.apache.flink.streaming.api.operators.StreamOperator - Calling user function failed
java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,340 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,342 INFO kafka.producer.SyncProducer - Disconnecting from localhost:44901
10:38:22,342 WARN kafka.producer.BrokerPartitionInfo - Error while fetching metadata [{TopicMetadata for topic testOffsetHacking ->
No partition metadata for topic testOffsetHacking due to kafka.common.LeaderNotAvailableException}] for topic [testOffsetHacking]: class kafka.common.LeaderNotAvailableException
10:38:22,342 ERROR kafka.producer.async.DefaultEventHandler - Failed to send requests for topics testOffsetHacking with correlation ids in [0,22]
10:38:22,342 ERROR org.apache.flink.streaming.api.operators.StreamOperator - Calling user function failed
kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
10:38:22,343 ERROR org.apache.flink.streaming.api.operators.StreamOperator - Calling user function failed
java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,343 INFO kafka.network.Processor - Closing socket connection to /127.0.0.1.
10:38:22,359 INFO kafka.producer.ProducerPool - Closing all sync producers
10:38:22,340 INFO kafka.producer.Producer - Shutting down producer
10:38:22,371 INFO kafka.producer.Producer - Producer shutdown completed in 64 ms
10:38:22,372 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask - StreamOperator failed due to: java.lang.RuntimeException: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 4 more
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,372 INFO org.apache.flink.runtime.taskmanager.Task - Custom source -> Stream Sink (1/3) switched to FAILED with exception.
java.lang.RuntimeException: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 4 more
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,377 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Source got cancel()
10:38:22,377 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Custom source -> Stream Sink (1/3)
10:38:22,343 INFO kafka.producer.Producer - Shutting down producer
10:38:22,381 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Triggering checkpoint 35 @ 1432031902381
10:38:22,399 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 34 on task Custom source -> Stream Sink
10:38:22,412 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 34 on task Custom source -> Stream Sink
10:38:22,384 INFO kafka.producer.ProducerPool - Closing all sync producers
10:38:22,409 INFO kafka.producer.ProducerPool - Closing all sync producers
10:38:22,415 INFO kafka.producer.Producer - Producer shutdown completed in 43 ms
10:38:22,415 INFO kafka.producer.Producer - Producer shutdown completed in 37 ms
10:38:22,415 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask - StreamOperator failed due to: java.lang.RuntimeException: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 4 more
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,415 INFO org.apache.flink.runtime.taskmanager.Task - Custom source -> Stream Sink (2/3) switched to FAILED with exception.
java.lang.RuntimeException: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 4 more
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,416 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Source got cancel()
10:38:22,417 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Custom source -> Stream Sink (2/3)
10:38:22,417 INFO org.apache.flink.runtime.taskmanager.TaskManager - Unregistering task and sending final execution state FAILED to JobManager for task Custom source -> Stream Sink (eb239f4299e9bdc71f552a2c3241c056)
10:38:22,415 INFO org.apache.flink.streaming.runtime.tasks.StreamTask - Starting checkpoint 35 on task Custom source -> Stream Sink
10:38:22,425 INFO org.apache.flink.runtime.taskmanager.TaskManager - Unregistering task and sending final execution state FAILED to JobManager for task Custom source -> Stream Sink (49fb290b3795470b31f19929895e470f)
10:38:22,427 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (1/3) (eb239f4299e9bdc71f552a2c3241c056) switched from RUNNING to FAILED
10:38:22,427 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (2/3) (49fb290b3795470b31f19929895e470f) switched from RUNNING to FAILED
10:38:22,428 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (3/3) (6e38b372ae659c492e8eec1e6213b0c5) switched from RUNNING to CANCELING
10:38:22,428 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:22 Custom source -> Stream Sink(1/3) switched to FAILED
java.lang.RuntimeException: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 4 more
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,428 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:22 Job execution switched to status FAILING.
10:38:22,428 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:22 Custom source -> Stream Sink(2/3) switched to FAILED
java.lang.RuntimeException: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 4 more
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,428 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:22 Custom source -> Stream Sink(3/3) switched to CANCELING
10:38:22,430 INFO org.apache.flink.runtime.taskmanager.TaskManager - Discarding the results produced by task execution eb239f4299e9bdc71f552a2c3241c056
10:38:22,431 INFO org.apache.flink.runtime.taskmanager.TaskManager - Discarding the results produced by task execution 49fb290b3795470b31f19929895e470f
10:38:22,432 INFO org.apache.flink.runtime.taskmanager.Task - Attempting to cancel task Custom source -> Stream Sink (3/3)
10:38:22,432 INFO org.apache.flink.runtime.taskmanager.Task - Custom source -> Stream Sink (3/3) switched to CANCELING
10:38:22,432 INFO org.apache.flink.runtime.taskmanager.Task - Triggering cancellation of task code Custom source -> Stream Sink (3/3) (6e38b372ae659c492e8eec1e6213b0c5).
10:38:22,415 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask - StreamOperator failed due to: java.lang.RuntimeException: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:34)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:139)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:142)
at org.apache.flink.streaming.api.operators.ChainableStreamOperator.collect(ChainableStreamOperator.java:54)
at org.apache.flink.streaming.api.collector.CollectorWrapper.collect(CollectorWrapper.java:39)
at org.apache.flink.streaming.connectors.kafka.KafkaITCase$3.run(KafkaITCase.java:326)
at org.apache.flink.streaming.api.operators.StreamSource.callUserFunction(StreamSource.java:40)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 4 more
Caused by: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:77)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
at org.apache.flink.streaming.connectors.kafka.api.KafkaSink.invoke(KafkaSink.java:183)
at org.apache.flink.streaming.api.operators.StreamSink.callUserFunction(StreamSink.java:39)
at org.apache.flink.streaming.api.operators.StreamOperator.callUserFunctionAndLogException(StreamOperator.java:137)
... 9 more
10:38:22,434 INFO org.apache.flink.runtime.taskmanager.Task - Custom source -> Stream Sink (3/3) switched to CANCELED
10:38:22,434 INFO org.apache.flink.runtime.taskmanager.Task - Freeing task resources for Custom source -> Stream Sink (3/3)
10:38:22,430 INFO org.apache.flink.runtime.jobmanager.JobManager - Status of job de7d3f8b724ef06f7b1c6c901e163c24 (Write sequence from 0 to 99 to topic testOffsetHacking) changed to FAILING java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries..
10:38:22,435 INFO org.apache.flink.streaming.connectors.kafka.KafkaITCase - Source got cancel()
10:38:22,435 INFO org.apache.flink.runtime.taskmanager.TaskManager - Unregistering task and sending final execution state CANCELED to JobManager for task Custom source -> Stream Sink (6e38b372ae659c492e8eec1e6213b0c5)
10:38:22,436 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph - Custom source -> Stream Sink (3/3) (6e38b372ae659c492e8eec1e6213b0c5) switched from CANCELING to CANCELED
10:38:22,437 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:22 Custom source -> Stream Sink(3/3) switched to CANCELED
10:38:22,437 INFO org.apache.flink.runtime.client.JobClient - 05/19/2015 10:38:22 Job execution switched to status FAILED.
10:38:22,437 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator - Stopping checkpoint coordinator for job de7d3f8b724ef06f7b1c6c901e163c24
10:38:22,439 INFO org.apache.flink.runtime.jobmanager.JobManager - Status of job de7d3f8b724ef06f7b1c6c901e163c24 (Write sequence from 0 to 99 to topic testOffsetHacking) changed to FAILED java.lang.RuntimeException: kafka.common.FailedToSendMessageException: Failed to send messages after 10 tries..
10:38:22,439 INFO org.apache.flink.runtime.minicluster.FlinkMiniCluster - Stopping FlinkMiniCluster.
10:38:22,445 INFO org.apache.flink.runtime.taskmanager.TaskManager - Stopping TaskManager akka://flink/user/taskmanager_1#306060796.
10:38:22,446 INFO org.apache.flink.runtime.taskmanager.TaskManager - Disassociating from JobManager
10:38:22,446 INFO org.apache.flink.runtime.jobmanager.JobManager - Stopping JobManager akka://flink/user/jobmanager#1973411457.
10:38:22,458 INFO org.apache.flink.runtime.io.disk.iomanager.IOManager - I/O manager removed spill file directory /tmp/flink-io-b3d946fc-97b2-4a91-8dd6-d90d56992643
10:38:22,462 INFO org.apache.flink.runtime.taskmanager.TaskManager - Task manager akka://flink/user/taskmanager_1 is completely shut down.
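
Note (not part of the Travis log above): the kafka.common.FailedToSendMessageException "Failed to send messages after 10 tries" is thrown by the old (pre-0.9) Scala producer that KafkaSink uses, once DefaultEventHandler has exhausted message.send.max.retries. On a freshly started embedded broker this usually means the topic partition still had no elected leader when the test began producing, which matches the "no leader?" suspicion in the gist title. Below is a minimal sketch of that producer's retry configuration with a longer retry window, so the send can survive a slow leader election on the Travis worker; the broker address and retry values are illustrative assumptions, not taken from the log.

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class RetryingProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker list is a placeholder; KafkaITCase starts its own embedded broker.
            props.put("metadata.broker.list", "localhost:9092");
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            // Require the partition leader to acknowledge the write.
            props.put("request.required.acks", "1");
            // Give the broker more time to elect a leader: more retries, longer backoff
            // (the failing run used the default of 10 retries).
            props.put("message.send.max.retries", "30");
            props.put("retry.backoff.ms", "500");

            Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
            // The old producer refreshes metadata and retries internally on each failed attempt;
            // FailedToSendMessageException is only thrown after all retries are exhausted.
            producer.send(new KeyedMessage<>("testOffsetHacking", "0"));
            producer.close();
        }
    }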