Mathieu Fenniak (mfenniak)

# ========
# captured on: Fri May 12 20:07:59 2017
# hostname : ip-10-34-11-79
# os release : 4.4.0-70-generic
# perf version : 4.4.49
# arch : x86_64
# nrcpus online : 1
# nrcpus avail : 1
# cpudesc : Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
# cpuid : GenuineIntel,6,63,2
Analysis of sampling postgres (pid 94571) every 1 millisecond
Process: postgres [94571]
Path: /Users/mathieu.fenniak/Development/pgsql-9.5.4-avoid-search/bin/postgres
Load Address: 0x1061db000
Identifier: postgres
Version: 0
Code Type: X86-64
Parent Process: postgres [94365]
Date/Time: 2017-05-10 09:40:17.180 -0600
Analysis of sampling postgres (pid 32304) every 1 millisecond
Process: postgres [32304]
Path: /usr/local/Cellar/postgresql/9.6.2/bin/postgres
Load Address: 0x10d9c1000
Identifier: postgres
Version: 0
Code Type: X86-64
Parent Process: postgres [1006]
Date/Time: 2017-05-05 14:07:12.371 -0600
22:24:58.587 [StreamThread-1] ERROR o.a.k.c.c.i.ConsumerCoordinator - User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group timesheet-list failed on partition assignment
org.apache.kafka.streams.errors.ProcessorStateException: Error opening store ServiceCenterParentFlatMap-topic at location /tmp/kafka-streams/timesheet-list/33_5/rocksdb/ServiceCenterParentFlatMap-topic
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:173)
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:143)
at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:148)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore$7.run(MeteredKeyValueStore.java:100)
at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
at org.apache.kafka.streams.state.internals.MeteredKeyValueStore.init(MeteredKeyValueStore
@mfenniak
mfenniak / log-sequence-CommitFailedException.log
Created March 10, 2017 18:03
Log output & sequence from Kafka Streams CommitFailedException
App starts up. Consumer config is logged showing max.poll.interval.ms = 1800000 (30 minutes)
{
"timestamp":"2017-03-08T22:13:51.139+00:00",
"message":"ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = earliest
bootstrap.servers = [10.20.4.219:9092, 10.20.6.54:9092]
check.crcs = true
client.id = timesheet-list-3d89bd2e-941f-4272-9916-aad4e6f82e36-StreamThread-1-consumer
connections.max.idle.ms = 540000
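
Not part of the original gist, but for reference: a minimal sketch, assuming a 0.10.2-era Kafka Streams StreamsConfig, of how a 30-minute max.poll.interval.ms like the one described above would typically be set. The application id and broker addresses are taken from the logged ConsumerConfig excerpt; everything else is illustrative.

import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.streams.StreamsConfig

// Hypothetical configuration sketch; values mirror the ConsumerConfig logged above.
fun streamsProperties(): Properties {
    val props = Properties()
    props[StreamsConfig.APPLICATION_ID_CONFIG] = "timesheet-list"                       // app/group id from the log
    props[StreamsConfig.BOOTSTRAP_SERVERS_CONFIG] = "10.20.4.219:9092,10.20.6.54:9092"  // brokers from the log
    props[ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG] = "1800000"                       // 30 minutes, as logged
    return props
}
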
fun printGraphviz(builder: TopologyBuilder) {
    println("digraph {")
    val nodeGroups = builder.nodeGroups()
    val processorTopologies = nodeGroups.map { kv -> builder.build(kv.key) }
    val sourceTopics = processorTopologies.flatMap { it.sourceTopics() }
    val sinkTopics = processorTopologies.flatMap { it.sinkTopics() }
    val allTopics = sourceTopics.plus(sinkTopics).distinct().sorted()
    println(" node [shape=\"rect\"]")
... more running log data ...
15:51:30.488 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Flushing state stores of task 19_2
15:51:30.488 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Flushing state stores of task 20_1
15:51:30.488 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Flushing state stores of task 21_1
15:51:30.488 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Flushing state stores of task 21_3
15:51:30.489 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Closing the state manager of task 2_0
15:51:30.501 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Closing the state manager of task 3_0
15:51:30.501 [StreamThread-1] INFO o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Closing the state manager of task 1_2
15:51:30.518 [StreamThread
@mfenniak
mfenniak / gist:509fb82dfcfda79a21cfc1b07dafa89c
Created December 4, 2016 03:43
kafka-streams error w/ trunk @ e43bbce
java.lang.IllegalStateException: Attempting to put a clean entry for key [urn:replicon-tenant:strprc971e3ca9:timesheet:97c0ce25-e039-4e8b-9f2c-d43f0668b755] into NamedCache [0_0-TimesheetNonBillableHours] when it already contains a dirty entry for the same key
at org.apache.kafka.streams.state.internals.NamedCache.put(NamedCache.java:124)
at org.apache.kafka.streams.state.internals.ThreadCache.put(ThreadCache.java:120)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.get(CachingKeyValueStore.java:146)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.get(CachingKeyValueStore.java:133)
at org.apache.kafka.streams.kstream.internals.KTableAggregate$KTableAggregateValueGetter.get(KTableAggregate.java:128)
at org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoin$KTableKTableLeftJoinProcessor.process(KTableKTableLeftJoin.java:81)
at org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoin$KTableKTableLeftJoinProcessor.process(KTableKTableLeftJoin.java:54)
at o

Keybase proof

I hereby claim:

  • I am mfenniak on github.
  • I am mfenniak (https://keybase.io/mfenniak) on keybase.
  • I have a public key ASBszYEV6fmIrudDoq2iEuw_FwT_X1WqTd3ECfOA-wfF9Ao

To claim this, I am signing this object:

org.apache.kafka.streams.errors.TopologyBuilderException: Invalid topology building: External source topic not found: TableNumber2Aggregated-repartition
at org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.ensureCopartitioning(StreamPartitionAssignor.java:452)
at org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.ensureCopartitioning(StreamPartitionAssignor.java:440)
at org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:267)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:260)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:404)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$900(AbstractCoordinator.java:81)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:358)
at org.apache.kafka.clients.consume