@tamird
Created November 4, 2015 15:44
⏚ [tamird:~/src/go/src/github.com/cockroachdb/cockroach] revert-revert-multicpu(+1/-0) 2 ± make testrace PKG=./storage TESTFLAGS='-count 300 -parallel 1'
go test -tags '' -race -i ./storage
go test -tags '' -race -run . ./storage -timeout 5m -count 300 -parallel 1
I1104 00:08:43.609204 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.610524 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.623744 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.713963 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.714968 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.716292 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.721331 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.722080 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.736351 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.741068 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.741706 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.752744 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.758755 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.759542 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.767059 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.772253 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.772825 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.822486 45133 stopper.go:236 draining; tasks left:
1 storage/id_alloc.go:106
I1104 00:08:43.823222 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.830285 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.831594 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:43.834837 45133 storage/id_alloc.go:114 unable to allocate 10 ids from "": storage/engine/mvcc.go:457: attempted access to empty key
I1104 00:08:43.886544 45133 stopper.go:236 draining; tasks left:
1 storage/id_alloc.go:106
I1104 00:08:43.887334 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.892502 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.893288 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.894154 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.931241 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.937106 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.940581 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.943818 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.944470 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.947215 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.950980 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.951797 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.953951 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.958918 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.959676 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.961598 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.966258 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.966910 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.968325 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.973036 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.973660 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:43.976332 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:43.979794 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:43.980666 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.132880 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:01 +0000 UTC +0.000s
I1104 00:08:44.133749 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.138685 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.139283 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.141801 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:01 +0000 UTC +0.000s
I1104 00:08:44.142229 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.147515 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.148216 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.149822 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.153081 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.154210 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.156128 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.159413 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.159990 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.160921 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.165427 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.166268 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.169018 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.174711 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.175357 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.177875 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.182327 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.182984 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.184523 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.186371 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.187020 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.188685 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.192051 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.192645 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.194615 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.199993 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.200622 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.206152 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.211024 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.211672 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.213343 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.217997 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.218610 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.220074 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.223627 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.224231 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.225798 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.230980 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.231610 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.234891 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.239713 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.240331 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.242550 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.247280 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.247989 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.318116 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.321869 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.322516 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
E1104 00:08:44.324677 45133 storage/replica.go:1442 stalling replica due to: unexpected EOF
I1104 00:08:44.325100 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.330040 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.330755 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.332494 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.337732 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.338305 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.341128 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.345828 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.346569 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.350476 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.355485 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.356279 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.360031 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.363738 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.364799 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.370143 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.374370 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.375094 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.377723 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.382488 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.383244 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.388854 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.393535 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.394188 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.399337 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.400146 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.402475 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1372
I1104 00:08:44.403497 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.408079 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.408764 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.411588 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.412027 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.413590 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1551
W1104 00:08:44.414659 45133 storage/replica.go:1548 unable to resolve intent: storage/replica_test.go:1923: boom
I1104 00:08:44.415294 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.420237 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.421144 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.423152 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.426983 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.427704 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.432252 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.435725 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.436342 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.443764 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.448358 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.449012 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.474488 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.479054 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.479674 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.496224 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.501435 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.502094 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.505101 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.510350 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.511020 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.513450 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.518156 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.518948 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.521518 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.526000 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.526787 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.529205 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.532702 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.533241 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.536727 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.542353 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.543346 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.549829 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.553294 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.558717 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.559407 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.561365 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.567859 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.568546 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.573934 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.578241 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.578972 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
E1104 00:08:44.580536 45133 storage/replica.go:1442 stalling replica due to: storage/replica.go:969: applied index moved backwards: 27 >= 14
I1104 00:08:44.581257 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.586511 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.587421 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.588510 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.593148 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.594092 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.596959 45133 stopper.go:236 draining; tasks left:
4 storage/replica.go:1372
I1104 00:08:44.598116 45133 stopper.go:236 draining; tasks left:
3 storage/replica.go:1372
I1104 00:08:44.598708 45133 stopper.go:236 draining; tasks left:
2 storage/replica.go:1372
I1104 00:08:44.599783 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1372
I1104 00:08:44.600605 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.602544 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.603125 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.605425 45133 stopper.go:236 draining; tasks left:
2 storage/replica.go:1372
I1104 00:08:44.607096 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1372
I1104 00:08:44.607896 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.612485 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.613147 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.617787 45133 stopper.go:236 draining; tasks left:
4 storage/replica.go:1372
I1104 00:08:44.618851 45133 stopper.go:236 draining; tasks left:
3 storage/replica.go:1372
I1104 00:08:44.619954 45133 stopper.go:236 draining; tasks left:
2 storage/replica.go:1372
I1104 00:08:44.620831 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1372
I1104 00:08:44.621771 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.626620 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.627161 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.628663 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.632549 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.641049 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.641607 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.643511 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.646751 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.647804 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.649989 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.653591 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.656887 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.659889 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.662059 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.662154 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.666070 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.666137 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.669155 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.671955 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.676174 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.681208 45133 storage/scanner_test.go:217 q1: 3, q2: 3, wanted: 2
I1104 00:08:44.681307 45133 storage/scanner_test.go:217 q1: 2, q2: 2, wanted: 2
I1104 00:08:44.784436 45133 storage/scanner_test.go:254 0: average scan: 15.400456ms
I1104 00:08:44.885110 45133 storage/scanner_test.go:254 1: average scan: 25.254511ms
I1104 00:08:44.898508 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.899231 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.901047 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.905647 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.909194 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.910736 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.911989 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.916862 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
W1104 00:08:44.928638 45133 storage/store_pool.go:59 store 2 on node 2 is now considered offline
W1104 00:08:44.935303 45133 storage/store_pool.go:59 store 2 on node 2 is now considered offline
W1104 00:08:44.943197 45133 storage/store_pool.go:59 store 2 on node 1 is now considered offline
W1104 00:08:44.943301 45133 storage/store_pool.go:59 store 1 on node 1 is now considered offline
W1104 00:08:44.943411 45133 storage/store_pool.go:59 store 3 on node 1 is now considered offline
W1104 00:08:44.943494 45133 storage/store_pool.go:59 store 5 on node 1 is now considered offline
W1104 00:08:44.943573 45133 storage/store_pool.go:59 store 4 on node 1 is now considered offline
W1104 00:08:44.962173 45133 storage/store_pool.go:59 store 2 on node 2 is now considered offline
W1104 00:08:44.962276 45133 storage/store_pool.go:59 store 1 on node 1 is now considered offline
W1104 00:08:44.962446 45133 storage/store_pool.go:59 store 4 on node 4 is now considered offline
W1104 00:08:44.962529 45133 storage/store_pool.go:59 store 3 on node 3 is now considered offline
W1104 00:08:44.962592 45133 storage/store_pool.go:59 store 5 on node 5 is now considered offline
I1104 00:08:44.971015 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.974561 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.979878 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.980483 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.981775 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.986403 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.987191 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.988998 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:44.993557 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:44.994250 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:44.996103 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.000258 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.000938 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.001870 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.008261 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.008833 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.012143 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.017388 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.018026 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.019114 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.024579 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.025137 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.026252 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.030794 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.031514 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.033286 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.037188 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.038158 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.039500 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.044374 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.045033 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.046601 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.047110 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.047974 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.053579 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.054378 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.057867 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.062827 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.063586 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.065078 45133 stopper.go:236 draining; tasks left:
1 storage/id_alloc.go:106
I1104 00:08:45.066037 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.069652 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.070374 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.072509 45133 stopper.go:236 draining; tasks left:
1 storage/id_alloc.go:106
I1104 00:08:45.073329 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.078429 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.079146 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.087976 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.092920 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.093655 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.097854 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.102670 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.103364 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.121840 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.127035 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.127724 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.132464 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.138182 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.138932 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.145405 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.150278 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.150886 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:45.156271 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["true-a"]: resolved? false
W1104 00:08:45.162139 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["true-a"]: resolved? false
W1104 00:08:45.168558 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["false-a"]: resolved? false
I1104 00:08:45.177315 45133 stopper.go:236 draining; tasks left:
37 storage/replica.go:1372
17 storage/replica.go:1532
I1104 00:08:45.179122 45133 stopper.go:236 draining; tasks left:
36 storage/replica.go:1372
17 storage/replica.go:1532
W1104 00:08:45.179491 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["false-a"]: resolved? false
I1104 00:08:45.179588 45133 stopper.go:236 draining; tasks left:
35 storage/replica.go:1372
17 storage/replica.go:1532
I1104 00:08:45.179894 45133 stopper.go:236 draining; tasks left:
35 storage/replica.go:1372
16 storage/replica.go:1532
I1104 00:08:45.181682 45133 stopper.go:236 draining; tasks left:
34 storage/replica.go:1372
16 storage/replica.go:1532
I1104 00:08:45.182061 45133 stopper.go:236 draining; tasks left:
33 storage/replica.go:1372
16 storage/replica.go:1532
I1104 00:08:45.182312 45133 stopper.go:236 draining; tasks left:
33 storage/replica.go:1372
15 storage/replica.go:1532
I1104 00:08:45.182607 45133 stopper.go:236 draining; tasks left:
33 storage/replica.go:1372
14 storage/replica.go:1532
I1104 00:08:45.183091 45133 stopper.go:236 draining; tasks left:
32 storage/replica.go:1372
14 storage/replica.go:1532
I1104 00:08:45.183745 45133 stopper.go:236 draining; tasks left:
32 storage/replica.go:1372
13 storage/replica.go:1532
I1104 00:08:45.184044 45133 stopper.go:236 draining; tasks left:
31 storage/replica.go:1372
13 storage/replica.go:1532
I1104 00:08:45.184883 45133 stopper.go:236 draining; tasks left:
31 storage/replica.go:1372
12 storage/replica.go:1532
I1104 00:08:45.185200 45133 stopper.go:236 draining; tasks left:
29 storage/replica.go:1372
12 storage/replica.go:1532
I1104 00:08:45.185911 45133 stopper.go:236 draining; tasks left:
29 storage/replica.go:1372
11 storage/replica.go:1532
I1104 00:08:45.186196 45133 stopper.go:236 draining; tasks left:
28 storage/replica.go:1372
10 storage/replica.go:1532
I1104 00:08:45.186811 45133 stopper.go:236 draining; tasks left:
9 storage/replica.go:1532
28 storage/replica.go:1372
I1104 00:08:45.187357 45133 stopper.go:236 draining; tasks left:
9 storage/replica.go:1532
27 storage/replica.go:1372
I1104 00:08:45.187793 45133 stopper.go:236 draining; tasks left:
8 storage/replica.go:1532
27 storage/replica.go:1372
I1104 00:08:45.188208 45133 stopper.go:236 draining; tasks left:
8 storage/replica.go:1532
26 storage/replica.go:1372
I1104 00:08:45.188970 45133 stopper.go:236 draining; tasks left:
7 storage/replica.go:1532
26 storage/replica.go:1372
I1104 00:08:45.189137 45133 stopper.go:236 draining; tasks left:
7 storage/replica.go:1532
25 storage/replica.go:1372
I1104 00:08:45.189934 45133 stopper.go:236 draining; tasks left:
6 storage/replica.go:1532
25 storage/replica.go:1372
I1104 00:08:45.190061 45133 stopper.go:236 draining; tasks left:
6 storage/replica.go:1532
24 storage/replica.go:1372
I1104 00:08:45.190378 45133 stopper.go:236 draining; tasks left:
6 storage/replica.go:1532
23 storage/replica.go:1372
I1104 00:08:45.190652 45133 stopper.go:236 draining; tasks left:
5 storage/replica.go:1532
23 storage/replica.go:1372
I1104 00:08:45.191256 45133 stopper.go:236 draining; tasks left:
4 storage/replica.go:1532
23 storage/replica.go:1372
I1104 00:08:45.191387 45133 stopper.go:236 draining; tasks left:
4 storage/replica.go:1532
22 storage/replica.go:1372
I1104 00:08:45.192385 45133 stopper.go:236 draining; tasks left:
3 storage/replica.go:1532
22 storage/replica.go:1372
I1104 00:08:45.193233 45133 stopper.go:236 draining; tasks left:
3 storage/replica.go:1532
21 storage/replica.go:1372
I1104 00:08:45.194161 45133 stopper.go:236 draining; tasks left:
21 storage/replica.go:1372
2 storage/replica.go:1532
I1104 00:08:45.195143 45133 stopper.go:236 draining; tasks left:
20 storage/replica.go:1372
2 storage/replica.go:1532
I1104 00:08:45.195687 45133 stopper.go:236 draining; tasks left:
2 storage/replica.go:1532
19 storage/replica.go:1372
I1104 00:08:45.195776 45133 stopper.go:236 draining; tasks left:
2 storage/replica.go:1532
18 storage/replica.go:1372
I1104 00:08:45.196296 45133 stopper.go:236 draining; tasks left:
18 storage/replica.go:1372
1 storage/replica.go:1532
I1104 00:08:45.196655 45133 stopper.go:236 draining; tasks left:
17 storage/replica.go:1372
1 storage/replica.go:1532
I1104 00:08:45.197747 45133 stopper.go:236 draining; tasks left:
16 storage/replica.go:1372
1 storage/replica.go:1532
I1104 00:08:45.199045 45133 stopper.go:236 draining; tasks left:
16 storage/replica.go:1372
I1104 00:08:45.200785 45133 stopper.go:236 draining; tasks left:
15 storage/replica.go:1372
I1104 00:08:45.201427 45133 stopper.go:236 draining; tasks left:
14 storage/replica.go:1372
I1104 00:08:45.202164 45133 stopper.go:236 draining; tasks left:
13 storage/replica.go:1372
I1104 00:08:45.202731 45133 stopper.go:236 draining; tasks left:
12 storage/replica.go:1372
I1104 00:08:45.203373 45133 stopper.go:236 draining; tasks left:
11 storage/replica.go:1372
I1104 00:08:45.204433 45133 stopper.go:236 draining; tasks left:
10 storage/replica.go:1372
I1104 00:08:45.204992 45133 stopper.go:236 draining; tasks left:
9 storage/replica.go:1372
I1104 00:08:45.205594 45133 stopper.go:236 draining; tasks left:
8 storage/replica.go:1372
I1104 00:08:45.206236 45133 stopper.go:236 draining; tasks left:
7 storage/replica.go:1372
I1104 00:08:45.207001 45133 stopper.go:236 draining; tasks left:
6 storage/replica.go:1372
I1104 00:08:45.207878 45133 stopper.go:236 draining; tasks left:
5 storage/replica.go:1372
I1104 00:08:45.208442 45133 stopper.go:236 draining; tasks left:
4 storage/replica.go:1372
I1104 00:08:45.209106 45133 stopper.go:236 draining; tasks left:
3 storage/replica.go:1372
I1104 00:08:45.209698 45133 stopper.go:236 draining; tasks left:
2 storage/replica.go:1372
I1104 00:08:45.210445 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1372
I1104 00:08:45.211449 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.215199 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.215740 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:45.292298 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["key2-00" "key2-01" "key2-02" "key2-03" "key2-04" "key2-05" "key2-06" "key2-07" "key2-08" "key2-09"]: resolved? false
I1104 00:08:45.297163 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1372
W1104 00:08:45.299138 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["key3-00" "key3-01" "key3-02" "key3-03" "key3-04" "key3-05" "key3-06" "key3-07" "key3-08" "key3-09"]: resolved? false
I1104 00:08:45.299803 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.305856 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.306542 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.327086 45133 stopper.go:236 draining; tasks left:
8 storage/replica.go:1372
2 storage/replica.go:1532
I1104 00:08:45.327246 45133 stopper.go:236 draining; tasks left:
8 storage/replica.go:1372
1 storage/replica.go:1532
I1104 00:08:45.328418 45133 stopper.go:236 draining; tasks left:
7 storage/replica.go:1372
1 storage/replica.go:1532
I1104 00:08:45.330137 45133 stopper.go:236 draining; tasks left:
7 storage/replica.go:1372
I1104 00:08:45.332910 45133 stopper.go:236 draining; tasks left:
6 storage/replica.go:1372
I1104 00:08:45.335591 45133 stopper.go:236 draining; tasks left:
5 storage/replica.go:1372
I1104 00:08:45.338917 45133 stopper.go:236 draining; tasks left:
4 storage/replica.go:1372
I1104 00:08:45.341307 45133 stopper.go:236 draining; tasks left:
3 storage/replica.go:1372
I1104 00:08:45.343430 45133 stopper.go:236 draining; tasks left:
2 storage/replica.go:1372
I1104 00:08:45.346520 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1372
I1104 00:08:45.348109 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.353363 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.354251 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.359787 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.365201 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.365979 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.367060 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.400917 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.401748 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.403274 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
E1104 00:08:45.405178 45133 config/config.go:263 unable to determine largest object ID from system config: empty system values in config
I1104 00:08:45.415640 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:45.420289 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.421671 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.425698 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "b"
I1104 00:08:45.433612 45133 storage/replica_command.go:1354 initiating a merge of range=2 ["b"-"\xff\xff") into range=1 [""-"b")
I1104 00:08:45.436566 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.438632 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:45.440881 45133 kv/local_sender.go:173 range not contained in one range: ["","b\x00"), but have ["","b")
W1104 00:08:45.461211 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:45.472600 45133 stopper.go:236 draining; tasks left:
1 kv/txn_coord_sender.go:752
I1104 00:08:45.473332 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.478629 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:45.484291 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.485569 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.491712 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "b"
I1104 00:08:45.507292 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.509169 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.511583 45133 storage/replica_command.go:1354 initiating a merge of range=2 ["b"-"\xff\xff") into range=1 [""-"b")
W1104 00:08:45.517462 45133 kv/local_sender.go:173 range not contained in one range: ["","b\x00"), but have ["","b")
W1104 00:08:45.521416 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:45.525641 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.530083 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:45.536334 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.538936 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.542111 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "b"
I1104 00:08:45.551798 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.552396 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.554886 45133 storage/replica_command.go:1354 initiating a merge of range=2 ["b"-"\xff\xff") into range=1 [""-"b")
W1104 00:08:45.557809 45133 kv/local_sender.go:173 range not contained in one range: ["","b\x00"), but have ["","b")
W1104 00:08:45.561813 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:45.565661 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.569237 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:45.572642 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.580206 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.589198 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.600067 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.601511 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.603034 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:45.606423 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:45.609030 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:45.612312 45133 gossip/gossip.go:175 gossiping node descriptor node_id:4 address:<network_field:"localhost" address_field:"4" > attrs:<>
I1104 00:08:45.614579 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "d"
I1104 00:08:45.622449 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"d") at key "b"
I1104 00:08:45.625142 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.625679 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:45.626684 45133 kv/local_sender.go:173 range not contained in one range: ["","d\x00"), but have ["","d")
I1104 00:08:45.633021 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:45.641805 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
W1104 00:08:45.757355 45133 kv/local_sender.go:173 range not contained in one range: ["\x00\x00meta2d","b\x00"), but have ["","b")
I1104 00:08:45.767667 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.773045 45133 storage/replica_command.go:1119 range 3: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.776474 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 3
W1104 00:08:45.776865 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2d"]: resolved? false
I1104 00:08:45.788159 45133 storage/store.go:1448 changing raft replica {4 4 3} for range 3
I1104 00:08:45.813958 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 2
I1104 00:08:45.825714 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 2
I1104 00:08:45.971982 45133 storage/replica_command.go:1354 initiating a merge of range=3 ["b"-"d") into range=1 [""-"b")
I1104 00:08:45.981569 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.981694 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.981768 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.981870 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:45.984208 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:45.986407 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.987126 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.990151 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "m"
I1104 00:08:45.997246 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:45.997705 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:45.999413 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:46.001759 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:46.002703 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:46.003940 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:46.006198 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:46.008200 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:46.008923 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:46.011244 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:46.013537 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:46.014862 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:46.020806 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:46.021505 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:46.022444 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:46.023769 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:46.025811 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:46.171940 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:46.172149 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:46.178604 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:46.179667 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:46.182382 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:46.183929 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:46.186119 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:47.101739 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:47.101883 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:47.105104 45133 storage/client_raft_test.go:371 using seed 6592049777589337389
I1104 00:08:47.107699 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:47.108242 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:47.109226 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:47.110454 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:47.112123 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
W1104 00:08:47.112702 45133 multiraft/multiraft.go:1151 aborting configuration change: storage/client_raft_test.go:382: boom
I1104 00:08:47.123091 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:47.269465 45133 stopper.go:236 draining; tasks left:
1 kv/txn_coord_sender.go:752
I1104 00:08:52.114627 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:52.114834 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:52.120575 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:52.121213 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:52.122115 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:52.123974 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:52.128026 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
E1104 00:08:52.277373 45133 storage/client_raft_test.go:513 expected stats on new range to equal old; {LiveBytes:1719 KeyBytes:514 ValBytes:1324 IntentBytes:0 LiveCount:17 KeyCount:17 ValCount:20 IntentCount:0 IntentAge:0 GCBytesAge:0 SysBytes:557 SysCount:7 LastUpdateNanos:0} != {LiveBytes:1703 KeyBytes:490 ValBytes:1246 IntentBytes:0 LiveCount:17 KeyCount:17 ValCount:18 IntentCount:0 IntentAge:0 GCBytesAge:0 SysBytes:324 SysCount:6 LastUpdateNanos:0}
I1104 00:08:52.279424 45133 storage/client_raft_test.go:532 read value 16
I1104 00:08:52.279563 45133 storage/client_raft_test.go:532 read value 16
I1104 00:08:52.279662 45133 storage/client_raft_test.go:532 read value 39
I1104 00:08:52.280977 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:52.281129 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:52.286839 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:52.287484 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:52.288277 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:52.289737 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:52.291425 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:52.293943 45133 storage/store.go:1448 changing raft replica {3 3 2} for range 1
I1104 00:08:52.301556 45133 storage/store.go:1448 changing raft replica {2 2 3} for range 1
I1104 00:08:52.441462 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:52.441595 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:52.441887 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:52.447438 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:52.448111 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:52.448955 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:52.450680 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:52.452392 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:52.453748 45133 gossip/gossip.go:175 gossiping node descriptor node_id:4 address:<network_field:"localhost" address_field:"4" > attrs:<>
I1104 00:08:52.455260 45133 gossip/gossip.go:175 gossiping node descriptor node_id:5 address:<network_field:"localhost" address_field:"5" > attrs:<>
I1104 00:08:52.456978 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:52.464633 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
I1104 00:08:52.613231 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "m"
W1104 00:08:52.630082 45133 kv/local_sender.go:173 range not contained in one range: ["\x00\x00meta2\xff\xff","m\x00"), but have ["","m")
I1104 00:08:52.969680 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:52.984715 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:52.985569 45133 storage/store.go:1448 changing raft replica {4 4 4} for range 2
I1104 00:08:53.023837 45133 storage/store.go:1448 changing raft replica {5 5 5} for range 2
I1104 00:08:53.100016 45133 storage/store.go:1448 changing raft replica {1 1 1} for range 2
I1104 00:08:53.363711 45133 storage/replica_command.go:1119 range 2: new leader lease replica {2 2 2} 1970-01-01 00:00:02 +0000 UTC +1.000s
I1104 00:08:53.399311 45133 storage/store.go:1448 changing raft replica {5 5 5} for range 2
W1104 00:08:53.404372 45133 multiraft/multiraft.go:1151 aborting configuration change: retry txn "storage/replica_command.go:1587 (*Replica).ChangeReplicas" id=dec6f044 key="\x00\x00\x00k1m\x00\x01rdsc" rw=false pri=95.03894779 iso=SERIALIZABLE stat=PENDING epo=0 ts=3.000000003,1 orig=2.000000002,54 max=2.000000002,54
W1104 00:08:53.405521 45133 multiraft/multiraft.go:1151 aborting configuration change: retry txn "storage/replica_command.go:1587 (*Replica).ChangeReplicas" id=dec6f044 key="\x00\x00\x00k1m\x00\x01rdsc" rw=false pri=95.03894779 iso=SERIALIZABLE stat=PENDING epo=0 ts=3.000000003,1 orig=2.000000002,54 max=2.000000002,54
W1104 00:08:53.405890 45133 multiraft/multiraft.go:1151 aborting configuration change: retry txn "storage/replica_command.go:1587 (*Replica).ChangeReplicas" id=dec6f044 key="\x00\x00\x00k1m\x00\x01rdsc" rw=false pri=95.03894779 iso=SERIALIZABLE stat=PENDING epo=0 ts=3.000000003,1 orig=2.000000002,54 max=2.000000002,54
W1104 00:08:53.405974 45133 multiraft/multiraft.go:1151 aborting configuration change: retry txn "storage/replica_command.go:1587 (*Replica).ChangeReplicas" id=dec6f044 key="\x00\x00\x00k1m\x00\x01rdsc" rw=false pri=95.03894779 iso=SERIALIZABLE stat=PENDING epo=0 ts=3.000000003,1 orig=2.000000002,54 max=2.000000002,54
I1104 00:08:53.420452 45133 storage/store.go:1448 changing raft replica {5 5 5} for range 2
E1104 00:08:53.447901 45133 multiraft/transport.go:176 sending rpc failed: unexpected EOF
E1104 00:08:53.448069 45133 multiraft/transport.go:176 sending rpc failed: unexpected EOF
W1104 00:08:53.448242 45133 rpc/server.go:438 rpc: write response failed: write tcp 127.0.0.1:59732->127.0.0.1:59737: use of closed network connection
W1104 00:08:53.448381 45133 rpc/server.go:438 rpc: write response failed: write tcp 127.0.0.1:59728->127.0.0.1:59734: use of closed network connection
W1104 00:08:53.448422 45133 multiraft/multiraft.go:1226 node 5 failed to send message to 1: connection is shut down
I1104 00:08:53.449719 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.449836 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.449914 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.450103 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.450328 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.454862 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:53.455611 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:53.456644 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:53.458985 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:53.461199 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:53.462863 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:53.618564 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
I1104 00:08:53.698596 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.698819 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.699247 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.705211 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:53.705828 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:53.706924 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:53.708954 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:53.711442 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:53.713489 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:53.721333 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
W1104 00:08:53.875024 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
W1104 00:08:53.876308 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
I1104 00:08:53.921293 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.921507 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.921730 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:53.928153 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:53.928974 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:53.930068 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:53.931339 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:53.932545 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:53.933825 45133 gossip/gossip.go:175 gossiping node descriptor node_id:4 address:<network_field:"localhost" address_field:"4" > attrs:<>
I1104 00:08:53.935797 45133 storage/store.go:1448 changing raft replica {4 4 2} for range 1
I1104 00:08:53.943015 45133 storage/store.go:1448 changing raft replica {2 2 3} for range 1
W1104 00:08:54.096556 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
W1104 00:08:54.097673 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
I1104 00:08:54.098950 45133 storage/store.go:1448 changing raft replica {3 3 4} for range 1
W1104 00:08:54.127327 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
W1104 00:08:54.133068 45133 multiraft/multiraft.go:1226 node 3 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
W1104 00:08:54.133523 45133 multiraft/multiraft.go:1226 node 4 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
W1104 00:08:54.142416 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
I1104 00:08:54.145600 45133 storage/store.go:1448 changing raft replica {2 2 3} for range 1
I1104 00:08:54.164318 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:54.165573 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:54.167296 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:54.168831 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:54.170540 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:54.172574 45133 gossip/gossip.go:175 gossiping node descriptor node_id:4 address:<network_field:"localhost" address_field:"4" > attrs:<>
I1104 00:08:54.174884 45133 storage/store.go:1448 changing raft replica {4 4 2} for range 1
I1104 00:08:54.184525 45133 storage/store.go:1448 changing raft replica {2 2 3} for range 1
W1104 00:08:54.336465 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
W1104 00:08:54.337991 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
I1104 00:08:54.338995 45133 storage/store.go:1448 changing raft replica {2 2 3} for range 1
I1104 00:08:54.344954 45133 storage/store.go:1448 changing raft replica {3 3 4} for range 1
W1104 00:08:54.363627 45133 multiraft/multiraft.go:1226 node 1 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
W1104 00:08:54.371926 45133 multiraft/multiraft.go:1226 node 4 failed to send message to 2: multiraft/transport.go:143: unknown peer 2
I1104 00:08:54.399550 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:54.399670 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:54.399747 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:54.399831 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:54.402524 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:54.402691 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:54.402886 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:54.403192 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:54.409833 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:54.410446 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:54.411404 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:54.412667 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:54.414115 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:54.415741 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:54.423637 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
I1104 00:08:55.082463 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:55.082778 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:55.083069 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:55.088873 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.089432 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:55.090487 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:55.092180 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:55.093824 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "m"
I1104 00:08:55.099216 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.099639 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.101007 45133 kv/local_sender.go:173 range not contained in one range: ["\x00\x00meta2\xff\xff","m\x00"), but have ["","m")
I1104 00:08:55.104623 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 2
W1104 00:08:55.104899 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:55.252097 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:55.252342 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:55.258152 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.258854 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:55.260343 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:55.262848 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "A020"
I1104 00:08:55.269402 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A020") at key "A019"
I1104 00:08:55.272413 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.272954 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.274185 45133 kv/local_sender.go:173 range not contained in one range: ["","A020\x00"), but have ["","A020")
I1104 00:08:55.279705 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A019") at key "A018"
I1104 00:08:55.284516 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.286196 45133 storage/replica_command.go:1119 range 3: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.287759 45133 kv/local_sender.go:173 range not contained in one range: ["","A019\x00"), but have ["","A019")
I1104 00:08:55.295140 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A018") at key "A017"
I1104 00:08:55.300425 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.301003 45133 storage/replica_command.go:1119 range 4: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.301662 45133 kv/local_sender.go:173 range not contained in one range: ["","A018\x00"), but have ["","A018")
I1104 00:08:55.309976 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A017") at key "A016"
I1104 00:08:55.314085 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.314853 45133 storage/replica_command.go:1119 range 5: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.317177 45133 kv/local_sender.go:173 range not contained in one range: ["","A017\x00"), but have ["","A017")
I1104 00:08:55.327716 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A016") at key "A015"
I1104 00:08:55.333107 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.333684 45133 storage/replica_command.go:1119 range 6: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.334323 45133 kv/local_sender.go:173 range not contained in one range: ["","A016\x00"), but have ["","A016")
I1104 00:08:55.344279 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A015") at key "A014"
I1104 00:08:55.349836 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.350796 45133 storage/replica_command.go:1119 range 7: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.354639 45133 kv/local_sender.go:173 range not contained in one range: ["","A015\x00"), but have ["","A015")
I1104 00:08:55.369399 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A014") at key "A013"
I1104 00:08:55.373252 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.374737 45133 storage/replica_command.go:1119 range 8: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.381698 45133 kv/local_sender.go:173 range not contained in one range: ["","A014\x00"), but have ["","A014")
I1104 00:08:55.394289 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A013") at key "A012"
I1104 00:08:55.399189 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.399889 45133 storage/replica_command.go:1119 range 9: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.402296 45133 kv/local_sender.go:173 range not contained in one range: ["","A013\x00"), but have ["","A013")
I1104 00:08:55.420853 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A012") at key "A011"
I1104 00:08:55.425652 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.426917 45133 storage/replica_command.go:1119 range 10: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.432135 45133 kv/local_sender.go:173 range not contained in one range: ["","A012\x00"), but have ["","A012")
I1104 00:08:55.445994 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A011") at key "A010"
I1104 00:08:55.452234 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.453008 45133 storage/replica_command.go:1119 range 11: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.456588 45133 kv/local_sender.go:173 range not contained in one range: ["","A011\x00"), but have ["","A011")
I1104 00:08:55.479490 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A010") at key "A009"
I1104 00:08:55.484237 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.485307 45133 storage/replica_command.go:1119 range 12: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.492471 45133 kv/local_sender.go:173 range not contained in one range: ["","A010\x00"), but have ["","A010")
I1104 00:08:55.521001 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A009") at key "A008"
I1104 00:08:55.528234 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.528757 45133 storage/replica_command.go:1119 range 13: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.530631 45133 kv/local_sender.go:173 range not contained in one range: ["","A009\x00"), but have ["","A009")
I1104 00:08:55.551995 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A008") at key "A007"
I1104 00:08:55.556881 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.557641 45133 storage/replica_command.go:1119 range 14: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.562955 45133 kv/local_sender.go:173 range not contained in one range: ["","A008\x00"), but have ["","A008")
I1104 00:08:55.578919 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A007") at key "A006"
I1104 00:08:55.583990 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.584755 45133 storage/replica_command.go:1119 range 15: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.587853 45133 kv/local_sender.go:173 range not contained in one range: ["","A007\x00"), but have ["","A007")
I1104 00:08:55.610771 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A006") at key "A005"
I1104 00:08:55.616130 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.616934 45133 storage/replica_command.go:1119 range 16: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.623475 45133 kv/local_sender.go:173 range not contained in one range: ["","A006\x00"), but have ["","A006")
I1104 00:08:55.638084 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A005") at key "A004"
I1104 00:08:55.643164 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.643825 45133 storage/replica_command.go:1119 range 17: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.646465 45133 kv/local_sender.go:173 range not contained in one range: ["","A005\x00"), but have ["","A005")
I1104 00:08:55.674785 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A004") at key "A003"
I1104 00:08:55.679759 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.681274 45133 storage/replica_command.go:1119 range 18: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.688062 45133 kv/local_sender.go:173 range not contained in one range: ["","A004\x00"), but have ["","A004")
I1104 00:08:55.703408 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A003") at key "A002"
I1104 00:08:55.713267 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.714049 45133 storage/replica_command.go:1119 range 19: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.716251 45133 kv/local_sender.go:173 range not contained in one range: ["","A003\x00"), but have ["","A003")
I1104 00:08:55.738384 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"A002") at key "A001"
I1104 00:08:55.742871 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.743569 45133 storage/replica_command.go:1119 range 20: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.748666 45133 kv/local_sender.go:173 range not contained in one range: ["","A002\x00"), but have ["","A002")
I1104 00:08:55.770847 45133 storage/replica_command.go:1191 initiating a split of range=2 ["A020"-"\xff\xff") at key "B000"
I1104 00:08:55.783717 45133 storage/replica_command.go:1191 initiating a split of range=22 ["B000"-"\xff\xff") at key "B001"
I1104 00:08:55.789253 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.789817 45133 storage/replica_command.go:1119 range 22: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.796358 45133 kv/local_sender.go:173 range not contained in one range: ["A020","B001\x00"), but have ["A020","B000")
W1104 00:08:55.801995 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B001" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:55.806154 45133 storage/replica_command.go:1191 initiating a split of range=23 ["B001"-"\xff\xff") at key "B002"
I1104 00:08:55.810611 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.811577 45133 storage/replica_command.go:1119 range 23: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.825586 45133 kv/local_sender.go:173 range not contained in one range: ["B000","B002\x00"), but have ["B000","B001")
W1104 00:08:55.831467 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B002" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:55.836896 45133 storage/replica_command.go:1191 initiating a split of range=24 ["B002"-"\xff\xff") at key "B003"
I1104 00:08:55.843660 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.844471 45133 storage/replica_command.go:1119 range 24: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.852027 45133 kv/local_sender.go:173 range not contained in one range: ["B001","B003\x00"), but have ["B001","B002")
W1104 00:08:55.856070 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B003" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:55.861624 45133 storage/replica_command.go:1191 initiating a split of range=25 ["B003"-"\xff\xff") at key "B004"
I1104 00:08:55.868544 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.869992 45133 storage/replica_command.go:1119 range 25: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.884090 45133 kv/local_sender.go:173 range not contained in one range: ["B002","B004\x00"), but have ["B002","B003")
W1104 00:08:55.890864 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B004" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:55.898138 45133 storage/replica_command.go:1191 initiating a split of range=26 ["B004"-"\xff\xff") at key "B005"
I1104 00:08:55.902826 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.903708 45133 storage/replica_command.go:1119 range 26: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.916048 45133 kv/local_sender.go:173 range not contained in one range: ["B003","B005\x00"), but have ["B003","B004")
W1104 00:08:55.923006 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B005" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:55.928931 45133 storage/replica_command.go:1191 initiating a split of range=27 ["B005"-"\xff\xff") at key "B006"
I1104 00:08:55.930601 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.931388 45133 storage/replica_command.go:1119 range 27: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:55.961618 45133 kv/local_sender.go:173 range not contained in one range: ["B004","B006\x00"), but have ["B004","B005")
W1104 00:08:55.968385 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B006" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:55.977395 45133 storage/replica_command.go:1191 initiating a split of range=28 ["B006"-"\xff\xff") at key "B007"
I1104 00:08:55.982452 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:55.983526 45133 storage/replica_command.go:1119 range 28: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.000993 45133 kv/local_sender.go:173 range not contained in one range: ["B005","B007\x00"), but have ["B005","B006")
W1104 00:08:56.005157 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B007" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.014624 45133 storage/replica_command.go:1191 initiating a split of range=29 ["B007"-"\xff\xff") at key "B008"
I1104 00:08:56.018234 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.019065 45133 storage/replica_command.go:1119 range 29: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.035633 45133 kv/local_sender.go:173 range not contained in one range: ["B006","B008\x00"), but have ["B006","B007")
W1104 00:08:56.039966 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B008" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.049375 45133 storage/replica_command.go:1191 initiating a split of range=30 ["B008"-"\xff\xff") at key "B009"
I1104 00:08:56.052800 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.054003 45133 storage/replica_command.go:1119 range 30: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.069024 45133 kv/local_sender.go:173 range not contained in one range: ["B007","B009\x00"), but have ["B007","B008")
W1104 00:08:56.075300 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B009" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.086312 45133 storage/replica_command.go:1191 initiating a split of range=31 ["B009"-"\xff\xff") at key "B010"
I1104 00:08:56.088273 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.089386 45133 storage/replica_command.go:1119 range 31: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.123323 45133 kv/local_sender.go:173 range not contained in one range: ["B008","B010\x00"), but have ["B008","B009")
W1104 00:08:56.129078 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B010" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.140758 45133 storage/replica_command.go:1191 initiating a split of range=32 ["B010"-"\xff\xff") at key "B011"
I1104 00:08:56.144250 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.145892 45133 storage/replica_command.go:1119 range 32: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.178025 45133 kv/local_sender.go:173 range not contained in one range: ["B009","B011\x00"), but have ["B009","B010")
W1104 00:08:56.184173 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B011" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.193573 45133 storage/replica_command.go:1191 initiating a split of range=33 ["B011"-"\xff\xff") at key "B012"
I1104 00:08:56.200402 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.201702 45133 storage/replica_command.go:1119 range 33: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.218327 45133 kv/local_sender.go:173 range not contained in one range: ["B010","B012\x00"), but have ["B010","B011")
W1104 00:08:56.227746 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B012" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.235376 45133 storage/replica_command.go:1191 initiating a split of range=34 ["B012"-"\xff\xff") at key "B013"
I1104 00:08:56.239933 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.240666 45133 storage/replica_command.go:1119 range 34: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.257735 45133 kv/local_sender.go:173 range not contained in one range: ["B011","B013\x00"), but have ["B011","B012")
W1104 00:08:56.264768 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B013" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.277572 45133 storage/replica_command.go:1191 initiating a split of range=35 ["B013"-"\xff\xff") at key "B014"
I1104 00:08:56.280733 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.281574 45133 storage/replica_command.go:1119 range 35: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.304810 45133 kv/local_sender.go:173 range not contained in one range: ["B012","B014\x00"), but have ["B012","B013")
W1104 00:08:56.311872 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B014" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.323538 45133 storage/replica_command.go:1191 initiating a split of range=36 ["B014"-"\xff\xff") at key "B015"
I1104 00:08:56.328291 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.329231 45133 storage/replica_command.go:1119 range 36: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.350480 45133 kv/local_sender.go:173 range not contained in one range: ["B013","B015\x00"), but have ["B013","B014")
W1104 00:08:56.355898 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B015" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.367583 45133 storage/replica_command.go:1191 initiating a split of range=37 ["B015"-"\xff\xff") at key "B016"
I1104 00:08:56.373764 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.374545 45133 storage/replica_command.go:1119 range 37: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.392919 45133 kv/local_sender.go:173 range not contained in one range: ["B014","B016\x00"), but have ["B014","B015")
W1104 00:08:56.398318 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B016" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.412928 45133 storage/replica_command.go:1191 initiating a split of range=38 ["B016"-"\xff\xff") at key "B017"
I1104 00:08:56.420548 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.421397 45133 storage/replica_command.go:1119 range 38: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.432628 45133 kv/local_sender.go:173 range not contained in one range: ["B015","B017\x00"), but have ["B015","B016")
W1104 00:08:56.441014 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B017" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.537398 45133 storage/replica_command.go:1191 initiating a split of range=39 ["B017"-"\xff\xff") at key "B018"
I1104 00:08:56.540647 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.541596 45133 storage/replica_command.go:1119 range 39: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.591952 45133 kv/local_sender.go:173 range not contained in one range: ["B016","B018\x00"), but have ["B016","B017")
W1104 00:08:56.606566 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B018" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.637603 45133 storage/replica_command.go:1191 initiating a split of range=40 ["B018"-"\xff\xff") at key "B019"
I1104 00:08:56.640824 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.641993 45133 storage/replica_command.go:1119 range 40: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
W1104 00:08:56.667843 45133 kv/local_sender.go:173 range not contained in one range: ["B017","B019\x00"), but have ["B017","B018")
W1104 00:08:56.674795 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2B019" "\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.689970 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1551
W1104 00:08:56.690151 45133 storage/replica.go:1548 unable to resolve intent: failed to send RPC: store is stopped
I1104 00:08:56.690881 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:56.696874 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.697727 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:56.698724 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:56.700006 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:56.701890 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:56.703394 45133 storage/replica_command.go:1191 initiating a split of range=1 [""-"\xff\xff") at key "b"
W1104 00:08:56.710307 45133 kv/local_sender.go:173 range not contained in one range: ["\x00\x00meta2\xff\xff","b\x00"), but have ["","b")
I1104 00:08:56.712259 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:56.712990 45133 storage/replica_command.go:1119 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:56.715458 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 2
W1104 00:08:56.715688 45133 storage/replica.go:1370 failed to resolve on inconsistent read: conflicting intents on ["\x00\x00meta2\xff\xff"]: resolved? false
I1104 00:08:56.724725 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 2
I1104 00:08:56.886669 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 2
I1104 00:08:56.903248 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 2
I1104 00:08:56.913302 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:57.063353 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:57.063801 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:57.064010 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:57.069846 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:57.070513 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:57.071633 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:57.072918 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:57.074483 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:57.078195 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:57.087839 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
I1104 00:08:57.240523 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
I1104 00:08:57.249006 45133 storage/store.go:1448 changing raft replica {3 3 4} for range 1
I1104 00:08:57.293353 45133 storage/store.go:1448 changing raft replica {3 3 4} for range 1
I1104 00:08:57.306413 45133 storage/store.go:1448 changing raft replica {3 3 5} for range 1
I1104 00:08:57.390128 45133 storage/store.go:1448 changing raft replica {3 3 5} for range 1
I1104 00:08:57.396400 45133 storage/store.go:1448 changing raft replica {3 3 6} for range 1
I1104 00:08:57.546734 45133 storage/store.go:1448 changing raft replica {3 3 6} for range 1
I1104 00:08:57.552553 45133 storage/store.go:1448 changing raft replica {3 3 7} for range 1
I1104 00:08:57.579591 45133 storage/store.go:1448 changing raft replica {3 3 7} for range 1
I1104 00:08:57.586954 45133 storage/store.go:1448 changing raft replica {3 3 8} for range 1
I1104 00:08:57.741886 45133 storage/store.go:1448 changing raft replica {3 3 8} for range 1
I1104 00:08:57.747593 45133 storage/store.go:1448 changing raft replica {3 3 9} for range 1
I1104 00:08:57.791917 45133 storage/store.go:1448 changing raft replica {3 3 9} for range 1
I1104 00:08:57.798635 45133 storage/store.go:1448 changing raft replica {3 3 10} for range 1
I1104 00:08:57.947519 45133 storage/store.go:1448 changing raft replica {3 3 10} for range 1
I1104 00:08:57.954947 45133 storage/store.go:1448 changing raft replica {3 3 11} for range 1
I1104 00:08:57.981334 45133 storage/store.go:1448 changing raft replica {3 3 11} for range 1
I1104 00:08:57.986948 45133 storage/store.go:1448 changing raft replica {3 3 12} for range 1
I1104 00:08:58.143916 45133 storage/store.go:1448 changing raft replica {3 3 12} for range 1
I1104 00:08:58.150011 45133 storage/store.go:1448 changing raft replica {3 3 13} for range 1
I1104 00:08:58.194044 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:58.194368 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:58.194567 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:58.199876 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:58.200486 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:58.201371 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:58.203587 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:58.205585 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:58.207453 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:58.215369 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
W1104 00:08:58.361504 45133 storage/store_pool.go:59 store 3 on node 3 is now considered offline
I1104 00:08:58.366574 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
E1104 00:08:58.369872 45133 storage/queue.go:368 failure processing replica range=1 [""-"\xff\xff") from replicate queue: storage/allocator.go:186: unable to allocate a target store; no candidates available
I1104 00:08:58.372262 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1532
W1104 00:08:58.372694 45133 storage/store_pool.go:59 store 1 on node 1 is now considered offline
W1104 00:08:58.372796 45133 storage/store_pool.go:59 store 2 on node 2 is now considered offline
I1104 00:08:58.375407 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:58.375807 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:58.376005 45133 storage/engine/rocksdb.go:132 closing in-memory rocksdb instance
I1104 00:08:58.381975 45133 multiraft/multiraft.go:936 node 1 campaigning because initial confstate is [1]
I1104 00:08:58.382566 45133 storage/replica_command.go:1119 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC +1.000s
I1104 00:08:58.384138 45133 gossip/gossip.go:175 gossiping node descriptor node_id:1 address:<network_field:"localhost" address_field:"1" > attrs:<>
I1104 00:08:58.385485 45133 gossip/gossip.go:175 gossiping node descriptor node_id:2 address:<network_field:"localhost" address_field:"2" > attrs:<>
I1104 00:08:58.386767 45133 gossip/gossip.go:175 gossiping node descriptor node_id:3 address:<network_field:"localhost" address_field:"3" > attrs:<>
I1104 00:08:58.388368 45133 gossip/gossip.go:175 gossiping node descriptor node_id:4 address:<network_field:"localhost" address_field:"4" > attrs:<>
I1104 00:08:58.390475 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:58.397996 45133 storage/store.go:1448 changing raft replica {3 3 3} for range 1
I1104 00:08:58.566365 45133 storage/replica_command.go:1119 range 1: new leader lease replica {2 2 2} 1970-01-01 00:00:01 +0000 UTC +1.000s
I1104 00:08:58.572317 45133 storage/store.go:1448 changing raft replica {4 4 4} for range 1
I1104 00:08:58.586477 45133 storage/store.go:1448 changing raft replica {2 2 2} for range 1
I1104 00:08:58.587151 45133 stopper.go:236 draining; tasks left:
1 storage/queue.go:303
I1104 00:08:58.595111 45133 stopper.go:236 draining; tasks left:
1 storage/replica.go:1532
==================
WARNING: DATA RACE
Read by goroutine 67:
github.com/cockroachdb/cockroach/util/tracer.(*Trace).epoch()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:122 +0x45
github.com/cockroachdb/cockroach/util/tracer.(*Trace).Epoch()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:118 +0x5f
github.com/cockroachdb/cockroach/storage.(*Replica).processRaftCommand()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/replica.go:939 +0x2a1
github.com/cockroachdb/cockroach/storage.(*Store).processRaft.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1518 +0xdd1
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x5f
Previous write by goroutine 84:
github.com/cockroachdb/cockroach/util/tracer.(*Trace).Finalize()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:154 +0x358
github.com/cockroachdb/cockroach/storage.(*Replica).resolveIntents.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/replica.go:1530 +0x627
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunAsyncTask.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:130 +0x65
Goroutine 67 (running) created at:
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x6f
github.com/cockroachdb/cockroach/storage.(*Store).processRaft()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1535 +0xb6
github.com/cockroachdb/cockroach/storage.(*Store).Start()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:519 +0x109a
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).addStore()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:404 +0xbb0
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).Start()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:228 +0x1961
github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1225 +0x2ac
testing.tRunner()
/Users/tamird/src/go1.5/src/testing/testing.go:456 +0xdc
Goroutine 84 (finished) created at:
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunAsyncTask()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:131 +0x270
github.com/cockroachdb/cockroach/storage.(*Replica).resolveIntents()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/replica.go:1532 +0x9f5
github.com/cockroachdb/cockroach/storage.(*Store).resolveWriteIntentError()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1392 +0x1224
github.com/cockroachdb/cockroach/storage.(*Store).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1233 +0x1006
github.com/cockroachdb/cockroach/kv.(*LocalSender).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/local_sender.go:153 +0x444
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).rpcSend.func2()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:306 +0x1eb
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunTask()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:115 +0x283
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).rpcSend()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:307 +0x57e
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).(github.com/cockroachdb/cockroach/storage_test.rpcSend)-fm()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:221 +0x115
github.com/cockroachdb/cockroach/kv.(*DistSender).sendRPC()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:371 +0x822
github.com/cockroachdb/cockroach/kv.(*DistSender).sendAttempt()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:442 +0x243
github.com/cockroachdb/cockroach/kv.(*DistSender).sendChunk.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:550 +0x4df
github.com/cockroachdb/cockroach/kv.(*DistSender).sendChunk()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:558 +0x156e
github.com/cockroachdb/cockroach/kv.(*DistSender).(github.com/cockroachdb/cockroach/kv.sendChunk)-fm()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:469 +0x64
github.com/cockroachdb/cockroach/kv.(*chunkingSender).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/batch.go:206 +0x395
github.com/cockroachdb/cockroach/kv.(*DistSender).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:469 +0x2c1
github.com/cockroachdb/cockroach/kv.(*TxnCoordSender).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:397 +0x1252
github.com/cockroachdb/cockroach/client.(*DB).send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/client/db.go:482 +0x289
github.com/cockroachdb/cockroach/client.(*DB).(github.com/cockroachdb/cockroach/client.send)-fm()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/client/db.go:452 +0x4b
github.com/cockroachdb/cockroach/client.sendAndFill()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/client/db.go:418 +0x7c
github.com/cockroachdb/cockroach/client.(*DB).RunWithResponse()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/client/db.go:452 +0xd2
github.com/cockroachdb/cockroach/storage_test.getRangeMetadata()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:588 +0x2f8
github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1262 +0xcaa
testing.tRunner()
/Users/tamird/src/go1.5/src/testing/testing.go:456 +0xdc
==================
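Note on the report above (and the closely related one that follows): both point at the same access pattern. The raft worker started in (*Store).processRaft is still calling Trace.Epoch on a command's trace while the async intent-resolution task spawned from (*Replica).resolveIntents via the stopper has already written to that same Trace (closing an epoch / calling Finalize), with nothing ordering the two goroutines. Below is a minimal, hypothetical Go sketch of that shape, not the actual util/tracer code; every type, field, and method name is made up for illustration. Running it under `go run -race` should produce a report of the same form.

// race_sketch.go: hypothetical illustration of the unsynchronized read/write
// on a shared trace object described in the reports above. None of these
// identifiers correspond to real cockroach code.
package main

import (
	"fmt"
	"sync"
)

// trace stands in for a mutable per-request trace shared between the
// raft-processing goroutine and an async intent-resolution goroutine.
type trace struct {
	finalized bool // written by Finalize, read by Epoch, with no lock
}

// Epoch mimics the read side (cf. (*Trace).Epoch in processRaftCommand).
func (t *trace) Epoch() bool {
	return t.finalized // racy read
}

// Finalize mimics the write side (cf. (*Trace).Finalize in resolveIntents).
func (t *trace) Finalize() {
	t.finalized = true // racy write
}

func main() {
	t := &trace{}
	var wg sync.WaitGroup
	wg.Add(2)

	// Goroutine A: the "raft worker" keeps reading trace state.
	go func() {
		defer wg.Done()
		for i := 0; i < 1000; i++ {
			_ = t.Epoch()
		}
	}()

	// Goroutine B: the "async intent resolution" task finalizes the trace.
	go func() {
		defer wg.Done()
		t.Finalize()
	}()

	wg.Wait()
	fmt.Println("done; -race flags the Epoch/Finalize overlap")
	// A fix in this shape would guard the field with a sync.Mutex, or make
	// sure Finalize can only run after the last Epoch call has happened.
}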
==================
WARNING: DATA RACE
Read by goroutine 67:
github.com/cockroachdb/cockroach/util/tracer.Trace.String()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:196 +0x98e
github.com/cockroachdb/cockroach/util/tracer.(*Trace).epoch()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:123 +0x8a
github.com/cockroachdb/cockroach/util/tracer.(*Trace).Epoch()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:118 +0x5f
github.com/cockroachdb/cockroach/storage.(*Replica).processRaftCommand()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/replica.go:939 +0x2a1
github.com/cockroachdb/cockroach/storage.(*Store).processRaft.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1518 +0xdd1
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x5f
Previous write by goroutine 84:
github.com/cockroachdb/cockroach/util/tracer.(*Trace).epoch.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:134 +0x2b3
github.com/cockroachdb/cockroach/storage.(*Replica).addWriteCmd()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/replica.go:865 +0x943
github.com/cockroachdb/cockroach/storage.(*Replica).resolveIntents.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/replica.go:1508 +0x45b
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunAsyncTask.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:130 +0x65
Goroutine 67 (running) created at:
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x6f
github.com/cockroachdb/cockroach/storage.(*Store).processRaft()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1535 +0xb6
github.com/cockroachdb/cockroach/storage.(*Store).Start()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:519 +0x109a
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).addStore()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:404 +0xbb0
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).Start()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:228 +0x1961
github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1225 +0x2ac
E1104 00:08:58.598274 45133 multiraft/transport.go:176 sending rpc failed: read tcp 127.0.0.1:59805->127.0.0.1:59800: read: connection reset by peer
testing.tRunner()
/Users/tamird/src/go1.5/src/testing/testing.go:456 +0xdc
Goroutine 84 (finished) created at:
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunAsyncTask()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:131 +0x270
github.com/cockroachdb/cockroach/storage.(*Replica).resolveIntents()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/replica.go:1532 +0x9f5
github.com/cockroachdb/cockroach/storage.(*Store).resolveWriteIntentError()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1392 +0x1224
github.com/cockroachdb/cockroach/storage.(*Store).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1233 +0x1006
github.com/cockroachdb/cockroach/kv.(*LocalSender).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/local_sender.go:153 +0x444
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).rpcSend.func2()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:306 +0x1eb
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunTask()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:115 +0x283
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).rpcSend()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:307 +0x57e
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).(github.com/cockroachdb/cockroach/storage_test.rpcSend)-fm()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:221 +0x115
github.com/cockroachdb/cockroach/kv.(*DistSender).sendRPC()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:371 +0x822
github.com/cockroachdb/cockroach/kv.(*DistSender).sendAttempt()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:442 +0x243
github.com/cockroachdb/cockroach/kv.(*DistSender).sendChunk.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:550 +0x4df
github.com/cockroachdb/cockroach/kv.(*DistSender).sendChunk()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:558 +0x156e
github.com/cockroachdb/cockroach/kv.(*DistSender).(github.com/cockroachdb/cockroach/kv.sendChunk)-fm()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:469 +0x64
github.com/cockroachdb/cockroach/kv.(*chunkingSender).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/batch.go:206 +0x395
github.com/cockroachdb/cockroach/kv.(*DistSender).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/dist_sender.go:469 +0x2c1
github.com/cockroachdb/cockroach/kv.(*TxnCoordSender).Send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/kv/txn_coord_sender.go:397 +0x1252
github.com/cockroachdb/cockroach/client.(*DB).send()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/client/db.go:482 +0x289
github.com/cockroachdb/cockroach/client.(*DB).(github.com/cockroachdb/cockroach/client.send)-fm()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/client/db.go:452 +0x4b
github.com/cockroachdb/cockroach/client.sendAndFill()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/client/db.go:418 +0x7c
github.com/cockroachdb/cockroach/client.(*DB).RunWithResponse()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/client/db.go:452 +0xd2
github.com/cockroachdb/cockroach/storage_test.getRangeMetadata()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:588 +0x2f8
github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1262 +0xcaa
testing.tRunner()
/Users/tamird/src/go1.5/src/testing/testing.go:456 +0xdc
==================
E1104 00:08:58.598424 45133 multiraft/transport.go:176 sending rpc failed: read tcp 127.0.0.1:59807->127.0.0.1:59803: read: connection reset by peer
panic: use of finalized Trace:
Name Origin Ts Dur Desc File
c1000000001.38 00:08:58.579862 5.96899ms command queue storage/replica.go:794
c1000000001.38 00:08:58.585875 9.431613ms raft storage/replica.go:850
goroutine 6969 [running]:
github.com/cockroachdb/cockroach/util/tracer.(*Trace).epoch(0xc820059880, 0x5133320, 0xe, 0xc820059880)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:123 +0x12b
github.com/cockroachdb/cockroach/util/tracer.(*Trace).Epoch(0xc820059880, 0x5133320, 0xe, 0x10)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/tracer/tracer.go:118 +0x60
github.com/cockroachdb/cockroach/storage.(*Replica).processRaftCommand(0xc82034e240, 0xc82033a350, 0x10, 0x17, 0x1, 0x200000002, 0x2, 0x3b9aca01, 0x3a, 0x3b9aca01, ...)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/replica.go:939 +0x2a2
github.com/cockroachdb/cockroach/storage.(*Store).processRaft.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1518 +0xdd2
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202aa5a0, 0xc820456710)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 1 [chan receive]:
testing.RunTests(0x535efa0, 0x56e2780, 0xc9, 0xc9, 0x5036601)
/Users/tamird/src/go1.5/src/testing/testing.go:562 +0xafa
testing.(*M).Run(0xc820727f10, 0x4cb4402076f1275)
/Users/tamird/src/go1.5/src/testing/testing.go:494 +0xe5
github.com/cockroachdb/cockroach/util/leaktest.TestMainWithLeakCheck(0xc820727f10)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/leaktest/leaktest.go:34 +0x2f
github.com/cockroachdb/cockroach/storage_test.TestMain(0xc820727f10)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/main_test.go:35 +0x2f
main.main()
github.com/cockroachdb/cockroach/storage/_test/_testmain.go:462 +0x20a
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/Users/tamird/src/go1.5/src/runtime/asm_amd64.s:1696 +0x1
goroutine 5 [chan receive]:
github.com/cockroachdb/cockroach/util/log.(*loggingT).flushDaemon(0x60c6b40)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:1029 +0x76
created by github.com/cockroachdb/cockroach/util/log.init.1
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:610 +0x117
goroutine 6921 [select]:
github.com/coreos/etcd/raft.(*multiNode).run(0xc82046b1a0)
/Users/tamird/src/go/src/github.com/coreos/etcd/raft/multinode.go:195 +0x2c7d
created by github.com/coreos/etcd/raft.StartMultiNode
/Users/tamird/src/go/src/github.com/coreos/etcd/raft/multinode.go:71 +0x338
goroutine 6964 [chan receive]:
github.com/cockroachdb/cockroach/storage/engine.(*RocksDB).Open.func1(0xc820254230)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/engine/rocksdb.go:119 +0x5b
created by github.com/cockroachdb/cockroach/storage/engine.(*RocksDB).Open
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/engine/rocksdb.go:120 +0x558
goroutine 6900 [chan receive]:
github.com/cockroachdb/cockroach/storage/engine.(*RocksDB).Open.func1(0xc82051a4b0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/engine/rocksdb.go:119 +0x5b
created by github.com/cockroachdb/cockroach/storage/engine.(*RocksDB).Open
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/engine/rocksdb.go:120 +0x558
goroutine 6981 [chan receive]:
github.com/cockroachdb/cockroach/storage/engine.(*RocksDB).Open.func1(0xc8202545a0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/engine/rocksdb.go:119 +0x5b
created by github.com/cockroachdb/cockroach/storage/engine.(*RocksDB).Open
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/engine/rocksdb.go:120 +0x558
goroutine 6902 [runnable]:
net.runtime_pollWait(0x6f2a638, 0x72, 0x0)
/Users/tamird/src/go1.5/src/runtime/netpoll.go:157 +0x63
net.(*pollDesc).Wait(0xc82017c4c0, 0x72, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/fd_poll_runtime.go:73 +0x56
net.(*pollDesc).WaitRead(0xc82017c4c0, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/fd_poll_runtime.go:78 +0x44
net.(*netFD).accept(0xc82017c460, 0x0, 0x6f64168, 0xc820461100)
/Users/tamird/src/go1.5/src/net/fd_unix.go:408 +0x2f6
net.(*TCPListener).AcceptTCP(0xc820148118, 0xc82038de20, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/tcpsock_posix.go:254 +0x77
net.(*TCPListener).Accept(0xc820148118, 0x0, 0x0, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/tcpsock_posix.go:264 +0x4b
net/http.(*Server).Serve(0xc82039bf80, 0x6f29e10, 0xc820148118, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/http/server.go:1887 +0xc4
github.com/cockroachdb/cockroach/rpc.(*Server).Serve.func2()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:267 +0x78
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82046a9c0, 0xc8207466c0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6896 [select]:
github.com/cockroachdb/cockroach/multiraft.(*writeTask).start.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/storage.go:199 +0x1031
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82046ad20, 0xc820528080)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6907 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).startGossip.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:584 +0x447
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82039b0e0, 0xc820746700)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6958 [chan receive]:
github.com/cockroachdb/cockroach/rpc.(*Server).sendResponses(0xc8200a0380, 0x6f2f5f8, 0xc82070de00, 0xc820384300)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:429 +0xb5
github.com/cockroachdb/cockroach/rpc.(*Server).ServeHTTP.func1(0xc8200a0380, 0x6f2f5f8, 0xc82070de00, 0xc820384300, 0xc8206e8920)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:198 +0x4d
created by github.com/cockroachdb/cockroach/rpc.(*Server).ServeHTTP
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:200 +0xb18
goroutine 7132 [select]:
github.com/cockroachdb/cockroach/multiraft.(*localRPCTransport).Send.func1(0xc8202b0000, 0xc82078a1b0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/transport.go:170 +0x26c
created by github.com/cockroachdb/cockroach/multiraft.(*localRPCTransport).Send
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/transport.go:178 +0x217
goroutine 6937 [runnable]:
sync.runtime_Semacquire(0xc820521e5c)
/Users/tamird/src/go1.5/src/runtime/sema.go:43 +0x26
sync.(*WaitGroup).Wait(0xc820521e50)
/Users/tamird/src/go1.5/src/sync/waitgroup.go:126 +0x118
github.com/cockroachdb/cockroach/rpc.(*Server).ServeHTTP(0xc820126600, 0x6f2f538, 0xc82022f1e0, 0xc82016a620)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:202 +0xb6e
net/http.serverHandler.ServeHTTP(0xc82046b8c0, 0x6f2f538, 0xc82022f1e0, 0xc82016a620)
/Users/tamird/src/go1.5/src/net/http/server.go:1862 +0x207
net/http.(*conn).serve(0xc82022f130)
/Users/tamird/src/go1.5/src/net/http/server.go:1361 +0x117d
created by net/http.(*Server).Serve
/Users/tamird/src/go1.5/src/net/http/server.go:1910 +0x465
goroutine 6926 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).startGossip.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:584 +0x447
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82046ad20, 0xc82000fb20)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6965 [select]:
github.com/coreos/etcd/raft.(*multiNode).run(0xc8202aaa20)
/Users/tamird/src/go/src/github.com/coreos/etcd/raft/multinode.go:195 +0x2c7d
created by github.com/coreos/etcd/raft.StartMultiNode
/Users/tamird/src/go/src/github.com/coreos/etcd/raft/multinode.go:71 +0x338
goroutine 6982 [select]:
github.com/coreos/etcd/raft.(*multiNode).run(0xc8202abaa0)
/Users/tamird/src/go/src/github.com/coreos/etcd/raft/multinode.go:195 +0x2c7d
created by github.com/coreos/etcd/raft.StartMultiNode
/Users/tamird/src/go/src/github.com/coreos/etcd/raft/multinode.go:71 +0x338
goroutine 6915 [select]:
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).Stop(0xc820262140)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:261 +0x318
github.com/cockroachdb/cockroach/storage_test.TestStoreRangeRebalance(0xc82008a630)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_raft_test.go:1279 +0xbee
testing.tRunner(0xc82008a630, 0x56e3848)
/Users/tamird/src/go1.5/src/testing/testing.go:456 +0xdd
created by testing.RunTests
/Users/tamird/src/go1.5/src/testing/testing.go:561 +0xaa4
goroutine 6968 [select]:
github.com/cockroachdb/cockroach/multiraft.(*state).start.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/multiraft.go:570 +0x1cf1
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202aa5a0, 0xc820456700)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6925 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).processRaft.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1466 +0x15db
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82046ad20, 0xc8202cd1c0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6986 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).processRaft.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1466 +0x15db
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202ab620, 0xc820456c80)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6920 [chan receive]:
github.com/cockroachdb/cockroach/storage/engine.(*RocksDB).Open.func1(0xc820428190)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/engine/rocksdb.go:119 +0x5b
created by github.com/cockroachdb/cockroach/storage/engine.(*RocksDB).Open
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/engine/rocksdb.go:120 +0x558
goroutine 7133 [runnable]:
sync.(*Mutex).Unlock(0x60c6b7c)
/Users/tamird/src/go1.5/src/sync/mutex.go:99
github.com/cockroachdb/cockroach/util/log.(*loggingT).outputLogEntry(0x60c6b40, 0x2, 0x568ff68, 0x16, 0xb0, 0xc820495e00, 0xc8200ea000)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/log/clog.go:791 +0x386
github.com/cockroachdb/cockroach/util/log.AddStructured(0x0, 0x0, 0xc800000002, 0x2, 0x51bd210, 0x16, 0xc820495ed0, 0x1, 0x1)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/log/structured.go:39 +0x176
github.com/cockroachdb/cockroach/util/log.logDepth(0x0, 0x0, 0x1, 0x2, 0x51bd210, 0x16, 0xc820495ed0, 0x1, 0x1)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/log/log.go:65 +0x8d
github.com/cockroachdb/cockroach/util/log.Errorf(0x51bd210, 0x16, 0xc820495ed0, 0x1, 0x1)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/log/log.go:139 +0x74
github.com/cockroachdb/cockroach/multiraft.(*localRPCTransport).Send.func1(0xc8202b00a0, 0xc82078a1b0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/transport.go:176 +0x1f7
created by github.com/cockroachdb/cockroach/multiraft.(*localRPCTransport).Send
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/transport.go:178 +0x217
goroutine 6971 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).startGossip.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:584 +0x447
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202aa5a0, 0xc820528420)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6966 [runnable]:
net.runtime_pollWait(0x6f2a4b8, 0x72, 0x0)
/Users/tamird/src/go1.5/src/runtime/netpoll.go:157 +0x63
net.(*pollDesc).Wait(0xc8200583e0, 0x72, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/fd_poll_runtime.go:73 +0x56
net.(*pollDesc).WaitRead(0xc8200583e0, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/fd_poll_runtime.go:78 +0x44
net.(*netFD).accept(0xc820058380, 0x0, 0x6f64168, 0xc820746b20)
/Users/tamird/src/go1.5/src/net/fd_unix.go:408 +0x2f6
net.(*TCPListener).AcceptTCP(0xc82002c040, 0xc82003be20, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/tcpsock_posix.go:254 +0x77
net.(*TCPListener).Accept(0xc82002c040, 0x0, 0x0, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/tcpsock_posix.go:264 +0x4b
net/http.(*Server).Serve(0xc8202ab140, 0x6f29e10, 0xc82002c040, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/http/server.go:1887 +0xc4
github.com/cockroachdb/cockroach/rpc.(*Server).Serve.func2()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:267 +0x78
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82046a9c0, 0xc820528400)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 7002 [runnable]:
sync.runtime_Semacquire(0xc8204cec0c)
/Users/tamird/src/go1.5/src/runtime/sema.go:43 +0x26
sync.(*WaitGroup).Wait(0xc8204cec00)
/Users/tamird/src/go1.5/src/sync/waitgroup.go:126 +0x118
github.com/cockroachdb/cockroach/rpc.(*Server).ServeHTTP(0xc820172400, 0x6f2f538, 0xc8200ea420, 0xc820764000)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:202 +0xb6e
net/http.serverHandler.ServeHTTP(0xc8202ab140, 0x6f2f538, 0xc8200ea420, 0xc820764000)
/Users/tamird/src/go1.5/src/net/http/server.go:1862 +0x207
net/http.(*conn).serve(0xc8200ea370)
/Users/tamird/src/go1.5/src/net/http/server.go:1361 +0x117d
created by net/http.(*Server).Serve
/Users/tamird/src/go1.5/src/net/http/server.go:1910 +0x465
goroutine 6905 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).processRaft.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:1466 +0x15db
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82039b0e0, 0xc8204ce9d0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6934 [select]:
github.com/cockroachdb/cockroach/multiraft.(*writeTask).start.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/storage.go:199 +0x1031
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82039b0e0, 0xc82022a9e0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6972 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).startGossip.func2()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:603 +0x447
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202aa5a0, 0xc820528440)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 7131 [runnable]:
github.com/cockroachdb/cockroach/multiraft.(*localRPCTransport).Send.func1(0xc8202acfa0, 0xc82078a1b0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/transport.go:170 +0x26c
created by github.com/cockroachdb/cockroach/multiraft.(*localRPCTransport).Send
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/transport.go:178 +0x217
goroutine 6924 [select]:
github.com/cockroachdb/cockroach/multiraft.(*state).start.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/multiraft.go:570 +0x1cf1
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82046ad20, 0xc8202cd1b0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6904 [select]:
github.com/cockroachdb/cockroach/multiraft.(*state).start.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/multiraft.go:570 +0x1cf1
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82039b0e0, 0xc8204ce9c0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6985 [select]:
github.com/cockroachdb/cockroach/multiraft.(*state).start.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/multiraft.go:570 +0x1cf1
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202ab620, 0xc820456c70)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6908 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).startGossip.func2()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:603 +0x447
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82039b0e0, 0xc820746720)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6989 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).startGossip.func2()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:603 +0x447
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202ab620, 0xc820528860)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6927 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).startGossip.func2()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:603 +0x447
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc82046ad20, 0xc82000fc00)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 7008 [IO wait]:
net.runtime_pollWait(0x6f2adb8, 0x72, 0x0)
/Users/tamird/src/go1.5/src/runtime/netpoll.go:157 +0x63
net.(*pollDesc).Wait(0xc82015eca0, 0x72, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/fd_poll_runtime.go:73 +0x56
net.(*pollDesc).WaitRead(0xc82015eca0, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/fd_poll_runtime.go:78 +0x44
net.(*netFD).Read(0xc82015ec40, 0xc820720000, 0x1000, 0x1000, 0x0, 0x669f050, 0xc820010220)
/Users/tamird/src/go1.5/src/net/fd_unix.go:232 +0x27b
net.(*conn).Read(0xc820148258, 0xc820720000, 0x1000, 0x1000, 0x6f2f7a8, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/net.go:172 +0x121
net.(*TCPConn).Read(0xc820148258, 0xc820720000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
<autogenerated>:74 +0x7d
bufio.(*Reader).fill(0xc820438f00)
/Users/tamird/src/go1.5/src/bufio/bufio.go:97 +0x365
bufio.(*Reader).ReadByte(0xc820438f00, 0x434e627, 0x0, 0x0)
/Users/tamird/src/go1.5/src/bufio/bufio.go:229 +0x153
encoding/binary.ReadUvarint(0x6f2f670, 0xc820438f00, 0xc82028bbe8, 0x0, 0x0)
/Users/tamird/src/go1.5/src/encoding/binary/varint.go:110 +0x60
github.com/cockroachdb/cockroach/rpc/codec.(*baseConn).recvProto(0xc8206ca180, 0x6fa8fd0, 0xc8206ca2b8, 0xc800000000, 0x535d6f0, 0x0, 0x0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/codec/conn.go:88 +0x9e
github.com/cockroachdb/cockroach/rpc/codec.(*clientCodec).readResponseHeader(0xc8206ca180, 0xc8206ca2b8, 0x0, 0x0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/codec/client.go:155 +0x8e
github.com/cockroachdb/cockroach/rpc/codec.(*clientCodec).ReadResponseHeader(0xc8206ca180, 0xc8206ba150, 0x0, 0x0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/codec/client.go:80 +0x59
net/rpc.(*Client).input(0xc820438f60)
/Users/tamird/src/go1.5/src/net/rpc/client.go:109 +0x177
created by net/rpc.NewClientWithCodec
/Users/tamird/src/go1.5/src/net/rpc/client.go:201 +0x126
goroutine 6988 [select]:
github.com/cockroachdb/cockroach/storage.(*Store).startGossip.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/store.go:584 +0x447
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202ab620, 0xc820528840)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6957 [runnable]:
net.runtime_pollWait(0x6f2a3f8, 0x72, 0x0)
/Users/tamird/src/go1.5/src/runtime/netpoll.go:157 +0x63
net.(*pollDesc).Wait(0xc82015ed10, 0x72, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/fd_poll_runtime.go:73 +0x56
net.(*pollDesc).WaitRead(0xc82015ed10, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/fd_poll_runtime.go:78 +0x44
net.(*netFD).Read(0xc82015ecb0, 0xc82071e000, 0x1000, 0x1000, 0x0, 0x669f050, 0xc820010220)
/Users/tamird/src/go1.5/src/net/fd_unix.go:232 +0x27b
net.(*conn).Read(0xc82011e338, 0xc82071e000, 0x1000, 0x1000, 0xc82005a000, 0x0, 0x0)
/Users/tamird/src/go1.5/src/net/net.go:172 +0x121
net.(*TCPConn).Read(0xc82011e338, 0xc82071e000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
<autogenerated>:74 +0x7d
bufio.(*Reader).fill(0xc820385920)
/Users/tamird/src/go1.5/src/bufio/bufio.go:97 +0x365
bufio.(*Reader).ReadByte(0xc820385920, 0x433e2a5, 0x0, 0x0)
/Users/tamird/src/go1.5/src/bufio/bufio.go:229 +0x153
encoding/binary.ReadUvarint(0x6f2f670, 0xc820385920, 0x0, 0x0, 0x0)
/Users/tamird/src/go1.5/src/encoding/binary/varint.go:110 +0x60
github.com/cockroachdb/cockroach/rpc/codec.(*baseConn).recvProto(0xc82070de00, 0x6f2f638, 0xc82070df58, 0x0, 0x535d6f0, 0x0, 0x0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/codec/conn.go:88 +0x9e
github.com/cockroachdb/cockroach/rpc/codec.(*serverCodec).readRequestHeader(0xc82070de00, 0xc820385920, 0xc82070df58, 0x0, 0x0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/codec/server.go:173 +0x91
github.com/cockroachdb/cockroach/rpc/codec.(*serverCodec).ReadRequestHeader(0xc82070de00, 0xc8203432c0, 0x0, 0x0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/codec/server.go:60 +0x8c
github.com/cockroachdb/cockroach/rpc.(*Server).readRequest(0xc8200a0380, 0x6f2f5f8, 0xc82070de00, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:406 +0xe7
github.com/cockroachdb/cockroach/rpc.(*Server).readRequests(0xc8200a0380, 0x6f2f5f8, 0xc82070de00, 0xc820461140, 0xc820384300)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:364 +0xfa
github.com/cockroachdb/cockroach/rpc.(*Server).ServeHTTP(0xc8200a0380, 0x6f2f538, 0xc8200bb290, 0xc8200b8620)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/rpc/server.go:201 +0xb5d
net/http.serverHandler.ServeHTTP(0xc82039bf80, 0x6f2f538, 0xc8200bb290, 0xc8200b8620)
/Users/tamird/src/go1.5/src/net/http/server.go:1862 +0x207
net/http.(*conn).serve(0xc8200bb1e0)
/Users/tamird/src/go1.5/src/net/http/server.go:1361 +0x117d
created by net/http.(*Server).Serve
/Users/tamird/src/go1.5/src/net/http/server.go:1910 +0x465
goroutine 7112 [semacquire]:
sync.runtime_Semacquire(0xc82046a9dc)
/Users/tamird/src/go1.5/src/runtime/sema.go:43 +0x26
sync.(*WaitGroup).Wait(0xc82046a9d0)
/Users/tamird/src/go1.5/src/sync/waitgroup.go:126 +0x118
github.com/cockroachdb/cockroach/util/stop.(*Stopper).Stop(0xc82046a9c0)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:199 +0x70
github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).Stop.func1(0xc820262140, 0xc82042f740)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:252 +0x44e
created by github.com/cockroachdb/cockroach/storage_test.(*multiTestContext).Stop
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/storage/client_test.go:259 +0x8c
goroutine 6933 [select]:
github.com/cockroachdb/cockroach/multiraft.(*writeTask).start.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/storage.go:199 +0x1031
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202aa5a0, 0xc82022a940)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6951 [select]:
github.com/cockroachdb/cockroach/multiraft.(*writeTask).start.func1()
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/multiraft/storage.go:199 +0x1031
github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker.func1(0xc8202ab620, 0xc82000fe20)
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:88 +0x60
created by github.com/cockroachdb/cockroach/util/stop.(*Stopper).RunWorker
/Users/tamird/src/go/src/github.com/cockroachdb/cockroach/util/stop/stopper.go:89 +0x70
goroutine 6901 [select]:
github.com/coreos/etcd/raft.(*multiNode).run(0xc82039b620)
/Users/tamird/src/go/src/github.com/coreos/etcd/raft/multinode.go:195 +0x2c7d
created by github.com/coreos/etcd/raft.StartMultiNode
/Users/tamird/src/go/src/github.com/coreos/etcd/raft/multinode.go:71 +0x338
FAIL github.com/cockroachdb/cockroach/storage 15.049s
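
Annotation (not part of the test output): the race report and the "panic: use of finalized Trace" above both point at a tracer.Trace that is shared between the goroutine that proposed the write (which finalizes the trace once addWriteCmd returns) and the Raft apply goroutine in processRaftCommand (which still calls Epoch on it). The following is a minimal, hypothetical Go sketch of that failure mode under the race detector; the type and method names (miniTrace, Epoch, Finalize) are illustrative only and are not CockroachDB's actual tracer API.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // miniTrace stands in for a trace object shared across goroutines
    // without synchronization.
    type miniTrace struct {
        finalized bool
        events    []string
    }

    // Epoch records an event; it panics if the trace was already finalized,
    // mirroring the "use of finalized Trace" panic in the log above.
    func (t *miniTrace) Epoch(name string) {
        if t.finalized {
            panic("use of finalized Trace")
        }
        t.events = append(t.events, name)
    }

    // Finalize marks the trace as done; a concurrent Epoch call races with
    // this unsynchronized write.
    func (t *miniTrace) Finalize() {
        t.finalized = true
    }

    func main() {
        t := &miniTrace{}
        var wg sync.WaitGroup
        wg.Add(2)

        // Goroutine A: the proposing side finishes its command and
        // finalizes the trace.
        go func() {
            defer wg.Done()
            t.Epoch("raft")
            t.Finalize()
        }()

        // Goroutine B: the apply loop still holds the same trace and opens
        // another epoch. Under `go run -race` this is flagged as a data
        // race, and it may panic once Finalize has already run.
        go func() {
            defer wg.Done()
            time.Sleep(time.Millisecond)
            t.Epoch("applying batch")
        }()

        wg.Wait()
        fmt.Println(t.events)
    }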