Node shutdown, closing shard:
[2010-10-13 22:40:01,533][DEBUG][index.shard.service ] [DM-ADSEARCHD102.dev.local] [cbsmw_20101012154845][0] state: [STARTED]->[CLOSED]
Starting back up (a typo in the config file changed the node name, so the restarted node appears as "Box IV" instead of its configured name):
[2010-10-13 22:41:45,613][INFO ][node ] [Box IV] {elasticsearch/0.11.0}[3389]: initializing ...
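For reference, this is roughly how the node name could be pinned when embedding a node through the Java API, so a hand-edited config typo cannot silently change the node's identity. This is only a sketch: node.name normally lives in elasticsearch.yml, the cluster name "dev" is an assumption taken from the data path in the logs, and the exact package locations (NodeBuilder, ImmutableSettings) shifted across 0.x releases.

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.node.Node;
import static org.elasticsearch.node.NodeBuilder.nodeBuilder;

public class NamedNode {
    public static void main(String[] args) {
        // Pin the node name explicitly instead of relying on a hand-edited config file.
        Node node = nodeBuilder()
                .settings(ImmutableSettings.settingsBuilder()
                        .put("node.name", "DM-ADSEARCHD102.dev.local")
                        .put("cluster.name", "dev"))   // assumed cluster name
                .node();                               // builds and starts the node
        Client client = node.client();
        // ... use the client ...
        client.close();
        node.close();
    }
}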
Recovering index on bad node:
[2010-10-13 22:41:53,799][DEBUG][indices.cluster ] [Box IV] [cbsmw_20101012154845] creating index
[2010-10-13 22:41:53,799][DEBUG][indices ] [Box IV] creating Index [cbsmw_20101012154845], shards [1]/[1]
[2010-10-13 22:41:53,860][DEBUG][index.mapper ] [Box IV] [cbsmw_20101012154845] using dynamic[false], default mapping: location[null] and source[{
"_default_" : {
}
}]
[2010-10-13 22:42:06,308][DEBUG][indices.cluster ] [Box IV] [cbsmw_20101012154845][0] creating shard
[2010-10-13 22:42:06,308][DEBUG][index.service ] [Box IV] [cbsmw_20101012154845] creating shard_id [0]
[2010-10-13 22:42:06,317][DEBUG][index.store.fs ] [Box IV] [cbsmw_20101012154845][0] using [nio_fs] store with path [/var/opt/elasticsearch/dev/nodes/0/indices/cbsmw_20101012154845/0/index]
[2010-10-13 22:42:06,317][DEBUG][index.deletionpolicy ] [Box IV] [cbsmw_20101012154845][0] Using [keep_only_last] deletion policy
[2010-10-13 22:42:06,317][DEBUG][index.merge.policy ] [Box IV] [cbsmw_20101012154845][0] using [log_bytes_size] merge policy with merge_factor[10], min_merge_size[1.5mb], max_merge_size[8.5E9gb], max_merge_docs[2147483647] use_compound_file[false], calibrate_size_by_deletes[true]
[2010-10-13 22:42:06,317][DEBUG][index.merge.scheduler ] [Box IV] [cbsmw_20101012154845][0] using [concurrent] merge scheduler with max_thread_count[1]
[2010-10-13 22:42:06,317][DEBUG][index.shard.service ] [Box IV] [cbsmw_20101012154845][0] state: [CREATED]
[2010-10-13 22:42:06,320][DEBUG][indices.memory ] [Box IV] recalculating shard indexing buffer (reason=created_shard[cbsmw_20101012154845][0]), total is [2.3gb] with [31] shards, each shard set to [79.1mb]
[2010-10-13 22:42:08,305][DEBUG][index.shard.service ] [Box IV] [cbsmw_20101012154845][0] state: [CREATED]->[RECOVERING]
[2010-10-13 22:42:47,763][DEBUG][index.engine.robin ] [Box IV] [cbsmw_20101012154845][0] Starting engine with ram_buffer_size[48.1mb], refresh_interval[1s]
[2010-10-13 22:42:55,003][DEBUG][index.shard.service ] [Box IV] [cbsmw_20101012154845][0] state: [RECOVERING]->[STARTED]
[2010-10-13 22:42:55,086][DEBUG][index.engine.robin ] [Box IV] [index07_20101012154848][0] Starting engine with ram_buffer_size[48.1mb], refresh_interval[1s]
[2010-10-13 22:42:55,087][DEBUG][index.shard.recovery ] [Box IV] [cbsmw_20101012154845][0] recovery completed from [dm-adsearchd103.dev.local][b70ec27a-7371-4680-8905-a24607a364ce][inet[/10.2.20.164:9301]], took[46.7s]
phase1: recovered_files [126] with total_size of [123.3mb], took [39.4s], throttling_wait [0s]
: reusing_files [11] with total_size of [2.1kb]
phase2: recovered [1249] transaction log operations, took [7.2s]
phase3: recovered [0] transaction log operations, took [83ms]
[2010-10-13 22:42:55,087][DEBUG][cluster.action.shard ] [Box IV] sending shard started for [cbsmw_20101012154845][0], node[61c79955-0c93-4884-a06f-c47dd007315a], [R], s[INITIALIZING], reason [after recovery (replica) from node [[dm-adsearchd103.dev.local][b70ec27a-7371-4680-8905-a24607a364ce][inet[/10.2.20.164:9301]]]]
Good node that provided index:
[2010-10-13 22:41:47,142][DEBUG][transport.netty ] [dm-adsearchd103.dev.local] Connected to node [[Box IV][61c79955-0c93-4884-a06f-c47dd007315a][inet[/10.2.20.160:9300]]]
Similar messages for all other indexes:
[2010-10-13 22:41:51,507][DEBUG][gateway.blobstore ] [dm-adsearchd103.dev.local] [cbsmw_20101012154845][0], node[null], [R], s[UNASSIGNED]: failures when trying to list stores on nodes:
-> org.elasticsearch.action.FailedNodeException: Failed node [61c79955-0c93-4884-a06f-c47dd007315a]; org.elasticsearch.transport.RemoteTransportException: [Box IV][inet[/10.2.20.160:9300]][/cluster/nodes/indices/shard/store/node]; org.elasticsearch.indices.IndexMissingException: [cbsmw_20101012154845] missing
[2010-10-13 22:42:06,259][DEBUG][gateway.blobstore ] [dm-adsearchd103.dev.local] [cbsmw_20101012154845][0]: allocating [[cbsmw_20101012154845][0], node[null], [R], s[UNASSIGNED]] to [[Box IV][61c79955-0c93-4884-a06f-c47dd007315a][inet[/10.2.20.160:9300]]] in order to reuse its unallocated persistent store with total_size [2.1kb]
[2010-10-13 22:42:55,074][DEBUG][cluster.action.shard ] [dm-adsearchd103.dev.local] received shard started for [cbsmw_20101012154845][0], node[61c79955-0c93-4884-a06f-c47dd007315a], [R], s[INITIALIZING], reason [after recovery (replica) from node [[dm-adsearchd103.dev.local][b70ec27a-7371-4680-8905-a24607a364ce][inet[/10.2.20.164:9301]]]]
[2010-10-13 22:42:55,074][DEBUG][cluster.service ] [dm-adsearchd103.dev.local] processing [shard-started ([cbsmw_20101012154845][0], node[61c79955-0c93-4884-a06f-c47dd007315a], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[dm-adsearchd103.dev.local][b70ec27a-7371-4680-8905-a24607a364ce][inet[/10.2.20.164:9301]]]]]: execute
[2010-10-13 22:42:55,074][DEBUG][cluster.action.shard ] [dm-adsearchd103.dev.local] applying started shard [cbsmw_20101012154845][0], node[61c79955-0c93-4884-a06f-c47dd007315a], [R], s[INITIALIZING], reason [after recovery (replica) from node [[dm-adsearchd103.dev.local][b70ec27a-7371-4680-8905-a24607a364ce][inet[/10.2.20.164:9301]]]]
[2010-10-13 22:42:55,075][DEBUG][cluster.service ] [dm-adsearchd103.dev.local] cluster state updated, version [418], source [shard-started ([cbsmw_20101012154845][0], node[61c79955-0c93-4884-a06f-c47dd007315a], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[dm-adsearchd103.dev.local][b70ec27a-7371-4680-8905-a24607a364ce][inet[/10.2.20.164:9301]]]]]
[2010-10-13 22:42:55,086][DEBUG][cluster.service ] [dm-adsearchd103.dev.local] processing [shard-started ([cbsmw_20101012154845][0], node[61c79955-0c93-4884-a06f-c47dd007315a], [R], s[INITIALIZING]), reason [after recovery (replica) from node [[dm-adsearchd103.dev.local][b70ec27a-7371-4680-8905-a24607a364ce][inet[/10.2.20.164:9301]]]]]: done applying updated cluster_state
This exception is reported on my Node client (a rough sketch of the equivalent query follows the trace):
2010-10-14 07:13:53,052 DEBUG > [dm-adsearchd102.dev.local-essearcherserver] [1098] Failed to execute query phase (New I/O client worker #1-10)
org.elasticsearch.transport.RemoteTransportException: [dm-adsearchd103.dev.local][inet[/10.2.20.164:9300]][search/phase/query/id]
Caused by: org.elasticsearch.search.query.QueryPhaseExecutionException: [cbsmw_20101012154845][0]: query[feedid:100],from[0],size[100],sort[<custom:"__documentdate": org.elasticsearch.index.field.data.FieldData$Type$4$1@46b0563c>!]: Query Failed [Failed to execute main query]
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:132)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:199)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:390)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:381)
at org.elasticsearch.transport.netty.MessageChannelHandler$3.run(MessageChannelHandler.java:195)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 712
at org.apache.lucene.util.BitVector.get(BitVector.java:104)
at org.apache.lucene.index.SegmentTermDocs.next(SegmentTermDocs.java:127)
at org.elasticsearch.index.field.data.support.FieldDataLoader.load(FieldDataLoader.java:56)
at org.elasticsearch.index.field.data.longs.LongFieldData.load(LongFieldData.java:129)
at org.elasticsearch.index.field.data.FieldData.load(FieldData.java:225)
at org.elasticsearch.index.cache.field.data.support.AbstractConcurrentMapFieldDataCache.cache(AbstractConcurrentMapFieldDataCache.java:96)
at org.elasticsearch.index.cache.field.data.support.AbstractConcurrentMapFieldDataCache.cache(AbstractConcurrentMapFieldDataCache.java:73)
at org.elasticsearch.index.field.data.support.NumericFieldDataComparator.setNextReader(NumericFieldDataComparator.java:49)
at org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.setNextReader(TopFieldCollector.java:96)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:209)
at org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:125)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:199)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:177)
at org.apache.lucene.search.Searcher.search(Searcher.java:49)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:126)
... 7 more
2010-10-14 07:13:53,458 ERROR >
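For context, the failing request in the trace above (query[feedid:100], from[0], size[100], sorted descending on __documentdate) could be issued from the Java Node client roughly as shown below. This is only a sketch: the term query is reconstructed from the Lucene-rendered query string in the log, and the builder methods and package locations (QueryBuilders, SortOrder) are assumed to match what 0.11 exposed.

import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.sort.SortOrder;

public class FeedSearch {
    // Roughly the request that triggers the ArrayIndexOutOfBoundsException above:
    // the failure happens while loading field data for the __documentdate sort.
    public static SearchResponse search(Client client) {
        return client.prepareSearch("cbsmw_20101012154845")
                .setQuery(QueryBuilders.termQuery("feedid", 100))  // rendered as feedid:100 in the log
                .setFrom(0)
                .setSize(100)
                .addSort("__documentdate", SortOrder.DESC)         // sort whose field data load fails
                .execute()
                .actionGet();
    }
}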