
wyukawa / gist:8094065
Created December 23, 2013 09:30
HBase error: failed log splitting while processing M_SERVER_SHUTDOWN
2013-12-23 17:54:37,272 ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for ..., will retry
at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:136)
at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: error or interrupted while splitting logs in [hdfs://...-splitting] Task = installed = 1 done = 0 error = 1
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:299)
wyukawa / gist:8108213
Created December 24, 2013 02:55
Hive error: HDFS LeaseExpiredException (no lease on file)
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on … File does not exist. Holder DFSClient_attempt_… does not have any open files
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1999)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1990)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1899)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
wyukawa / gist:8131285
Created December 26, 2013 08:33
Hive OutOfMemoryError (Java heap space) while decompressing an RCFile
2013-12-26 17:23:08,666 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.hive.common.io.NonSyncByteArrayOutputStream.enLargeBuffer(NonSyncByteArrayOutputStream.java:77)
at org.apache.hadoop.hive.common.io.NonSyncByteArrayOutputStream.write(NonSyncByteArrayOutputStream.java:55)
at org.apache.hadoop.hive.ql.io.NonSyncDataOutputBuffer.write(NonSyncDataOutputBuffer.java:66)
at org.apache.hadoop.hive.ql.io.RCFile$ValueBuffer$LazyDecompressionCallbackImpl.decompress(RCFile.java:578)
at org.apache.hadoop.hive.serde2.columnar.BytesRefWritable.lazyDecompress(BytesRefWritable.java:97)
at org.apache.hadoop.hive.serde2.columnar.BytesRefWritable.getData(BytesRefWritable.java:120)
at org.apache.hadoop.hive.serde2.columnar.ColumnarStructBase$FieldInfo.uncheckedGetField(ColumnarStructBase.java:98)
at org.apache.hadoop.hive.serde2.columnar.ColumnarStructBase.getField(ColumnarStructBase.java:179)
at org.apache.hadoop.hive.serde2.objectinspecto
wyukawa / gist:8172948
Created December 29, 2013 18:01
HDFS error: Could not obtain block
Caused by: java.io.IOException: Could not obtain block: …
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2460)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2252)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2415)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:205)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:169)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:176)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:43)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
wyukawa / gist:8727993
Created January 31, 2014 07:42
shib error: Node.js RangeError (length > kMaxLength)
buffer.js:194
this.parent = new SlowBuffer(this.length);
^
RangeError: length > kMaxLength
at new Buffer (buffer.js:194:21)
at fs.js:220:16
at Object.oncomplete (fs.js:107:15)
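This RangeError is Node.js refusing to allocate a single Buffer larger than kMaxLength when a whole file is read in one call. The general remedy is to process the file in bounded chunks rather than one allocation; a minimal sketch of that streaming pattern (written in Python here purely for illustration — the chunk size and demo file are made up):

```python
import os
import tempfile

CHUNK_SIZE = 64 * 1024  # illustrative: read 64 KiB at a time instead of the whole file

def stream_size(path):
    """Read a file chunk by chunk so memory use stays bounded; return total bytes read."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            total += len(chunk)
    return total

# Demo on a small temporary file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 200_000)
    path = tmp.name

print(stream_size(path))  # total bytes read, streamed in chunks
os.remove(path)
```

The same idea in Node.js would be `fs.createReadStream` instead of `fs.readFile`, so no single Buffer ever has to hold the entire file.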
wyukawa / gist:9582670
Created March 16, 2014 12:43
Parsing jstat -gcutil output in Ruby
# Run jstat against the JVM with pid 1000 and parse its single-sample
# -gcutil output (one header line, one value line) into a Hash.
io = IO.popen("jstat -gcutil 1000", "r")
lines = io.readlines
headers = lines[0].split
values = lines[1].split
h = {}
headers.each_with_index { |header, i| h[header] = values[i] }
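The same header-to-value mapping as the Ruby snippet above can be sketched in Python. To keep it self-contained, this version parses a hard-coded two-line sample in the shape `jstat -gcutil <pid>` produces (the numbers are invented for illustration):

```python
# Sample output in the shape `jstat -gcutil <pid>` prints:
# one header line followed by one line of values.
SAMPLE = """\
  S0     S1     E      O      P     YGC     YGCT    FGC    FGCT     GCT
  0.00  63.81  23.48   1.76  77.61      4    0.017     0    0.000    0.017
"""

def parse_gcutil(text):
    """Zip the header row and value row of jstat -gcutil output into a dict."""
    header_line, value_line = text.splitlines()[:2]
    return dict(zip(header_line.split(), value_line.split()))

stats = parse_gcutil(SAMPLE)
print(stats["YGC"])  # young-generation GC count from the sample
```

To run it against a live JVM, the SAMPLE string would be replaced with the output of `subprocess.run(["jstat", "-gcutil", pid], ...)`.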
wyukawa / gist:11338420
Last active August 29, 2015 14:00
jstack output from when a DataNode became a dead node
"org.apache.hadoop.hdfs.server.datanode.DataXceiver@2df8d003" daemon prio=10 tid=0x00002aaabc048000 nid=0x5996 waiting for monitor entry [0x0000000076726000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.hadoop.hdfs.server.datanode.FSDataset.getVisibleLength(FSDataset.java:1040)
- waiting to lock <0x00000000c238f170> (a org.apache.hadoop.hdfs.server.datanode.FSDataset)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:115)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:194)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:104)
at java.lang.Thread.run(Thread.java:724)
"Thread-153" prio=10 tid=0x00002aaab037b800 nid=0x23acb runnable [0x00000000440b1000]
java.lang.Thread.State: RUNNABLE
at java.util.WeakHashMap.get(WeakHashMap.java:471)
at com.sun.beans.WeakCache.get(WeakCache.java:55)
at com.sun.beans.finder.MethodFinder.findMethod(MethodFinder.java:68)
at java.beans.Statement.getMethod(Statement.java:357)
at java.beans.Statement.invokeInternal(Statement.java:287)
at java.beans.Statement.access$000(Statement.java:58)
at java.beans.Statement$2.run(Statement.java:185)
at java.security.AccessController.doPrivileged(Native Method)
wyukawa / gist:a6e525c7dc5c04ed6927
Created May 21, 2014 11:31
Python + HiveServer2 (Thrift TCLIService)
import sys
import os
sys.path.append('/opt/cloudera/parcels/CDH/share/hue/apps/beeswax/gen-py')
from TCLIService import TCLIService
from TCLIService.ttypes import TOpenSessionReq, TGetTablesReq, TFetchResultsReq,\
    TStatusCode, TGetResultSetMetadataReq, TGetColumnsReq, TType,\
    TExecuteStatementReq, TGetOperationStatusReq, TFetchOrientation,\
    TCloseSessionReq, TGetSchemasReq, TCancelOperationReq
wyukawa / gist:6d1788bce3008a8b23a4
Created June 10, 2014 10:51
NullPointerException when running Hive (thrown in the YARN NodeManager)
Got exception: java.lang.NullPointerException: java.lang.NullPointerException
at org.apache.hadoop.yarn.server.nodemanager.security.NMContainerTokenSecretManager.retrievePassword(NMContainerTokenSecretManager.java:96)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.verifyAndGetContainerTokenIdentifier(ContainerManagerImpl.java:649)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.startContainers(ContainerManagerImpl.java:525)
at org.apache.hadoop.yarn.api.impl.pb.service.ContainerManagementProtocolPBServiceImpl.startContainers(ContainerManagementProtocolPBServiceImpl.java:60)
at org.apache.hadoop.yarn.proto.ContainerManagementProtocol$ContainerManagementProtocolService$2.callBlockingMethod(ContainerManagementProtocol.java:95)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013