-bash-4.1# cat /var/log/supervisor/namenode-stderr---supervisor-Ggf9Oz.log
14/07/09 12:33:42 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = grail/172.17.0.40
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.4.0.2.1.2.1-471
STARTUP_MSG: classpath = /etc/hadoop/conf:/usr/lib/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/lib/hadoop/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/lib/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/lib/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/lib/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/lib/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/lib/hadoop/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/lib/hadoop/share/hadoop/common/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/lib/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/lib/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/share/hadoop/common/lib/zookeeper-3.4.5.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/lib/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/lib/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/lib/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/lib/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/lib/hadoop/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/lib/hadoop/share/hadoop/common/lib/hadoop-auth-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/lib/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/lib/hadoop/share/hadoop/common/lib/junit-4.8.2.jar:/usr/lib/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/share/hadoop/common/lib/hadoop-annotations-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/lib/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/lib/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/lib/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/lib/hadoop/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/lib/hadoop/share/hadoop/common/lib/jackson-core-2.2.3.jar:/usr/lib/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/lib/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/lib/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/lib/hadoop/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/lib/hadoop/share/hadoop/common/lib/jsr305-1.3.9.jar:/us
r/lib/hadoop/share/hadoop/common/lib/commons-el-1.0.jar:/usr/lib/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/lib/hadoop/share/hadoop/common/hadoop-nfs-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/common/hadoop-common-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/common/hadoop-common-2.4.0.2.1.2.1-471-tests.jar:/usr/lib/hadoop/share/hadoop/hdfs:/usr/lib/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/lib/hadoop/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/lib/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.4.0.2.1.2.1-471-tests.jar:/usr/lib/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.5.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/lib/hadoop/share/hadoop/ya
rn/lib/jackson-xc-1.8.8.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/lib/hadoop/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/lib/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce
/hadoop-mapreduce-client-jobclient-2.4.0.2.1.2.1-471-tests.jar:/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.2.1.2.1-471.jar:/usr/lib/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.2.1.2.1-471.jar
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 9e5db004df1a751e93aa89b42956c5325f3a4482; compiled by 'jenkins' on 2014-05-27T18:57Z
STARTUP_MSG: java = 1.7.0_55
************************************************************/
14/07/09 12:33:42 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/07/09 12:33:42 INFO namenode.NameNode: createNameNode []
14/07/09 12:33:42 WARN impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
14/07/09 12:33:42 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
14/07/09 12:33:42 INFO impl.MetricsSystemImpl: NameNode metrics system started
14/07/09 12:33:43 INFO hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
14/07/09 12:33:43 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
14/07/09 12:33:43 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
14/07/09 12:33:43 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
14/07/09 12:33:43 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
14/07/09 12:33:43 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
14/07/09 12:33:43 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
14/07/09 12:33:43 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
14/07/09 12:33:43 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
14/07/09 12:33:43 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
14/07/09 12:33:43 INFO http.HttpServer2: Jetty bound to port 50070
14/07/09 12:33:43 INFO mortbay.log: jetty-6.1.26
14/07/09 12:33:43 WARN server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
14/07/09 12:33:43 INFO mortbay.log: Started [email protected]:50070
14/07/09 12:33:43 WARN common.Util: Path /data1/hdfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/07/09 12:33:43 WARN common.Util: Path /data1/hdfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/07/09 12:33:43 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
14/07/09 12:33:43 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
14/07/09 12:33:43 WARN common.Util: Path /data1/hdfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/07/09 12:33:43 WARN common.Util: Path /data1/hdfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/07/09 12:33:43 INFO namenode.FSNamesystem: fsLock is fair:true
14/07/09 12:33:44 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/07/09 12:33:44 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/07/09 12:33:44 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/07/09 12:33:44 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/07/09 12:33:44 INFO util.GSet: Computing capacity for map BlocksMap
14/07/09 12:33:44 INFO util.GSet: VM type = 64-bit
14/07/09 12:33:44 INFO util.GSet: 2.0% max memory 910.5 MB = 18.2 MB
14/07/09 12:33:44 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/07/09 12:33:44 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/07/09 12:33:44 INFO blockmanagement.BlockManager: defaultReplication = 1
14/07/09 12:33:44 INFO blockmanagement.BlockManager: maxReplication = 512
14/07/09 12:33:44 INFO blockmanagement.BlockManager: minReplication = 1
14/07/09 12:33:44 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/07/09 12:33:44 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/07/09 12:33:44 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/07/09 12:33:44 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/07/09 12:33:44 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/07/09 12:33:44 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
14/07/09 12:33:44 INFO namenode.FSNamesystem: supergroup = supergroup
14/07/09 12:33:44 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/07/09 12:33:44 INFO namenode.FSNamesystem: HA Enabled: false
14/07/09 12:33:44 INFO namenode.FSNamesystem: Append Enabled: true
14/07/09 12:33:44 INFO util.GSet: Computing capacity for map INodeMap
14/07/09 12:33:44 INFO util.GSet: VM type = 64-bit
14/07/09 12:33:44 INFO util.GSet: 1.0% max memory 910.5 MB = 9.1 MB
14/07/09 12:33:44 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/07/09 12:33:44 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/09 12:33:44 INFO util.GSet: Computing capacity for map cachedBlocks
14/07/09 12:33:44 INFO util.GSet: VM type = 64-bit
14/07/09 12:33:44 INFO util.GSet: 0.25% max memory 910.5 MB = 2.3 MB
14/07/09 12:33:44 INFO util.GSet: capacity = 2^18 = 262144 entries
14/07/09 12:33:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/07/09 12:33:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/07/09 12:33:44 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/07/09 12:33:44 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/07/09 12:33:44 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/07/09 12:33:44 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/07/09 12:33:44 INFO util.GSet: VM type = 64-bit
14/07/09 12:33:44 INFO util.GSet: 0.029999999329447746% max memory 910.5 MB = 279.7 KB
14/07/09 12:33:44 INFO util.GSet: capacity = 2^15 = 32768 entries
14/07/09 12:33:44 INFO namenode.AclConfigFlag: ACLs enabled? false
14/07/09 12:33:44 INFO common.Storage: Lock on /data1/hdfs/nn/in_use.lock acquired by nodename 18@grail
14/07/09 12:33:44 INFO namenode.FileJournalManager: Recovering unfinalized segments in /data1/hdfs/nn/current
14/07/09 12:33:44 INFO namenode.FileJournalManager: Finalizing edits file /data1/hdfs/nn/current/edits_inprogress_0000000000000000001 -> /data1/hdfs/nn/current/edits_0000000000000000001-0000000000000000021
14/07/09 12:33:44 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
14/07/09 12:33:44 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
14/07/09 12:33:44 INFO namenode.FSImage: Loaded image for txid 0 from /data1/hdfs/nn/current/fsimage_0000000000000000000
14/07/09 12:33:44 INFO namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@74a44ec3 expecting start txid #1
14/07/09 12:33:44 INFO namenode.FSImage: Start loading edits file /data1/hdfs/nn/current/edits_0000000000000000001-0000000000000000021
14/07/09 12:33:44 INFO namenode.EditLogInputStream: Fast-forwarding stream '/data1/hdfs/nn/current/edits_0000000000000000001-0000000000000000021' to transaction ID 1
14/07/09 12:33:44 INFO namenode.FSImage: Edits file /data1/hdfs/nn/current/edits_0000000000000000001-0000000000000000021 of size 1048576 edits # 21 loaded in 0 seconds
14/07/09 12:33:44 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
14/07/09 12:33:44 INFO namenode.FSEditLog: Starting log segment at 22
14/07/09 12:33:44 INFO namenode.NameCache: initialized with 0 entries 0 lookups
14/07/09 12:33:44 INFO namenode.FSNamesystem: Finished loading FSImage in 510 msecs
14/07/09 12:33:44 INFO namenode.NameNode: RPC server is binding to grail:8020
14/07/09 12:33:44 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
14/07/09 12:33:44 INFO ipc.Server: Starting Socket Reader #1 for port 8020
14/07/09 12:33:44 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
14/07/09 12:33:44 WARN common.Util: Path /data1/hdfs/nn should be specified as a URI in configuration files. Please update hdfs configuration.
14/07/09 12:33:44 INFO namenode.FSNamesystem: Number of blocks under construction: 0
14/07/09 12:33:44 INFO namenode.FSNamesystem: Number of blocks under construction: 0
14/07/09 12:33:44 INFO namenode.FSNamesystem: initializing replication queues
14/07/09 12:33:44 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs
14/07/09 12:33:44 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
14/07/09 12:33:44 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
14/07/09 12:33:44 INFO blockmanagement.BlockManager: Total number of blocks = 1
14/07/09 12:33:44 INFO blockmanagement.BlockManager: Number of invalid blocks = 0
14/07/09 12:33:44 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 1
14/07/09 12:33:44 INFO blockmanagement.BlockManager: Number of over-replicated blocks = 0
14/07/09 12:33:44 INFO blockmanagement.BlockManager: Number of blocks being written = 0
14/07/09 12:33:44 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 11 msec
14/07/09 12:33:44 INFO ipc.Server: IPC Server Responder: starting
14/07/09 12:33:44 INFO ipc.Server: IPC Server listener on 8020: starting
14/07/09 12:33:44 INFO namenode.NameNode: NameNode RPC up at: grail/172.17.0.40:8020
14/07/09 12:33:44 INFO namenode.FSNamesystem: Starting services required for active state
14/07/09 12:33:44 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
14/07/09 12:33:44 INFO blockmanagement.CacheReplicationMonitor: Rescanning because of pending operations
14/07/09 12:33:45 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 60 minutes, Emptier interval = 0 minutes.
14/07/09 12:33:45 INFO fs.TrashPolicyDefault: The configured checkpoint interval is 0 minutes. Using an interval of 60 minutes that is used for deletion instead
14/07/09 12:33:45 INFO blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 193 millisecond(s).
14/07/09 12:33:45 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.17.0.40, datanodeUuid=bf76aed9-73da-48a5-8d43-e5080646de07, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-3dabf598-9e16-4540-ae94-52ad52ea4e8f;nsid=2048501721;c=0) storage bf76aed9-73da-48a5-8d43-e5080646de07
14/07/09 12:33:45 INFO net.NetworkTopology: Adding a new node: /default-rack/172.17.0.40:50010
14/07/09 12:33:45 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-b2f8902c-7006-44dd-84e0-de74173f2e78 for DN 172.17.0.40:50010
14/07/09 12:33:45 INFO blockmanagement.BlockManager: BLOCK* processReport: Received first block report from org.apache.hadoop.hdfs.server.protocol.DatanodeStorage@9b2b940d after starting up or becoming active. Its block contents are no longer considered stale
14/07/09 12:33:45 INFO BlockStateChange: BLOCK* processReport: from storage DS-b2f8902c-7006-44dd-84e0-de74173f2e78 node DatanodeRegistration(172.17.0.40, datanodeUuid=bf76aed9-73da-48a5-8d43-e5080646de07, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-3dabf598-9e16-4540-ae94-52ad52ea4e8f;nsid=2048501721;c=0), blocks: 1, processing time: 1 msecs
14/07/09 12:33:49 INFO hdfs.StateChange: BLOCK* allocateBlock: /accumulo/wal/172.17.0.40+9997/e0058bdf-e2f0-4663-8b0c-afb945221d70. BP-1274135865-172.17.0.10-1404767453280 blk_1073741826_1002{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b2f8902c-7006-44dd-84e0-de74173f2e78:NORMAL|RBW]]}
14/07/09 12:33:49 INFO hdfs.StateChange: BLOCK* fsync: /accumulo/wal/172.17.0.40+9997/e0058bdf-e2f0-4663-8b0c-afb945221d70 for DFSClient_NONMAPREDUCE_361617672_12
14/07/09 12:34:14 INFO blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
14/07/09 12:34:14 INFO blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
14/07/09 12:34:18 INFO hdfs.StateChange: BLOCK* allocateBlock: /accumulo/tables/!0/root_tablet/F0000000.rf_tmp. BP-1274135865-172.17.0.10-1404767453280 blk_1073741827_1003{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b2f8902c-7006-44dd-84e0-de74173f2e78:NORMAL|RBW]]}
14/07/09 12:34:18 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.17.0.40:50010 is added to blk_1073741827_1003{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b2f8902c-7006-44dd-84e0-de74173f2e78:NORMAL|RBW]]} size 0
14/07/09 12:34:18 INFO hdfs.StateChange: DIR* completeFile: /accumulo/tables/!0/root_tablet/F0000000.rf_tmp is closed by DFSClient_NONMAPREDUCE_361617672_12
14/07/09 12:34:18 INFO hdfs.StateChange: BLOCK* allocateBlock: /accumulo/tables/!0/root_tablet/A0000001.rf_tmp. BP-1274135865-172.17.0.10-1404767453280 blk_1073741828_1004{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b2f8902c-7006-44dd-84e0-de74173f2e78:NORMAL|RBW]]}
14/07/09 12:34:18 INFO namenode.FSNamesystem: BLOCK* checkFileProgress: blk_1073741828_1004{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b2f8902c-7006-44dd-84e0-de74173f2e78:NORMAL|RBW]]} has not reached minimal replication 1
14/07/09 12:34:18 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 172.17.0.40:50010 is added to blk_1073741828_1004{blockUCState=COMMITTED, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-b2f8902c-7006-44dd-84e0-de74173f2e78:NORMAL|RBW]]} size 449
14/07/09 12:34:19 INFO hdfs.StateChange: DIR* completeFile: /accumulo/tables/!0/root_tablet/A0000001.rf_tmp is closed by DFSClient_NONMAPREDUCE_361617672_12
14/07/09 12:34:44 INFO namenode.FSNamesystem: Roll Edit Log from 172.17.0.40
14/07/09 12:34:44 INFO namenode.FSEditLog: Rolling edit logs
14/07/09 12:34:44 INFO namenode.FSEditLog: Ending log segment 22
14/07/09 12:34:44 INFO namenode.FSEditLog: Number of transactions: 31 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 2 Number of syncs: 18 SyncTimes(ms): 96
14/07/09 12:34:44 INFO namenode.FSEditLog: Number of transactions: 31 Total time for transactions(ms): 3 Number of transactions batched in Syncs: 2 Number of syncs: 19 SyncTimes(ms): 99
14/07/09 12:34:44 INFO namenode.FileJournalManager: Finalizing edits file /data1/hdfs/nn/current/edits_inprogress_0000000000000000022 -> /data1/hdfs/nn/current/edits_0000000000000000022-0000000000000000052
14/07/09 12:34:44 INFO namenode.FSEditLog: Starting log segment at 53
14/07/09 12:34:44 INFO blockmanagement.CacheReplicationMonitor: Rescanning after 30001 milliseconds
14/07/09 12:34:44 INFO blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
14/07/09 12:34:45 INFO namenode.TransferFsImage: Transfer took 0.01s at 200.00 KB/s
14/07/09 12:34:45 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000052 size 2177 bytes.
14/07/09 12:34:45 INFO namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 0
14/07/09 12:35:14 INFO blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
14/07/09 12:35:14 INFO blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 0 millisecond(s).
14/07/09 12:35:44 INFO blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
14/07/09 12:35:44 INFO blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
14/07/09 12:36:14 INFO blockmanagement.CacheReplicationMonitor: Rescanning after 30000 milliseconds
14/07/09 12:36:14 INFO blockmanagement.CacheReplicationMonitor: Scanned 0 directive(s) and 0 block(s) in 1 millisecond(s).
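
Note on the repeated common.Util warnings above: HDFS wants /data1/hdfs/nn written as a URI rather than a bare path. The hdfs-site.xml sketch below is illustrative only (it is not part of the captured log) and only addresses the URI form; the single-directory layout that triggers the dataloss warnings is assumed to be intentional for this single-node container. Both property names appear in the warnings, and the log recovers edit segments from /data1/hdfs/nn/current, so the same directory backs both settings.

<!-- hdfs-site.xml (illustrative sketch, not taken from this gist) -->
<property>
  <!-- fsimage directory, expressed as a file:// URI instead of a bare path -->
  <name>dfs.namenode.name.dir</name>
  <value>file:///data1/hdfs/nn</value>
</property>
<property>
  <!-- edits directory; the log shows segments under /data1/hdfs/nn/current -->
  <name>dfs.namenode.edits.dir</name>
  <value>file:///data1/hdfs/nn</value>
</property>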