@bcleenders
Created November 11, 2015 12:33
Logs for container_1447234668707_0019_01_000001
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/filecache/40/spark-assembly-1.5.1-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/word2vec/hop_distro/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/11/11 13:31:14 INFO [main] (SignalLogger.scala:register(47)) - Registered signal handlers for [TERM, HUP, INT]
15/11/11 13:31:15 WARN [main] (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/11 13:31:15 INFO [main] (Logging.scala:logInfo(59)) - ApplicationAttemptId: appattempt_1447234668707_0019_000001
15/11/11 13:31:16 INFO [main] (Logging.scala:logInfo(59)) - Changing view acls to: word2vec
15/11/11 13:31:16 INFO [main] (Logging.scala:logInfo(59)) - Changing modify acls to: word2vec
15/11/11 13:31:16 INFO [main] (Logging.scala:logInfo(59)) - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(word2vec); users with modify permissions: Set(word2vec)
15/11/11 13:31:16 INFO [sparkYarnAM-akka.actor.default-dispatcher-3] (Slf4jLogger.scala:applyOrElse(80)) - Slf4jLogger started
15/11/11 13:31:16 INFO [sparkYarnAM-akka.actor.default-dispatcher-3] (Slf4jLogger.scala:apply$mcV$sp(74)) - Starting remoting
15/11/11 13:31:17 INFO [sparkYarnAM-akka.actor.default-dispatcher-3] (Slf4jLogger.scala:apply$mcV$sp(74)) - Remoting started; listening on addresses :[akka.tcp://[email protected]:34951]
15/11/11 13:31:17 INFO [main] (Logging.scala:logInfo(59)) - Successfully started service 'sparkYarnAM' on port 34951.
15/11/11 13:31:17 INFO [main] (Logging.scala:logInfo(59)) - Waiting for Spark driver to be reachable.
15/11/11 13:31:17 INFO [main] (Logging.scala:logInfo(59)) - Driver now available: 193.10.64.11:52828
15/11/11 13:31:17 INFO [sparkYarnAM-akka.actor.default-dispatcher-3] (Logging.scala:logInfo(59)) - Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> bbc1.sics.se, PROXY_URI_BASES -> http://bbc1.sics.se:45000/proxy/application_1447234668707_0019),/proxy/application_1447234668707_0019)
15/11/11 13:31:17 INFO [main] (Logging.scala:logInfo(59)) - Registering the ApplicationMaster
15/11/11 13:31:18 INFO [main] (Logging.scala:logInfo(59)) - Will request 50 executor containers, each with 1 cores and 5632 MB memory including 512 MB overhead
15/11/11 13:31:18 INFO [main] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
[... the line above is repeated 50 times in total, one request per executor container ...]
15/11/11 13:31:18 INFO [main] (Logging.scala:logInfo(59)) - Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
15/11/11 13:31:18 INFO [Reporter] (AMRMClientImpl.java:populateNMTokens(299)) - Received new token for : bbc7.sics.se:45007
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000002 for on host bbc7.sics.se
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000003 for on host bbc7.sics.se
15/11/11 13:31:18 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000004 for on host bbc7.sics.se
15/11/11 13:31:18 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:18 INFO [ContainerLauncher #1] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000005 for on host bbc7.sics.se
15/11/11 13:31:18 INFO [ContainerLauncher #0] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:18 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:18 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:18 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:18 INFO [ContainerLauncher #2] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:18 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000006 for on host bbc7.sics.se
15/11/11 13:31:18 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:18 INFO [ContainerLauncher #3] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:18 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:18 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:18 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 5 containers from YARN, launching executors on 5 of them.
15/11/11 13:31:18 INFO [ContainerLauncher #4] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:18 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:18 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:18 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:18 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:18 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:18 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
[... identical "Prepared Local resources" maps also logged by ContainerLauncher #1, #2, #3, and #4 ...]
15/11/11 13:31:19 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000005/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000005/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 4 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000004/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000004/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 3 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000002/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000002/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 1 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000003/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000003/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 2 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000006/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000006/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 5 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #3] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #0] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #2] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #4] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #1] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:19 INFO [Reporter] (AMRMClientImpl.java:populateNMTokens(299)) - Received new token for : bbc6.sics.se:45007
15/11/11 13:31:19 INFO [Reporter] (AMRMClientImpl.java:populateNMTokens(299)) - Received new token for : bbc5.sics.se:45007
15/11/11 13:31:19 INFO [Reporter] (AMRMClientImpl.java:populateNMTokens(299)) - Received new token for : bbc2.sics.se:45007
15/11/11 13:31:19 INFO [Reporter] (AMRMClientImpl.java:populateNMTokens(299)) - Received new token for : bbc3.sics.se:45007
15/11/11 13:31:19 INFO [Reporter] (AMRMClientImpl.java:populateNMTokens(299)) - Received new token for : bbc4.sics.se:45007
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000007 for on host bbc6.sics.se
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000008 for on host bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #5] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000009 for on host bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #6] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000010 on host bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #7] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000011 on host bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #8] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000012 on host bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #9] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000013 on host bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #10] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000014 on host bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #11] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000015 on host bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #12] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000016 on host bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #13] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000017 on host bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #14] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000018 on host bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #15] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000019 on host bbc2.sics.se
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #16] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000020 on host bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #17] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000009/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000009/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 8 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #7] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000008/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000008/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 7 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000007/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000007/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 6 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #5] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000011/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000011/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 10 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000021 on host bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #6] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #9] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000010/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000010/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 9 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #18] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000022 on host bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #8] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #19] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000012/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000012/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 11 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000013/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000013/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 12 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000014/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000014/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 13 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #11] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #20] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000023 on host bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #10] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #12] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000015/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000015/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 14 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000016/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000016/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 15 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #13] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000024 on host bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000017/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000017/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 16 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #21] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000019/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000019/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 18 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #14] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #17] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #15] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000025 on host bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #22] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000026 on host bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000018/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000018/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 17 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #16] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000027 on host bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #24] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #23] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000028 on host bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000029 on host bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #11] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #0] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000030 on host bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #1] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000020/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000020/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 19 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #3] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000021/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000021/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 20 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000023/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000023/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 22 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000022/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000022/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 21 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #18] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000031 on host bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #20] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #21] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #19] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000024/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000024/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 23 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000032 on host bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #22] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #2] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000033 on host bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #5] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000034 on host bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #6] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000035 on host bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #7] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000036 on host bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:19 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000026/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000026/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 25 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 30 containers from YARN, launching executors on 30 of them.
15/11/11 13:31:19 INFO [ContainerLauncher #9] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #24] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #4] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:19 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000028/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000028/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 27 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #0] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000027/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000027/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 26 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000030/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000030/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 29 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #11] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #3] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000025/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000025/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 24 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000029/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000029/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 28 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:19 INFO [ContainerLauncher #23] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #1] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000031/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000031/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 30 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #2] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000034/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000034/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 33 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000033/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000033/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 32 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #7] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #6] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000032/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000032/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 31 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #5] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000035/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000035/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 34 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #9] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:19 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000036/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000036/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 35 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:19 INFO [ContainerLauncher #4] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000037 on host bbc6.sics.se
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000038 on host bbc6.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #8] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000039 on host bbc6.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #12] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000040 on host bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #10] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #15] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000037/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000037/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 36 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [ContainerLauncher #8] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000038/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000038/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 37 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [ContainerLauncher #12] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000040/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000040/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 39 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [ContainerLauncher #15] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000039/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000039/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 38 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000041 on host bbc4.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #10] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000042 on host bbc4.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #13] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000043 on host bbc4.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #14] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #17] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000044 on host bbc7.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000045 on host bbc2.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #16] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #18] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000041/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000041/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 40 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000046 on host bbc2.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #13] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000047 on host bbc2.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #19] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #24] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000048 on host bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000042/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000042/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 41 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #14] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000049 on host bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000043/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000043/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 42 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #20] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000050 on host bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #17] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000044/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000044/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 43 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [ContainerLauncher #22] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #16] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [ContainerLauncher #21] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000051 on host bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:39 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:39 INFO [ContainerLauncher #3] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:39 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 15 containers from YARN, launching executors on 15 of them.
15/11/11 13:31:39 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:39 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000045/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000045/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 44 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000047/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000047/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 46 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:39 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000046/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000046/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 45 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [ContainerLauncher #24] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #19] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #18] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000007 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000049/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000049/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 48 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000007. Exit status: 143. Diagnostics: Container [pid=12225,containerID=container_1447234668707_0019_01_000007] is running beyond virtual memory limits. Current usage: 374.4 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000007 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12261 12225 12225 12225 (java) 896 151 6343491584 95536 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000007/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000007 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 6 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000007/__app__.jar
|- 12225 6410 12225 12225 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000007/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000007 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 6 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000007/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000007/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000007/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
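[Editor's note on the failures above: exit code 143 is 128 + 15, i.e. the container was SIGTERMed by the NodeManager. The diagnostics show why: physical memory use is tiny (~350–375 MB) but *virtual* memory has hit the 6 GB cap (6.0 GB of 6 GB virtual memory used), so YARN's vmem check kills the executor even though the JVM (-Xms5120m -Xmx5120m) is nowhere near its physical limit. The usual remedies are to relax or disable the virtual-memory check in yarn-site.xml, or to give executors more off-heap headroom via spark.yarn.executor.memoryOverhead. The snippet below is a sketch of the yarn-site.xml route, not a change taken from this log; the property names are standard YARN 2.x settings, but the chosen ratio value is illustrative.]

```xml
<!-- yarn-site.xml (on each NodeManager): relax the virtual-memory check
     that is killing these containers with exit code 143. -->

<!-- Option 1: disable the vmem check entirely (pmem check still applies). -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

<!-- Option 2 (alternative): keep the check but allow more virtual memory
     per MB of physical memory. Default is 2.1; 4.0 here is illustrative. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4.0</value>
</property>
```

On the Spark side, raising `spark.yarn.executor.memoryOverhead` (e.g. via `--conf` on spark-submit) enlarges the container allocation relative to the JVM heap, which also widens the vmem ceiling; either approach stops the 143 kills seen here.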
15/11/11 13:31:39 INFO [ContainerLauncher #22] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000051/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000051/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 50 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000008 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [ContainerLauncher #3] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000008. Exit status: 143. Diagnostics: Container [pid=12229,containerID=container_1447234668707_0019_01_000008] is running beyond virtual memory limits. Current usage: 349.7 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000008 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12229 6410 12229 12229 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000008/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000008 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 7 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000008/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000008/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000008/stderr
|- 12263 12229 12229 12229 (java) 845 155 6343168000 89214 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000008/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000008 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 7 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000008/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000012 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000050/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000050/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 49 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000012. Exit status: 143. Diagnostics: Container [pid=12227,containerID=container_1447234668707_0019_01_000012] is running beyond virtual memory limits. Current usage: 375.3 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000012 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12227 6410 12227 12227 (bash) 0 1 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000012/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000012 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 11 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000012/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000012/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000012/stderr
|- 12262 12227 12227 12227 (java) 875 159 6335455232 95775 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000012/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000012 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 11 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000012/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [ContainerLauncher #21] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:39 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000048/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000048/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 47 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000025 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000025. Exit status: 143. Diagnostics: Container [pid=44443,containerID=container_1447234668707_0019_01_000025] is running beyond virtual memory limits. Current usage: 270.9 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000025 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 44443 37998 44443 44443 (bash) 1 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000025/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000025 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 24 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000025/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000025/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000025/stderr
|- 44475 44443 44443 44443 (java) 348 552 6338109440 69029 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000025/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000025 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 24 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000025/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [ContainerLauncher #20] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000031 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000031. Exit status: 143. Diagnostics: Container [pid=22841,containerID=container_1447234668707_0019_01_000031] is running beyond virtual memory limits. Current usage: 357.5 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000031 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22875 22841 22841 22841 (java) 782 160 6337421312 91214 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000031/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000031 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 30 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000031/__app__.jar
|- 22841 18492 22841 22841 (bash) 1 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000031/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000031 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 30 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000031/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000031/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000031/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000032 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000032. Exit status: 143. Diagnostics: Container [pid=22853,containerID=container_1447234668707_0019_01_000032] is running beyond virtual memory limits. Current usage: 355.7 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000032 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22853 18492 22853 22853 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000032/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000032 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 31 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000032/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000032/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000032/stderr
|- 22877 22853 22853 22853 (java) 802 149 6346764288 90749 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000032/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000032 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 31 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000032/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000033 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000033. Exit status: 143. Diagnostics: Container [pid=22824,containerID=container_1447234668707_0019_01_000033] is running beyond virtual memory limits. Current usage: 351.4 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000033 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22860 22824 22824 22824 (java) 800 147 6337679360 89654 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000033/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000033 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 32 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000033/__app__.jar
|- 22824 18492 22824 22824 (bash) 0 1 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000033/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000033 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 32 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000033/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000033/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000033/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000005 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000005. Exit status: 143. Diagnostics: Container [pid=22909,containerID=container_1447234668707_0019_01_000005] is running beyond virtual memory limits. Current usage: 434.0 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000005 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22945 22909 22909 22909 (java) 782 116 6348283904 110798 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000005/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000005 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 4 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000005/__app__.jar
|- 22909 16725 22909 22909 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000005/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000005 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 4 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000005/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000005/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000005/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000019 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000019. Exit status: 143. Diagnostics: Container [pid=26325,containerID=container_1447234668707_0019_01_000019] is running beyond virtual memory limits. Current usage: 320.5 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000019 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 26357 26325 26325 26325 (java) 353 611 6344523776 81749 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000019/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000019 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 18 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000019/__app__.jar
|- 26325 8680 26325 26325 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000019/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000019 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 18 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000019/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000019/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000019/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000020 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000020. Exit status: 143. Diagnostics: Container [pid=26322,containerID=container_1447234668707_0019_01_000020] is running beyond virtual memory limits. Current usage: 294.2 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000020 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 26322 8680 26322 26322 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000020/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000020 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 19 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000020/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000020/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000020/stderr
|- 26360 26322 26322 26322 (java) 374 464 6337675264 75008 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000020/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000020 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 19 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000020/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000022 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000022. Exit status: 143. Diagnostics: Container [pid=26323,containerID=container_1447234668707_0019_01_000022] is running beyond virtual memory limits. Current usage: 326.7 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000022 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 26361 26323 26323 26323 (java) 368 421 6346407936 83320 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000022/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000022 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 21 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000022/__app__.jar
|- 26323 8680 26323 26323 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000022/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000022 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 21 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000022/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000022/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000022/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000026 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000026. Exit status: 143. Diagnostics: Container [pid=44445,containerID=container_1447234668707_0019_01_000026] is running beyond virtual memory limits. Current usage: 387.9 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000026 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 44445 37998 44445 44445 (bash) 1 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000026/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000026 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 25 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000026/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000026/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000026/stderr
|- 44477 44445 44445 44445 (java) 450 574 6350274560 99005 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000026/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000026 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 25 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000026/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000027 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000027. Exit status: 143. Diagnostics: Container [pid=44444,containerID=container_1447234668707_0019_01_000027] is running beyond virtual memory limits. Current usage: 395.0 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000027 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 44444 37998 44444 44444 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000027/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000027 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 26 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000027/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000027/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000027/stderr
|- 44478 44444 44444 44444 (java) 487 652 6350348288 100804 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000027/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000027 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 26 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000027/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000028 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000028. Exit status: 143. Diagnostics: Container [pid=44442,containerID=container_1447234668707_0019_01_000028] is running beyond virtual memory limits. Current usage: 425.5 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000028 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 44442 37998 44442 44442 (bash) 0 0 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000028/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000028 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 27 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000028/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000028/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000028/stderr
|- 44473 44442 44442 44442 (java) 497 596 6372835328 108608 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000028/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000028 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 27 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000028/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000029 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000029. Exit status: 143. Diagnostics: Container [pid=44432,containerID=container_1447234668707_0019_01_000029] is running beyond virtual memory limits. Current usage: 375.3 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000029 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 44432 37998 44432 44432 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000029/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000029 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 28 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000029/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000029/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000029/stderr
|- 44472 44432 44432 44432 (java) 463 572 6348541952 95757 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000029/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000029 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 28 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000029/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000030 (state: COMPLETE, exit status: 143)
15/11/11 13:31:39 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000030. Exit status: 143. Diagnostics: Container [pid=44433,containerID=container_1447234668707_0019_01_000030] is running beyond virtual memory limits. Current usage: 363.0 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000030 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 44466 44433 44433 44433 (java) 450 620 6353719296 92610 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000030/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000030 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 29 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000030/__app__.jar
|- 44433 37998 44433 44433 (bash) 0 2 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000030/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000030 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 29 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000030/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000030/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000030/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
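[Editor's note] The repeated exit-code-143 kills above are all the NodeManager's virtual-memory check firing: each executor JVM is started with `-Xms5120m -Xmx5120m` and forks `pyspark.daemon` workers, so the container's virtual address space exceeds the 6 GB vmem allowance ("6.5 GB of 6 GB virtual memory used") even though physical usage is only a few hundred MB. A common mitigation, sketched here as an assumption about this cluster rather than anything taken from the log, is to relax the vmem check or raise the vmem-to-pmem ratio in `yarn-site.xml` (both are standard Hadoop 2.x NodeManager settings; the values below are illustrative only):

```xml
<!-- yarn-site.xml: possible mitigations for "running beyond virtual memory limits".
     Property names are standard Hadoop 2.x settings; values are illustrative. -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value> <!-- stop killing containers based on virtual-memory usage -->
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value> <!-- or: allow more vmem per MB of requested physical memory -->
</property>
```

Alternatively, the per-executor overhead visible in the request lines ("5632 MB memory including 512 MB overhead") can be increased on the Spark side, e.g. `--conf spark.yarn.executor.memoryOverhead=1024`, so YARN reserves headroom for the Python daemons.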
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Will request 16 executor containers, each with 1 cores and 5632 MB memory including 512 MB overhead
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000013 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000013. Exit status: 143. Diagnostics: Container [pid=20687,containerID=container_1447234668707_0019_01_000013] is running beyond virtual memory limits. Current usage: 579.0 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000013 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 20718 20687 20687 20687 (java) 1150 224 6341267456 137863 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000013/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000013 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 12 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000013/__app__.jar
|- 21272 21263 21263 20687 (python) 104 0 245059584 4890 python -m pyspark.daemon
|- 20687 12168 20687 20687 (bash) 1 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000013/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000013 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 12 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000013/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000013/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000013/stderr
|- 21263 20718 21263 20687 (python) 20 2 243159040 5159 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000014 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000014. Exit status: 143. Diagnostics: Container [pid=20690,containerID=container_1447234668707_0019_01_000014] is running beyond virtual memory limits. Current usage: 442.3 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000014 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 21249 20721 21249 20690 (python) 36 5 242991104 5139 python -m pyspark.daemon
|- 20690 12168 20690 20690 (bash) 1 0 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000014/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000014 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 13 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000014/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000014/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000014/stderr
|- 21286 21249 21249 20690 (python) 52 1 244609024 4785 python -m pyspark.daemon
|- 20721 20690 20690 20690 (java) 894 161 6334070784 102985 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000014/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000014 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 13 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000014/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000015 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000015. Exit status: 143. Diagnostics: Container [pid=20691,containerID=container_1447234668707_0019_01_000015] is running beyond virtual memory limits. Current usage: 480.7 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000015 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 21245 20723 21245 20691 (python) 36 5 242991104 5134 python -m pyspark.daemon
|- 20691 12168 20691 20691 (bash) 1 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000015/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000015 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 14 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000015/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000015/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000015/stderr
|- 21288 21245 21245 20691 (python) 63 0 245469184 4990 python -m pyspark.daemon
|- 20723 20691 20691 20691 (java) 993 171 6347431936 112615 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000015/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000015 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 14 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000015/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000016 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000016. Exit status: 143. Diagnostics: Container [pid=20666,containerID=container_1447234668707_0019_01_000016] is running beyond virtual memory limits. Current usage: 557.9 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000016 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 20720 20666 20666 20666 (java) 974 254 6345019392 132366 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000016/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000016 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 15 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000016/__app__.jar
|- 21271 21256 21256 20666 (python) 86 0 245485568 5010 python -m pyspark.daemon
|- 20666 12168 20666 20666 (bash) 0 2 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000016/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000016 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 15 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000016/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000016/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000016/stderr
|- 21256 20720 21256 20666 (python) 23 6 243040256 5130 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000017 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000017. Exit status: 143. Diagnostics: Container [pid=20674,containerID=container_1447234668707_0019_01_000017] is running beyond virtual memory limits. Current usage: 556.5 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000017 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 20719 20674 20674 20674 (java) 1006 208 6347960320 132064 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000017/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000017 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 16 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000017/__app__.jar
|- 21270 21259 21259 20674 (python) 65 0 245256192 4929 python -m pyspark.daemon
|- 21259 20719 21259 20674 (python) 20 3 243159040 5159 python -m pyspark.daemon
|- 20674 12168 20674 20674 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000017/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000017 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 16 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000017/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000017/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000017/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000018 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000018. Exit status: 143. Diagnostics: Container [pid=20689,containerID=container_1447234668707_0019_01_000018] is running beyond virtual memory limits. Current usage: 490.4 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000018 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 20689 12168 20689 20689 (bash) 1 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000018/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000018 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 17 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000018/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000018/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000018/stderr
|- 20722 20689 20689 20689 (java) 974 248 6340677632 115563 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000018/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000018 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 17 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000018/__app__.jar
|- 21287 21247 21247 20689 (python) 55 5 243486720 4529 python -m pyspark.daemon
|- 21247 20722 21247 20689 (python) 38 5 243011584 5134 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000010 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000010. Exit status: 143. Diagnostics: Container [pid=12226,containerID=container_1447234668707_0019_01_000010] is running beyond virtual memory limits. Current usage: 1.0 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000010 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12833 12819 12819 12226 (python) 219 1 245579776 5037 python -m pyspark.daemon
|- 12260 12226 12226 12226 (java) 1214 381 6334763008 257219 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000010/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000010 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 9 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000010/__app__.jar
|- 12819 12260 12819 12226 (python) 22 3 242995200 5132 python -m pyspark.daemon
|- 12226 6410 12226 12226 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000010/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000010 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 9 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000010/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000010/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000010/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000034 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000034. Exit status: 143. Diagnostics: Container [pid=22842,containerID=container_1447234668707_0019_01_000034] is running beyond virtual memory limits. Current usage: 568.7 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000034 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22876 22842 22842 22842 (java) 1114 250 6337945600 135102 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000034/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000034 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 33 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000034/__app__.jar
|- 23446 23433 23433 22842 (python) 87 0 245780480 5037 python -m pyspark.daemon
|- 23433 22876 23433 22842 (python) 41 6 242991104 5135 python -m pyspark.daemon
|- 22842 18492 22842 22842 (bash) 0 0 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000034/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000034 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 33 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000034/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000034/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000034/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000035 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000035. Exit status: 143. Diagnostics: Container [pid=22821,containerID=container_1447234668707_0019_01_000035] is running beyond virtual memory limits. Current usage: 625.4 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000035 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23444 23437 23437 22821 (python) 105 0 245121024 4901 python -m pyspark.daemon
|- 23437 22859 23437 22821 (python) 43 5 242987008 5139 python -m pyspark.daemon
|- 22859 22821 22821 22821 (java) 1125 268 6344019968 149741 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000035/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000035 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 34 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000035/__app__.jar
|- 22821 18492 22821 22821 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000035/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000035 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 34 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000035/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000035/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000035/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000002 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000002. Exit status: 143. Diagnostics: Container [pid=22941,containerID=container_1447234668707_0019_01_000002] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.4 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000002 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23386 22958 23386 22941 (python) 32 5 243269632 5186 python -m pyspark.daemon
|- 22941 16725 22941 22941 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000002/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000002 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 1 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000002/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000002/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000002/stderr
|- 23401 23386 23386 22941 (python) 270 1 246136832 5157 python -m pyspark.daemon
|- 22958 22941 22941 22941 (java) 1191 298 6321868800 269258 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000002/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000002 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 1 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000002/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000003 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000003. Exit status: 143. Diagnostics: Container [pid=22933,containerID=container_1447234668707_0019_01_000003] is running beyond virtual memory limits. Current usage: 702.4 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000003 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23388 22957 23388 22933 (python) 30 4 242987008 5143 python -m pyspark.daemon
|- 23402 23388 23388 22933 (python) 272 5 246222848 5117 python -m pyspark.daemon
|- 22933 16725 22933 22933 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000003/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000003 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 2 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000003/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000003/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000003/stderr
|- 22957 22933 22933 22933 (java) 1155 189 6332014592 169235 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000003/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000003 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 2 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000003/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000004 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000004. Exit status: 143. Diagnostics: Container [pid=22908,containerID=container_1447234668707_0019_01_000004] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000004 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22944 22908 22908 22908 (java) 1205 283 6354268160 280560 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000004/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000004 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 3 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000004/__app__.jar
|- 22908 16725 22908 22908 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000004/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000004 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 3 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000004/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000004/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000004/stderr
|- 23395 23384 23384 22908 (python) 244 1 245813248 5052 python -m pyspark.daemon
|- 23384 22944 23384 22908 (python) 28 4 243171328 5161 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000021 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000021. Exit status: 143. Diagnostics: Container [pid=26321,containerID=container_1447234668707_0019_01_000021] is running beyond virtual memory limits. Current usage: 345.9 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000021 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 26321 8680 26321 26321 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000021/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000021 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 20 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000021/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000021/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000021/stderr
|- 26358 26321 26321 26321 (java) 373 624 6343053312 88248 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000021/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000021 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 20 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000021/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000023 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000023. Exit status: 143. Diagnostics: Container [pid=26324,containerID=container_1447234668707_0019_01_000023] is running beyond virtual memory limits. Current usage: 325.2 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000023 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 26359 26324 26324 26324 (java) 298 617 6335565824 82939 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000023/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000023 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 22 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000023/__app__.jar
|- 26324 8680 26324 26324 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000023/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000023 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 22 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000023/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000023/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000023/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000024 (state: COMPLETE, exit status: 143)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000024. Exit status: 143. Diagnostics: Container [pid=26326,containerID=container_1447234668707_0019_01_000024] is running beyond virtual memory limits. Current usage: 411.2 MB of 6 GB physical memory used; 6.2 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000024 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 27162 26362 26326 26326 (python) 15 3 230760448 4206 python -m pyspark.daemon
|- 26362 26326 26326 26326 (java) 391 618 6340714496 100743 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000024/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000024 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 23 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000024/__app__.jar
|- 26326 8680 26326 26326 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000024/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000024 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 23 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000024/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000024/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000024/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
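
(Annotation: every failure above is the same diagnostic — exit code 143 because the container's *virtual* memory crosses the 6 GB cap, e.g. "6.4 GB of 6 GB virtual memory used", even though physical usage is far below the limit. The 5632 MB request times the NodeManager's default `yarn.nodemanager.vmem-pmem-ratio` yields the 6 GB ceiling, which a 5120 MB JVM heap plus the `pyspark.daemon` workers easily exceeds in reserved address space. A sketch of the usual remedies on the NodeManager side — the values below are illustrative, not taken from this cluster's configuration:)

```xml
<!-- yarn-site.xml (NodeManager) — hypothetical example values -->
<configuration>
  <!-- Option 1: allow more virtual memory per MB of requested physical
       memory (default ratio is 2.1). -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>

  <!-- Option 2: disable the virtual-memory check entirely; the
       physical-memory check (pmem) still protects the node. -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>
```

(On the Spark side, for this version (1.5.1) the per-executor headroom can also be raised without touching YARN, e.g. `--conf spark.yarn.executor.memoryOverhead=1024` on `spark-submit` — the 512 MB overhead visible in the "Will request 15 executor containers" line below is the default being applied. Either change requires restarting the affected service for the new limit to take effect.)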
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Will request 15 executor containers, each with 1 cores and 5632 MB memory including 512 MB overhead
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:42 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000052 for on host bbc7.sics.se
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000053 for on host bbc7.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #0] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000054 for on host bbc7.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #1] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000055 for on host bbc6.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #11] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000056 for on host bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #23] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000057 for on host bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #6] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000058 for on host bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000052/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000052/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 51 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #2] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000059 for on host bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [ContainerLauncher #0] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #5] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000060 for on host bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #7] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000053/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000053/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 52 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [ContainerLauncher #1] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000054/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000054/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 53 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000061 on host bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #11] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #9] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000055/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000055/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 54 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #23] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000056/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000056/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 55 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000062 on host bbc2.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [ContainerLauncher #6] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #4] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000063 on host bbc2.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000058/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000058/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 57 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #12] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [ContainerLauncher #5] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000057/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000057/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 56 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000064 on host bbc2.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #8] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:43 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:43 INFO [ContainerLauncher #2] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #10] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:43 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 13 containers from YARN, launching executors on 13 of them.
15/11/11 13:31:43 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000059/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000059/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 58 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:43 INFO [ContainerLauncher #7] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000060/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000060/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 59 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:43 INFO [ContainerLauncher #9] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000061/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000061/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 60 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #4] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:43 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000062/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000062/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 61 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #12] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000064/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000064/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 63 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #10] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:43 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000063/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000063/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 62 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:43 INFO [ContainerLauncher #8] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:44 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000065 on host bbc3.sics.se
15/11/11 13:31:44 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:44 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:44 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000066 on host bbc4.sics.se
15/11/11 13:31:44 INFO [ContainerLauncher #15] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:44 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:44 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:44 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:44 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:44 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000067 on host bbc4.sics.se
15/11/11 13:31:44 INFO [ContainerLauncher #13] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:44 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:44 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:44 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:44 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:44 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:44 INFO [ContainerLauncher #14] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:44 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 3 containers from YARN, launching executors on 3 of them.
15/11/11 13:31:44 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:44 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:44 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:44 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:44 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000065/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000065/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 64 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:44 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000066/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000066/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 65 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:44 INFO [ContainerLauncher #15] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:44 INFO [ContainerLauncher #13] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:44 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000067/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000067/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 66 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:44 INFO [ContainerLauncher #14] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000068 on host bbc6.sics.se
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:45 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000069 on host bbc6.sics.se
15/11/11 13:31:45 INFO [ContainerLauncher #16] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:45 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:45 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:45 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:45 INFO [ContainerLauncher #17] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:45 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:45 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:45 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000070 on host bbc4.sics.se
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:45 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 3 containers from YARN, launching executors on 3 of them.
15/11/11 13:31:45 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:45 INFO [ContainerLauncher #24] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000009 (state: COMPLETE, exit status: 143)
15/11/11 13:31:45 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000009. Exit status: 143. Diagnostics: Container [pid=12228,containerID=container_1447234668707_0019_01_000009] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000009 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12805 12259 12805 12228 (python) 29 3 243003392 5142 python -m pyspark.daemon
|- 12259 12228 12228 12228 (java) 1248 467 6335549440 280111 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000009/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000009 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 8 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000009/__app__.jar
|- 12822 12805 12805 12228 (python) 543 7 247214080 5422 python -m pyspark.daemon
|- 12228 6410 12228 12228 (bash) 1 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000009/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000009 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 8 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000009/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000009/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000009/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:45 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000011 (state: COMPLETE, exit status: 143)
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000011. Exit status: 143. Diagnostics: Container [pid=12217,containerID=container_1447234668707_0019_01_000011] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.4 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000011 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12258 12217 12217 12217 (java) 1319 428 6318616576 274670 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000011/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000011 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 10 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000011/__app__.jar
|- 12817 12258 12817 12217 (python) 23 3 242995200 5132 python -m pyspark.daemon
|- 12827 12817 12817 12217 (python) 522 2 245743616 5033 python -m pyspark.daemon
|- 12217 6410 12217 12217 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000011/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000011 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 10 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000011/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000011/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000011/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
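[Editor's note: the two exit-143 kills above are YARN's virtual-memory check firing, not an OOM: each executor's process tree (the JVM plus its pyspark.daemon children) exceeded the 6 GB vmem cap while using only ~1.1 GB of physical memory. The usual remedies are to reserve more off-heap overhead per executor on the Spark side, or to relax vmem enforcement on the NodeManagers. Both settings below exist in Spark 1.5 / Hadoop 2.4; the specific values are illustrative assumptions, not taken from this log.]

```shell
# Spark side: reserve extra off-heap room per executor so the container
# request covers the JVM heap plus the pyspark.daemon workers.
# (512 MB is an assumed example value.)
spark-submit \
  --conf spark.yarn.executor.memoryOverhead=512 \
  ...

# YARN side (yarn-site.xml on each NodeManager): either loosen the
# vmem-to-pmem ratio or disable the virtual-memory check entirely.
#   yarn.nodemanager.vmem-pmem-ratio     -> e.g. 4 (default is 2.1)
#   yarn.nodemanager.vmem-check-enabled  -> false
```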
15/11/11 13:31:45 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000068/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000068/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 67 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:45 INFO [ContainerLauncher #16] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:45 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000036 (state: COMPLETE, exit status: 143)
15/11/11 13:31:45 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000036. Exit status: 143. Diagnostics: Container [pid=22822,containerID=container_1447234668707_0019_01_000036] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000036 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 22822 18492 22822 22822 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000036/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000036 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 35 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000036/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000036/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000036/stderr
|- 23445 23429 23429 22822 (python) 420 1 245583872 5039 python -m pyspark.daemon
|- 22862 22822 22822 22822 (java) 1229 353 6345232384 276417 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000036/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000036 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 35 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000036/__app__.jar
|- 23429 22862 23429 22822 (python) 37 8 243159040 5158 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
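Exit code 143 is 128 + SIGTERM (15): the NodeManager terminated the container because the summed virtual memory of the whole process tree (the executor JVM plus its bash wrapper and the two pyspark.daemon workers) exceeded the 6 GB vmem limit, even though physical usage was only ~1.1 GB. Summing the VMEM_USAGE(BYTES) column from the dump above reproduces the "6.5 GB of 6 GB" figure; a quick sketch of that arithmetic (the usual remedies are raising spark.yarn.executor.memoryOverhead, or relaxing yarn.nodemanager.vmem-check-enabled / yarn.nodemanager.vmem-pmem-ratio):

```python
# VMEM_USAGE(BYTES) values copied from the process-tree dump of
# container_1447234668707_0019_01_000036 above.
vmem_bytes = {
    "bash wrapper":      108650496,   # /bin/bash -c ...
    "executor JVM":      6345232384,  # CoarseGrainedExecutorBackend
    "pyspark.daemon #1": 243159040,
    "pyspark.daemon #2": 245583872,
}

total_gb = sum(vmem_bytes.values()) / 2**30
print(round(total_gb, 1))  # 6.5 -> "6.5 GB of 6 GB virtual memory used"
print(128 + 15)            # 143 -> killed by SIGTERM, hence "Exit code is 143"
```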
15/11/11 13:31:45 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000069/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000069/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 68 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:45 INFO [ContainerLauncher #17] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:45 INFO [ContainerLauncher #24] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000070/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000070/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 69 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:45 INFO [ContainerLauncher #24] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Will request 3 executor containers, each with 1 cores and 5632 MB memory including 512 MB overhead
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
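The container size requested above follows Spark 1.5's default overhead rule on YARN (overhead = max(384 MB, 10% of spark.executor.memory) unless spark.yarn.executor.memoryOverhead is set). A sketch checking the figures in the Reporter line, given the -Xmx5120m seen in the launch commands:

```python
# Reproduce "each with 1 cores and 5632 MB memory including 512 MB overhead".
# Assumes Spark 1.5's documented default: overhead = max(384, 0.10 * executor memory).

def yarn_container_size_mb(executor_memory_mb, overhead_mb=None):
    """Return (overhead, total) in MB for a YARN executor container request."""
    if overhead_mb is None:
        overhead_mb = max(384, int(0.10 * executor_memory_mb))
    return overhead_mb, executor_memory_mb + overhead_mb

overhead, total = yarn_container_size_mb(5120)  # matches -Xms5120m -Xmx5120m
print(overhead, total)  # 512 5632, matching the Reporter line above
```

Note the kills above are on the 6 GB *virtual* memory cap, not this physical request, so raising the overhead alone only helps if the vmem limit scales with the container size.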
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000071 on host bbc4.sics.se
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000072 on host bbc4.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #19] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:48 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:48 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000073 on host bbc6.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #18] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:48 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:48 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000074 on host bbc3.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:48 INFO [ContainerLauncher #3] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:48 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000075 on host bbc3.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:48 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:48 INFO [ContainerLauncher #22] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:48 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000076 on host bbc2.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #20] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:48 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:48 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:48 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:48 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:48 INFO [ContainerLauncher #19] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000071/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000071/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 70 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 6 containers from YARN, launching executors on 6 of them.
15/11/11 13:31:48 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:48 INFO [ContainerLauncher #19] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:48 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000041 (state: COMPLETE, exit status: 143)
15/11/11 13:31:48 INFO [ContainerLauncher #21] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000041. Exit status: 143. Diagnostics: Container [pid=23548,containerID=container_1447234668707_0019_01_000041] is running beyond virtual memory limits. Current usage: 632.1 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000041 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23548 18492 23548 23548 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000041/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000041 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 40 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000041/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000041/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000041/stderr
|- 23986 23567 23986 23548 (python) 24 2 242991104 5140 python -m pyspark.daemon
|- 23567 23548 23548 23548 (java) 884 335 6344994816 151951 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000041/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000041 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 40 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000041/__app__.jar
|- 24000 23986 23986 23548 (python) 95 0 242991104 4405 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:48 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:48 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:48 INFO [ContainerLauncher #18] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000072/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000072/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 71 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:48 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000043 (state: COMPLETE, exit status: 143)
15/11/11 13:31:48 INFO [ContainerLauncher #18] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000043. Exit status: 143. Diagnostics: Container [pid=23595,containerID=container_1447234668707_0019_01_000043] is running beyond virtual memory limits. Current usage: 676.4 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000043 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23993 23983 23983 23595 (python) 91 0 245510144 5003 python -m pyspark.daemon
|- 23983 23606 23983 23595 (python) 28 3 243154944 5158 python -m pyspark.daemon
|- 23606 23595 23595 23595 (java) 957 232 6336589824 162686 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000043/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000043 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 42 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000043/__app__.jar
|- 23595 18492 23595 23595 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000043/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000043 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 42 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000043/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000043/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000043/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:48 INFO [ContainerLauncher #3] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000073/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000073/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 72 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:48 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000037 (state: COMPLETE, exit status: 143)
15/11/11 13:31:48 INFO [ContainerLauncher #3] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:48 INFO [ContainerLauncher #22] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000074/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000074/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 73 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000037. Exit status: 143. Diagnostics: Container [pid=12924,containerID=container_1447234668707_0019_01_000037] is running beyond virtual memory limits. Current usage: 352.5 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000037 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12946 12924 12924 12924 (java) 673 126 6338228224 89934 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000037/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000037 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 36 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000037/__app__.jar
|- 12924 6410 12924 12924 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000037/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000037 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 36 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000037/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000037/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000037/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:48 INFO [ContainerLauncher #22] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:48 INFO [ContainerLauncher #20] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000075/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000075/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 74 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:48 INFO [ContainerLauncher #20] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000040 (state: COMPLETE, exit status: 143)
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000040. Exit status: 143. Diagnostics: Container [pid=45171,containerID=container_1447234668707_0019_01_000040] is running beyond virtual memory limits. Current usage: 380.0 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000040 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 45171 37998 45171 45171 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000040/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000040 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 39 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000040/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000040/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000040/stderr
|- 45199 45171 45171 45171 (java) 424 335 6345609216 96962 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000040/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000040 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 39 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000040/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:48 INFO [ContainerLauncher #21] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000076/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000076/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 75 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000049 (state: COMPLETE, exit status: 143)
15/11/11 13:31:48 INFO [ContainerLauncher #21] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000049. Exit status: 143. Diagnostics: Container [pid=45286,containerID=container_1447234668707_0019_01_000049] is running beyond virtual memory limits. Current usage: 356.7 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000049 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 45314 45286 45286 45286 (java) 438 370 6347743232 91011 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000049/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000049 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 48 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000049/__app__.jar
|- 45286 37998 45286 45286 (bash) 1 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000049/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000049 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 48 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000049/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000049/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000049/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000045 (state: COMPLETE, exit status: 143)
15/11/11 13:31:48 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000045. Exit status: 143. Diagnostics: Container [pid=27037,containerID=container_1447234668707_0019_01_000045] is running beyond virtual memory limits. Current usage: 367.0 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000045 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 27037 8680 27037 27037 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000045/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000045 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 44 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000045/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000045/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000045/stderr
|- 27059 27037 27037 27037 (java) 433 289 6334164992 93636 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000045/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000045 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 44 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000045/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Will request 6 executor containers, each with 1 cores and 5632 MB memory including 512 MB overhead
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000077 for on host bbc7.sics.se
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000078 for on host bbc2.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #0] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:51 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:51 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000079 for on host bbc4.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #1] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:51 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:51 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:51 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000080 for on host bbc6.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #11] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:51 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:51 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000081 for on host bbc6.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:51 INFO [ContainerLauncher #23] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:51 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000082 for on host bbc3.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:51 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:51 INFO [ContainerLauncher #6] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:51 INFO [ContainerLauncher #0] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000077/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000077/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 76 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:51 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:51 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:51 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:51 INFO [ContainerLauncher #5] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:51 INFO [ContainerLauncher #0] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 6 containers from YARN, launching executors on 6 of them.
15/11/11 13:31:51 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:51 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000044 (state: COMPLETE, exit status: 143)
15/11/11 13:31:51 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:51 INFO [ContainerLauncher #11] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000079/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000079/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 78 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:51 INFO [ContainerLauncher #1] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000078/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000078/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 77 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:51 INFO [ContainerLauncher #11] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000044. Exit status: 143. Diagnostics: Container [pid=23454,containerID=container_1447234668707_0019_01_000044] is running beyond virtual memory limits. Current usage: 429.2 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000044 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23454 16725 23454 23454 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000044/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000044 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 43 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000044/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000044/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000044/stderr
|- 23461 23454 23454 23454 (java) 706 99 6340612096 109564 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000044/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000044 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 43 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000044/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
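[Editor's note] The exit code 143 diagnostics above show executors killed by the NodeManager's virtual-memory check, not by an OutOfMemoryError: physical usage is low (e.g. 429.2 MB), but the 5 GB JVM heap (-Xms5120m/-Xmx5120m) plus pyspark.daemon child processes push virtual usage past the ~6 GB vmem ceiling. A common remedy is to relax or disable that check in yarn-site.xml; this is a sketch assuming access to the cluster configuration, using the standard Hadoop 2.x property names:

```xml
<!-- yarn-site.xml: relax the NodeManager's virtual-memory enforcement -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <!-- disable the vmem kill entirely -->
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <!-- or instead raise the allowed vmem-to-pmem ratio (default 2.1) -->
  <value>4</value>
</property>
```

Alternatively, raising spark.yarn.executor.memoryOverhead above the 512 MB reported later in this log grows the container request, and with it the vmem ceiling, without touching NodeManager settings.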
15/11/11 13:31:51 INFO [ContainerLauncher #1] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:51 INFO [ContainerLauncher #23] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000080/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000080/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 79 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:51 INFO [ContainerLauncher #23] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000046 (state: COMPLETE, exit status: 143)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000046. Exit status: 143. Diagnostics: Container [pid=27047,containerID=container_1447234668707_0019_01_000046] is running beyond virtual memory limits. Current usage: 520.3 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000046 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 27555 27550 27550 27047 (python) 74 1 246063104 5124 python -m pyspark.daemon
|- 27047 8680 27047 27047 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000046/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000046 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 45 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000046/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000046/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000046/stderr
|- 27550 27064 27550 27047 (python) 17 2 242929664 5134 python -m pyspark.daemon
|- 27064 27047 27047 27047 (java) 901 557 6351036416 122637 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000046/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000046 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 45 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000046/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:51 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:51 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000042 (state: COMPLETE, exit status: 143)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000042. Exit status: 143. Diagnostics: Container [pid=23587,containerID=container_1447234668707_0019_01_000042] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000042 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23601 23587 23587 23587 (java) 1057 349 6353063936 269988 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000042/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000042 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 41 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000042/__app__.jar
|- 24128 23601 24128 23587 (python) 18 1 242995200 5131 python -m pyspark.daemon
|- 24148 24128 24128 23587 (python) 216 0 245809152 5050 python -m pyspark.daemon
|- 23587 18492 23587 23587 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000042/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000042 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 41 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000042/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000042/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000042/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000038 (state: COMPLETE, exit status: 143)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000038. Exit status: 143. Diagnostics: Container [pid=12923,containerID=container_1447234668707_0019_01_000038] is running beyond virtual memory limits. Current usage: 730.3 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000038 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12944 12923 12923 12923 (java) 1075 318 6346887168 176423 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000038/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000038 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 37 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000038/__app__.jar
|- 13486 13473 13473 12923 (python) 153 1 245735424 5058 python -m pyspark.daemon
|- 12923 6410 12923 12923 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000038/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000038 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 37 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000038/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000038/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000038/stderr
|- 13473 12944 13473 12923 (python) 21 3 243163136 5160 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000039 (state: COMPLETE, exit status: 143)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000039. Exit status: 143. Diagnostics: Container [pid=12932,containerID=container_1447234668707_0019_01_000039] is running beyond virtual memory limits. Current usage: 1.0 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000039 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 12945 12932 12932 12932 (java) 1065 271 6351097856 251980 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000039/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000039 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 38 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000039/__app__.jar
|- 13479 13456 13456 12932 (python) 183 1 245571584 5022 python -m pyspark.daemon
|- 13456 12945 13456 12932 (python) 18 3 242995200 5135 python -m pyspark.daemon
|- 12932 6410 12932 12932 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000039/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000039 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 38 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000039/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000039/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000039/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000051 (state: COMPLETE, exit status: 143)
15/11/11 13:31:51 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000051. Exit status: 143. Diagnostics: Container [pid=45285,containerID=container_1447234668707_0019_01_000051] is running beyond virtual memory limits. Current usage: 420.4 MB of 6 GB physical memory used; 6.0 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000051 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 45307 45285 45285 45285 (java) 435 316 6363607040 107312 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000051/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000051 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 50 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000051/__app__.jar
|- 45285 37998 45285 45285 (bash) 0 0 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000051/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000051 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 50 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000051/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000051/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000051/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:51 INFO [ContainerLauncher #5] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000082/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000082/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 81 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:51 INFO [ContainerLauncher #5] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:51 INFO [ContainerLauncher #6] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000081/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000081/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 80 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:51 INFO [ContainerLauncher #6] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Will request 6 executor containers, each with 1 cores and 5632 MB memory including 512 MB overhead
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Container request (host: Any, capability: <memory:5632, vCores:1>)
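(Annotation: the 5632 MB figure in the request lines above is the executor heap size plus YARN memory overhead. A minimal sketch of the sizing, assuming Spark 1.5's default rule of overhead = max(384 MB, 10% of executor memory) when `spark.yarn.executor.memoryOverhead` is not set explicitly:)

```python
# Sketch of how Spark-on-YARN sizes an executor container request.
# Assumes Spark 1.5 defaults: overhead = max(384 MB, 10% of executor memory).
MEMORY_OVERHEAD_MIN_MB = 384
MEMORY_OVERHEAD_FACTOR = 0.10

def container_request_mb(executor_memory_mb, explicit_overhead_mb=None):
    """Total memory asked of YARN: executor heap plus off-heap overhead."""
    if explicit_overhead_mb is not None:
        overhead = explicit_overhead_mb
    else:
        overhead = max(MEMORY_OVERHEAD_MIN_MB,
                       int(MEMORY_OVERHEAD_FACTOR * executor_memory_mb))
    return executor_memory_mb + overhead

# Matches the log: -Xmx5120m executors requested as <memory:5632>,
# i.e. "5632 MB memory including 512 MB overhead".
print(container_request_mb(5120))  # → 5632
```

(Note the request covers only physical memory; the virtual-memory limit that kills container 53 further down is enforced separately by the NodeManager.)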
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000083 for on host bbc7.sics.se
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:54 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000084 for on host bbc7.sics.se
15/11/11 13:31:54 INFO [ContainerLauncher #4] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:54 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc7.sics.se
15/11/11 13:31:54 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:54 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000085 for on host bbc5.sics.se
15/11/11 13:31:54 INFO [ContainerLauncher #2] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:54 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:54 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:54 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000086 for on host bbc5.sics.se
15/11/11 13:31:54 INFO [ContainerLauncher #12] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:54 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:54 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc5.sics.se
15/11/11 13:31:54 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000087 for on host bbc3.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [ContainerLauncher #7] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000088 for on host bbc3.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [ContainerLauncher #10] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc3.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000089 for on host bbc2.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc2.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #9] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [ContainerLauncher #4] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000083/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000083/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 82 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000090 for on host bbc4.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [ContainerLauncher #4] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:55 INFO [ContainerLauncher #2] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000084/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc7.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000084/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 83 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #8] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000091 for on host bbc4.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [ContainerLauncher #12] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000085/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000085/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 84 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [ContainerLauncher #2] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc7.sics.se:45007
15/11/11 13:31:55 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [ContainerLauncher #12] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:55 INFO [ContainerLauncher #7] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000086/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc5.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000086/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 85 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #15] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000092 for on host bbc6.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [ContainerLauncher #7] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc5.sics.se:45007
15/11/11 13:31:55 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [ContainerLauncher #10] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000087/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000087/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 86 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [ContainerLauncher #13] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc6.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000093 for on host bbc4.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [ContainerLauncher #10] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:55 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #14] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching container container_1447234668707_0019_01_000094 for on host bbc4.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Launching ExecutorRunnable. driverUrl: akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler, executorHostname: bbc4.sics.se
15/11/11 13:31:55 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Received 12 containers from YARN, launching executors on 12 of them.
15/11/11 13:31:55 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #9] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000088/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc3.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000088/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 87 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Starting Executor Container
15/11/11 13:31:55 INFO [ContainerLauncher #16] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [ContainerLauncher #9] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc3.sics.se:45007
15/11/11 13:31:55 INFO [ContainerLauncher #17] (ContainerManagementProtocolProxy.java:<init>(78)) - yarn.client.max-nodemanagers-proxies : 500
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000053 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000053. Exit status: 143. Diagnostics: Container [pid=23675,containerID=container_1447234668707_0019_01_000053] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000053 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23675 16725 23675 23675 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000053/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000053 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 52 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000053/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000053/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000053/stderr
|- 23938 23929 23929 23675 (python) 237 1 246149120 5133 python -m pyspark.daemon
|- 23929 23692 23929 23675 (python) 26 3 243007488 5144 python -m pyspark.daemon
|- 23692 23675 23675 23675 (java) 1065 159 6358634496 270635 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000053/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000053 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 52 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000053/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
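[Editor's note: the exit-143 diagnostics above (and repeated below for other containers) show executors killed by YARN's virtual-memory check, not by physical-memory pressure: 6.5 GB of virtual memory used against a 6 GB cap, while physical usage is only ~1 GB. The vmem overshoot is typical of a 5 GB JVM heap plus forked `python -m pyspark.daemon` workers. A minimal sketch of the usual NodeManager-side remedies, assuming access to yarn-site.xml (the values shown are illustrative, not recommendations):]

```xml
<!-- yarn-site.xml (per NodeManager): relax or disable the virtual-memory check
     that is killing these containers with exit code 143. -->

<!-- Option A: raise the vmem-to-pmem ratio (default 2.1) so the 6 GB
     physical allocation permits more than 6 GB of virtual memory. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>

<!-- Option B: disable the virtual-memory check entirely; the physical-memory
     check (yarn.nodemanager.pmem-check-enabled) still applies. -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```

[Alternatively, on the Spark side, raising `spark.yarn.executor.memoryOverhead` (e.g. `--conf spark.yarn.executor.memoryOverhead=1024` on spark-submit, in MB) requests larger containers, which proportionally raises the vmem cap and leaves headroom for the pyspark.daemon processes.]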
15/11/11 13:31:55 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Setting up ContainerLaunchContext
15/11/11 13:31:55 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [ContainerLauncher #8] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000089/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc2.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000089/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 88 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000054 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Preparing Local resources
15/11/11 13:31:55 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [ContainerLauncher #8] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc2.sics.se:45007
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000054. Exit status: 143. Diagnostics: Container [pid=23670,containerID=container_1447234668707_0019_01_000054] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000054 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 23932 23925 23925 23670 (python) 248 5 246509568 5223 python -m pyspark.daemon
|- 23683 23670 23670 23670 (java) 1078 149 6349447168 279141 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000054/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000054 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 53 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000054/__app__.jar
|- 23925 23683 23925 23670 (python) 24 3 243007488 5142 python -m pyspark.daemon
|- 23670 16725 23670 23670 (bash) 0 0 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000054/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000054 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 53 --hostname bbc7.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000054/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000054/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000054/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [ContainerLauncher #15] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000090/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000090/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 89 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000059 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [ContainerLauncher #15] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000059. Exit status: 143. Diagnostics: Container [pid=21552,containerID=container_1447234668707_0019_01_000059] is running beyond virtual memory limits. Current usage: 1.0 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000059 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 21552 12168 21552 21552 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000059/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000059 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 58 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000059/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000059/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000059/stderr
|- 21990 21984 21984 21552 (python) 190 1 245846016 5061 python -m pyspark.daemon
|- 21589 21552 21552 21552 (java) 1129 306 6350274560 264086 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000059/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000059 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 58 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000059/__app__.jar
|- 21984 21589 21984 21552 (python) 24 4 242995200 5136 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) - Prepared Local resources Map(__spark__.jar -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar" } size: 178609735 timestamp: 1447245068472 type: FILE visibility: PRIVATE, pyspark.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip" } size: 334702 timestamp: 1447245069115 type: FILE visibility: PRIVATE, py4j-0.8.2.1-src.zip -> resource { scheme: "hdfs" host: "bbc1.sics.se" port: 40001 file: "/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip" } size: 37562 timestamp: 1447245069250 type: FILE visibility: PRIVATE)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000060 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [ContainerLauncher #13] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000091/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000091/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 90 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000060. Exit status: 143. Diagnostics: Container [pid=21553,containerID=container_1447234668707_0019_01_000060] is running beyond virtual memory limits. Current usage: 868.0 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000060 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 21987 21587 21987 21553 (python) 19 3 242995200 5136 python -m pyspark.daemon
|- 21998 21987 21987 21553 (python) 182 1 245628928 5034 python -m pyspark.daemon
|- 21553 12168 21553 21553 (bash) 0 1 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000060/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000060 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 59 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000060/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000060/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000060/stderr
|- 21587 21553 21553 21553 (java) 1082 258 6345170944 211720 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000060/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000060 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 59 --hostname bbc5.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000060/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [ContainerLauncher #13] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000048 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000048. Exit status: 143. Diagnostics: Container [pid=45288,containerID=container_1447234668707_0019_01_000048] is running beyond virtual memory limits. Current usage: 545.8 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000048 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 45888 45311 45888 45288 (python) 29 3 242929664 5129 python -m pyspark.daemon
|- 45288 37998 45288 45288 (bash) 0 0 108650496 309 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000048/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000048 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 47 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000048/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000048/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000048/stderr
|- 45930 45888 45888 45288 (python) 43 0 245022720 4900 python -m pyspark.daemon
|- 45311 45288 45288 45288 (java) 610 314 6359093248 129376 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000048/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000048 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 47 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000048/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [ContainerLauncher #14] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000092/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc6.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000092/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 91 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000050 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [ContainerLauncher #17] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000094/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000094/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 93 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [ContainerLauncher #14] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc6.sics.se:45007
15/11/11 13:31:55 INFO [ContainerLauncher #16] (Logging.scala:logInfo(59)) -
===============================================================================
YARN executor launch context:
env:
CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
SPARK_LOG_URL_STDERR -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000093/word2vec/stderr?start=-4096
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1447234668707_0019
SPARK_YARN_CACHE_FILES_FILE_SIZES -> 178609735,334702,37562
SPARK_USER -> word2vec
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE,PRIVATE
SPARK_YARN_MODE -> true
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1447245068472,1447245069115,1447245069250
PYTHONPATH -> {{PWD}}/pyspark.zip<CPS>{{PWD}}/py4j-0.8.2.1-src.zip
SPARK_LOG_URL_STDOUT -> http://bbc4.sics.se:45009/node/containerlogs/container_1447234668707_0019_01_000093/word2vec/stdout?start=-4096
SPARK_YARN_CACHE_FILES -> hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/spark-assembly-1.5.1-hadoop2.4.0.jar#__spark__.jar,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/pyspark.zip#pyspark.zip,hdfs://bbc1.sics.se:40001/user/word2vec/.sparkStaging/application_1447234668707_0019/py4j-0.8.2.1-src.zip#py4j-0.8.2.1-src.zip
command:
{{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 92 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000050. Exit status: 143. Diagnostics: Container [pid=45287,containerID=container_1447234668707_0019_01_000050] is running beyond virtual memory limits. Current usage: 445.1 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000050 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 45936 45889 45889 45287 (python) 0 0 242909184 4247 python -m pyspark.daemon
|- 45287 37998 45287 45287 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000050/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000050 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 49 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000050/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000050/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000050/stderr
|- 45308 45287 45287 45287 (java) 529 373 6349385728 104257 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000050/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000050 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 49 --hostname bbc3.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000050/__app__.jar
|- 45889 45308 45889 45287 (python) 28 3 242909184 5129 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [ContainerLauncher #17] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:55 INFO [ContainerLauncher #16] (ContainerManagementProtocolProxy.java:newProxy(212)) - Opening proxy : bbc4.sics.se:45007
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000047 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000047. Exit status: 143. Diagnostics: Container [pid=27036,containerID=container_1447234668707_0019_01_000047] is running beyond virtual memory limits. Current usage: 483.7 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000047 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 27774 27062 27774 27036 (python) 17 3 242896896 5135 python -m pyspark.daemon
|- 27062 27036 27036 27036 (java) 510 348 6349549568 114019 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000047/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000047 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 46 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000047/__app__.jar
|- 27036 8680 27036 27036 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000047/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000047 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 46 --hostname bbc2.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000047/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000047/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000047/stderr
|- 27781 27774 27774 27036 (python) 4 0 242896896 4351 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000066 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000066. Exit status: 143. Diagnostics: Container [pid=23901,containerID=container_1447234668707_0019_01_000066] is running beyond virtual memory limits. Current usage: 1.1 GB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000066 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 24377 24374 24374 23901 (python) 237 1 245723136 5058 python -m pyspark.daemon
|- 23914 23901 23901 23901 (java) 1097 380 6337060864 271985 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000066/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000066 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 65 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000066/__app__.jar
|- 23901 18492 23901 23901 (bash) 0 1 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000066/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000066 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 65 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000066/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000066/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000066/stderr
|- 24374 23914 24374 23901 (python) 18 2 242987008 5138 python -m pyspark.daemon
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000070 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000070. Exit status: 143. Diagnostics: Container [pid=24033,containerID=container_1447234668707_0019_01_000070] is running beyond virtual memory limits. Current usage: 451.0 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000070 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 24033 18492 24033 24033 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000070/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000070 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 69 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000070/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000070/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000070/stderr
|- 24520 24515 24515 24033 (python) 21 0 243150848 4421 python -m pyspark.daemon
|- 24515 24039 24515 24033 (python) 17 1 243150848 5157 python -m pyspark.daemon
|- 24039 24033 24033 24033 (java) 814 200 6351405056 105563 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000070/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000070 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 69 --hostname bbc4.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000070/__app__.jar
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000055 (state: COMPLETE, exit status: 143)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Container marked as failed: container_1447234668707_0019_01_000055. Exit status: 143. Diagnostics: Container [pid=13179,containerID=container_1447234668707_0019_01_000055] is running beyond virtual memory limits. Current usage: 544.0 MB of 6 GB physical memory used; 6.5 GB of 6 GB virtual memory used. Killing container.
Dump of the process-tree for container_1447234668707_0019_01_000055 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 13801 13791 13791 13179 (python) 52 0 245100544 4869 python -m pyspark.daemon
|- 13791 13185 13791 13179 (python) 18 3 243003392 5138 python -m pyspark.daemon
|- 13185 13179 13179 13179 (java) 869 223 6349328384 128949 /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000055/tmp -Dspark.driver.port=52828 -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000055 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 54 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000055/__app__.jar
|- 13179 6410 13179 13179 (bash) 0 0 108650496 310 /bin/bash -c /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.75.x86_64/jre/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms5120m -Xmx5120m -Djava.io.tmpdir=/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000055/tmp '-Dspark.driver.port=52828' -Dspark.yarn.app.container.log.dir=/home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000055 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://[email protected]:52828/user/CoarseGrainedScheduler --executor-id 54 --hostname bbc6.sics.se --cores 1 --app-id application_1447234668707_0019 --user-class-path file:/tmp/hadoop-word2vec/nm-local-dir/usercache/word2vec/appcache/application_1447234668707_0019/container_1447234668707_0019_01_000055/__app__.jar 1> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000055/stdout 2> /home/word2vec/hop_distro/hadoop-2.4.0/logs/userlogs/application_1447234668707_0019/container_1447234668707_0019_01_000055/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000071 (state: COMPLETE, exit status: 0)
15/11/11 13:31:55 INFO [Reporter] (Logging.scala:logInfo(59)) - Completed container container_1447234668707_0019_01_000072 (state: COMPLETE, exit status: 0)
15/11/11 13:31:56 INFO [sparkYarnAM-akka.actor.default-dispatcher-3] (Logging.scala:logInfo(59)) - Driver terminated or disconnected! Shutting down. 193.10.64.11:52828
15/11/11 13:31:56 INFO [sparkYarnAM-akka.actor.default-dispatcher-3] (Logging.scala:logInfo(59)) - Final app status: SUCCEEDED, exitCode: 0
15/11/11 13:31:56 WARN [sparkYarnAM-akka.actor.default-dispatcher-3] (Slf4jLogger.scala:apply$mcV$sp(71)) - Association with remote system [akka.tcp://[email protected]:52828] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
15/11/11 13:31:56 INFO [Thread-2] (Logging.scala:logInfo(59)) - Unregistering ApplicationMaster with SUCCEEDED
15/11/11 13:31:56 INFO [sparkYarnAM-akka.actor.default-dispatcher-3] (Logging.scala:logInfo(59)) - Driver terminated or disconnected! Shutting down. 193.10.64.11:52828
15/11/11 13:31:56 INFO [Thread-2] (AMRMClientImpl.java:unregisterApplicationMaster(321)) - Waiting for application to be successfully unregistered.
15/11/11 13:31:56 INFO [Thread-2] (Logging.scala:logInfo(59)) - Deleting staging directory .sparkStaging/application_1447234668707_0019
15/11/11 13:31:56 INFO [Thread-2] (Logging.scala:logInfo(59)) - Shutdown hook called
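Note on the repeated failures above (annotation, not part of the original log): each killed container exits with status 143 because YARN's virtual-memory check fires — the diagnostics show ~6.5 GB of 6 GB *virtual* memory used while *physical* usage stays far below the 6 GB limit (e.g. 445 MB). This is a common pattern with PySpark executors, whose JVM plus `pyspark.daemon` child processes reserve large virtual address space. A typical remedy, sketched here for a Hadoop 2.4 `yarn-site.xml` (values are illustrative, not from this cluster), is to relax or disable the vmem check:

```xml
<!-- yarn-site.xml: either disable the strict virtual-memory check entirely... -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

<!-- ...or raise the allowed vmem-to-pmem ratio (default is 2.1) -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

Alternatively, on the Spark side, increasing `spark.yarn.executor.memoryOverhead` gives each container more headroom beyond the JVM heap (`-Xmx5120m` here), which raises the container's memory allocation and, with it, the vmem ceiling.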