Full stderr log of a Spark worker
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/05/28 15:16:22 INFO CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
15/05/28 15:16:23 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/05/28 15:16:23 INFO SecurityManager: Changing view acls to: hdfs
15/05/28 15:16:23 INFO SecurityManager: Changing modify acls to: hdfs
15/05/28 15:16:23 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hdfs); users with modify permissions: Set(hdfs)
15/05/28 15:16:24 INFO Slf4jLogger: Slf4jLogger started
15/05/28 15:16:24 INFO Remoting: Starting remoting
15/05/28 15:16:24 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:33398]
15/05/28 15:16:24 INFO Utils: Successfully started service 'driverPropsFetcher' on port 33398.
15/05/28 15:16:24 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/05/28 15:16:24 INFO SecurityManager: Changing view acls to: hdfs
15/05/28 15:16:24 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/05/28 15:16:24 INFO SecurityManager: Changing modify acls to: hdfs
15/05/28 15:16:24 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hdfs); users with modify permissions: Set(hdfs)
15/05/28 15:16:25 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
15/05/28 15:16:25 INFO Slf4jLogger: Slf4jLogger started
15/05/28 15:16:25 INFO Remoting: Starting remoting
15/05/28 15:16:25 INFO Utils: Successfully started service 'sparkExecutor' on port 51237.
15/05/28 15:16:25 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:51237]
15/05/28 15:16:25 INFO AkkaUtils: Connecting to MapOutputTracker: akka.tcp://[email protected]:60014/user/MapOutputTracker
15/05/28 15:16:25 INFO AkkaUtils: Connecting to BlockManagerMaster: akka.tcp://[email protected]:60014/user/BlockManagerMaster
15/05/28 15:16:25 INFO DiskBlockManager: Created local directory at /tmp/spark-f6889f3a-e199-48bd-a35b-ea58db0023dc/spark-cac96f64-dfe1-45b5-9b76-78131ee9e973/spark-bd96f201-a3a5-4b13-92e4-aada3b0113f5/blockmgr-0347df50-7b6a-420a-afb1-8f1b64bd33d4
15/05/28 15:16:25 INFO MemoryStore: MemoryStore started with capacity 267.3 MB
15/05/28 15:16:25 INFO AkkaUtils: Connecting to OutputCommitCoordinator: akka.tcp://[email protected]:60014/user/OutputCommitCoordinator
15/05/28 15:16:25 INFO CoarseGrainedExecutorBackend: Connecting to driver: akka.tcp://[email protected]:60014/user/CoarseGrainedScheduler
15/05/28 15:16:25 INFO WorkerWatcher: Connecting to worker akka.tcp://[email protected]:42140/user/Worker
15/05/28 15:16:25 INFO WorkerWatcher: Successfully connected to akka.tcp://[email protected]:42140/user/Worker
15/05/28 15:16:25 INFO CoarseGrainedExecutorBackend: Successfully registered with driver
15/05/28 15:16:25 INFO Executor: Starting executor ID 3 on host vagrant-ubuntu-trusty-64.localdomain
15/05/28 15:16:26 INFO NettyBlockTransferService: Server created on 56202
15/05/28 15:16:26 INFO BlockManagerMaster: Trying to register BlockManager
15/05/28 15:16:26 INFO BlockManagerMaster: Registered BlockManager
15/05/28 15:16:26 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://[email protected]:60014/user/HeartbeatReceiver
15/05/28 15:16:26 INFO CoarseGrainedExecutorBackend: Got assigned task 2
15/05/28 15:16:26 INFO Executor: Running task 0.1 in stage 1.0 (TID 2)
15/05/28 15:16:26 INFO Executor: Fetching http://10.0.2.15:35895/files/feature_extraction.py with timestamp 1432826169279
15/05/28 15:16:26 INFO Utils: Fetching http://10.0.2.15:35895/files/feature_extraction.py to /tmp/spark-f6889f3a-e199-48bd-a35b-ea58db0023dc/spark-cac96f64-dfe1-45b5-9b76-78131ee9e973/spark-cc676135-ac37-4258-9b3e-472d27b4af23/fetchFileTemp2727067710467870873.tmp
15/05/28 15:16:26 INFO Utils: Copying /tmp/spark-f6889f3a-e199-48bd-a35b-ea58db0023dc/spark-cac96f64-dfe1-45b5-9b76-78131ee9e973/spark-cc676135-ac37-4258-9b3e-472d27b4af23/9665226111432826169279_cache to /usr/local/spark/work/app-20150528151609-0012/3/./feature_extraction.py
15/05/28 15:16:26 INFO TorrentBroadcast: Started reading broadcast variable 3
15/05/28 15:16:26 INFO MemoryStore: ensureFreeSpace(4190) called with curMem=0, maxMem=280248975
15/05/28 15:16:26 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 4.1 KB, free 267.3 MB)
15/05/28 15:16:26 INFO BlockManagerMaster: Updated info of block broadcast_3_piece0
15/05/28 15:16:26 INFO TorrentBroadcast: Reading broadcast variable 3 took 209 ms
15/05/28 15:16:26 INFO MemoryStore: ensureFreeSpace(5584) called with curMem=4190, maxMem=280248975
15/05/28 15:16:26 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 5.5 KB, free 267.3 MB)
15/05/28 15:16:27 INFO HadoopRDD: Input split: hdfs://192.168.33.10:9000/tmp/test-seq:0+11079925
15/05/28 15:16:27 INFO TorrentBroadcast: Started reading broadcast variable 0
15/05/28 15:16:27 INFO MemoryStore: ensureFreeSpace(36168) called with curMem=9774, maxMem=280248975
15/05/28 15:16:27 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 35.3 KB, free 267.2 MB)
15/05/28 15:16:27 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/05/28 15:16:27 INFO TorrentBroadcast: Reading broadcast variable 0 took 23 ms
15/05/28 15:16:27 INFO MemoryStore: ensureFreeSpace(345788) called with curMem=45942, maxMem=280248975
15/05/28 15:16:27 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 337.7 KB, free 266.9 MB)
15/05/28 15:16:27 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
15/05/28 15:16:27 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
15/05/28 15:16:27 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
15/05/28 15:16:27 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
15/05/28 15:16:27 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
15/05/28 15:16:29 INFO CodecPool: Got brand-new decompressor [.deflate]
15/05/28 15:16:30 ERROR Executor: Exception in task 0.1 in stage 1.0 (TID 2)
org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:172)
    at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:176)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:94)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:108)
    ... 10 more
15/05/28 15:16:30 INFO CoarseGrainedExecutorBackend: Got assigned task 3
15/05/28 15:16:30 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
15/05/28 15:16:30 WARN PythonWorkerFactory: Failed to open socket to Python daemon:
java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at java.net.Socket.<init>(Socket.java:425)
    at java.net.Socket.<init>(Socket.java:241)
    at org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:75)
    at org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:90)
    at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:89)
    at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:62)
    at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:105)
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
15/05/28 15:16:30 WARN PythonWorkerFactory: Assuming that daemon unexpectedly quit, attempting to restart
15/05/28 15:16:31 INFO HadoopRDD: Input split: hdfs://192.168.33.10:9000/tmp/test-seq:11079925+11079925
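
Note: the driver program itself is not part of this gist. As a rough, hypothetical sketch only, a PySpark job of the shape the log suggests (feature_extraction.py shipped to executors, mapped over a sequence file on HDFS) would look something like the code below; the extract_features body and the app name are placeholders, not taken from the source. The java.io.EOFException inside PythonRDD.read, followed by "Connection refused" when reconnecting to the Python daemon, means the executor's Python worker process itself died mid-task (the log's own "Python worker exited unexpectedly (crashed)"), so the JVM hit end-of-stream on its socket.

# Hypothetical sketch of the submitting job, not from the gist.
# Run with: spark-submit --py-files feature_extraction.py driver.py
from pyspark import SparkConf, SparkContext


def extract_features(value):
    # Placeholder for whatever feature_extraction.py actually computes.
    return len(value)


if __name__ == "__main__":
    sc = SparkContext(conf=SparkConf().setAppName("feature-extraction"))
    # HDFS path copied from the HadoopRDD "Input split" lines above;
    # the ~11 MB splits in the log correspond to this file's partitions.
    pairs = sc.sequenceFile("hdfs://192.168.33.10:9000/tmp/test-seq")
    print(pairs.map(lambda kv: extract_features(kv[1])).count())
    sc.stop()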