Created September 6, 2016 19:25
Kubernetes + Spark Exception: java.net.UnknownHostException: metadata
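The files below are the Spark master log, the logs from the two Spark workers, and the Zeppelin log from the same Kubernetes cluster. The exception in the title is raised by the Google Cloud Storage connector (gcs-connector-latest-hadoop2.jar, visible on the executor classpath) while a pyspark paragraph reads a gs:// path. As a rough reproduction of the call path in the stack traces (textFile -> collect -> GoogleHadoopFileSystem initialization), something like the following should behave the same way; the master URL matches the logs, while the bucket and file are placeholders:

from pyspark import SparkConf, SparkContext

# Hypothetical reproduction sketch: assumes pyspark 1.5.x and the GCS connector jar
# are on the classpath, as in the executor launch commands in the worker logs below.
conf = SparkConf().setMaster("spark://spark-master:7077").setAppName("gcs-repro")
sc = SparkContext(conf=conf)

# Reading any gs:// path forces GoogleHadoopFileSystem to initialize and fetch
# credentials, which is where java.net.UnknownHostException: metadata surfaces.
lines = sc.textFile("gs://some-bucket/some-file.txt")  # placeholder bucket/path
print(lines.collect())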
Spark master log:
16/09/06 19:00:49 INFO Master: Registered signal handlers for [TERM, HUP, INT]
16/09/06 19:00:50 INFO SecurityManager: Changing view acls to: root
16/09/06 19:00:50 INFO SecurityManager: Changing modify acls to: root
16/09/06 19:00:50 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/09/06 19:00:51 INFO Slf4jLogger: Slf4jLogger started
16/09/06 19:00:51 INFO Remoting: Starting remoting
16/09/06 19:00:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@spark-master:7077]
16/09/06 19:00:51 INFO Utils: Successfully started service 'sparkMaster' on port 7077.
16/09/06 19:00:51 INFO Master: Starting Spark master at spark://spark-master:7077
16/09/06 19:00:51 INFO Master: Running Spark version 1.5.2
16/09/06 19:00:59 INFO Utils: Successfully started service 'MasterUI' on port 8080.
16/09/06 19:00:59 INFO MasterWebUI: Started MasterWebUI at http://172.17.0.3:8080
16/09/06 19:00:59 INFO Utils: Successfully started service on port 6066.
16/09/06 19:00:59 INFO StandaloneRestServer: Started REST server for submitting applications on port 6066
16/09/06 19:00:59 INFO Master: I have been elected leader! New state: ALIVE
16/09/06 19:02:28 INFO Master: Registering worker 172.17.0.4:45873 with 1 cores, 6.8 GB RAM
16/09/06 19:02:28 INFO Master: Registering worker 172.17.0.5:36552 with 1 cores, 6.8 GB RAM
16/09/06 19:04:56 INFO Master: Registering app PySparkShell
16/09/06 19:04:56 INFO Master: Registered app PySparkShell with ID app-20160906190456-0000
16/09/06 19:04:56 INFO Master: Launching executor app-20160906190456-0000/0 on worker worker-20160906190219-172.17.0.4-45873
16/09/06 19:04:56 INFO Master: Launching executor app-20160906190456-0000/1 on worker worker-20160906190219-172.17.0.5-36552
16/09/06 19:05:20 INFO Master: Received unregister request from application app-20160906190456-0000
16/09/06 19:05:20 INFO Master: Removing app app-20160906190456-0000
16/09/06 19:05:20 INFO Master: 172.17.0.6:40004 got disassociated, removing it.
16/09/06 19:05:21 WARN Master: Got status update for unknown executor app-20160906190456-0000/0
16/09/06 19:05:21 WARN Master: Got status update for unknown executor app-20160906190456-0000/1
16/09/06 19:08:04 INFO Master: Registering app Zeppelin
16/09/06 19:08:04 INFO Master: Registered app Zeppelin with ID app-20160906190804-0001
16/09/06 19:08:04 INFO Master: Launching executor app-20160906190804-0001/0 on worker worker-20160906190219-172.17.0.4-45873
16/09/06 19:08:04 INFO Master: Launching executor app-20160906190804-0001/1 on worker worker-20160906190219-172.17.0.5-36552
Spark worker log (172.17.0.4):
10.0.0.141 spark-master.spark-cluster.svc.cluster.local
16/09/06 19:02:15 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
16/09/06 19:02:17 INFO SecurityManager: Changing view acls to: root
16/09/06 19:02:17 INFO SecurityManager: Changing modify acls to: root
16/09/06 19:02:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/09/06 19:02:18 INFO Slf4jLogger: Slf4jLogger started
16/09/06 19:02:18 INFO Remoting: Starting remoting
16/09/06 19:02:19 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:45873]
16/09/06 19:02:19 INFO Utils: Successfully started service 'sparkWorker' on port 45873.
16/09/06 19:02:19 INFO Worker: Starting Spark worker 172.17.0.4:45873 with 1 cores, 6.8 GB RAM
16/09/06 19:02:19 INFO Worker: Running Spark version 1.5.2
16/09/06 19:02:19 INFO Worker: Spark home: /opt/spark
16/09/06 19:02:27 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
16/09/06 19:02:27 INFO WorkerWebUI: Started WorkerWebUI at http://172.17.0.4:8081
16/09/06 19:02:27 INFO Worker: Connecting to master spark-master:7077...
16/09/06 19:02:28 INFO Worker: Successfully registered with master spark://spark-master:7077
16/09/06 19:04:56 INFO Worker: Asked to launch executor app-20160906190456-0000/0 for PySparkShell
16/09/06 19:04:56 INFO SecurityManager: Changing view acls to: root
16/09/06 19:04:56 INFO SecurityManager: Changing modify acls to: root
16/09/06 19:04:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/09/06 19:04:56 INFO ExecutorRunner: Launch command: "/usr/lib/jvm/java-8-openjdk-amd64/bin/java" "-cp" "/opt/spark/lib/gcs-connector-latest-hadoop2.jar:/opt/spark/conf/:/opt/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar:/opt/spark/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark/lib/datanucleus-core-3.2.10.jar:/opt/spark/lib/datanucleus-rdbms-3.2.9.jar" "-Xms1024M" "-Xmx1024M" "-Dspark.driver.port=40004" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "akka.tcp://[email protected]:40004/user/CoarseGrainedScheduler" "--executor-id" "0" "--hostname" "172.17.0.4" "--cores" "1" "--app-id" "app-20160906190456-0000" "--worker-url" "akka.tcp://[email protected]:45873/user/Worker"
16/09/06 19:05:20 INFO Worker: Asked to kill executor app-20160906190456-0000/0
16/09/06 19:05:20 INFO ExecutorRunner: Runner thread for executor app-20160906190456-0000/0 interrupted
16/09/06 19:05:20 INFO ExecutorRunner: Killing process!
16/09/06 19:05:20 ERROR FileAppender: Error writing stream to file /opt/spark/work/app-20160906190456-0000/0/stderr
java.io.IOException: Stream closed
at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:283)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.spark.util.logging.FileAppender.appendStreamToFile(FileAppender.scala:70)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply$mcV$sp(FileAppender.scala:39)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699)
at org.apache.spark.util.logging.FileAppender$$anon$1.run(FileAppender.scala:38)
16/09/06 19:05:21 INFO Worker: Executor app-20160906190456-0000/0 finished with state KILLED exitStatus 143
16/09/06 19:05:21 INFO Worker: Cleaning up local directories for application app-20160906190456-0000
16/09/06 19:05:21 INFO ExternalShuffleBlockResolver: Application app-20160906190456-0000 removed, cleanupLocalDirs = true
16/09/06 19:08:04 INFO Worker: Asked to launch executor app-20160906190804-0001/0 for Zeppelin
16/09/06 19:08:04 INFO SecurityManager: Changing view acls to: root
16/09/06 19:08:04 INFO SecurityManager: Changing modify acls to: root
16/09/06 19:08:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/09/06 19:08:04 INFO ExecutorRunner: Launch command: "/usr/lib/jvm/java-8-openjdk-amd64/bin/java" "-cp" "/opt/spark/lib/gcs-connector-latest-hadoop2.jar:/opt/spark/conf/:/opt/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar:/opt/spark/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark/lib/datanucleus-core-3.2.10.jar:/opt/spark/lib/datanucleus-rdbms-3.2.9.jar" "-Xms512M" "-Xmx512M" "-Dspark.driver.port=36891" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "akka.tcp://[email protected]:36891/user/CoarseGrainedScheduler" "--executor-id" "0" "--hostname" "172.17.0.4" "--cores" "1" "--app-id" "app-20160906190804-0001" "--worker-url" "akka.tcp://[email protected]:45873/user/Worker"
Spark worker log (172.17.0.5):
10.0.0.141 spark-master.spark-cluster.svc.cluster.local
16/09/06 19:02:16 INFO Worker: Registered signal handlers for [TERM, HUP, INT]
16/09/06 19:02:17 INFO SecurityManager: Changing view acls to: root
16/09/06 19:02:17 INFO SecurityManager: Changing modify acls to: root
16/09/06 19:02:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/09/06 19:02:18 INFO Slf4jLogger: Slf4jLogger started
16/09/06 19:02:18 INFO Remoting: Starting remoting
16/09/06 19:02:19 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:36552]
16/09/06 19:02:19 INFO Utils: Successfully started service 'sparkWorker' on port 36552.
16/09/06 19:02:19 INFO Worker: Starting Spark worker 172.17.0.5:36552 with 1 cores, 6.8 GB RAM
16/09/06 19:02:19 INFO Worker: Running Spark version 1.5.2
16/09/06 19:02:19 INFO Worker: Spark home: /opt/spark
16/09/06 19:02:27 INFO Utils: Successfully started service 'WorkerUI' on port 8081.
16/09/06 19:02:27 INFO WorkerWebUI: Started WorkerWebUI at http://172.17.0.5:8081
16/09/06 19:02:27 INFO Worker: Connecting to master spark-master:7077...
16/09/06 19:02:28 INFO Worker: Successfully registered with master spark://spark-master:7077
16/09/06 19:04:56 INFO Worker: Asked to launch executor app-20160906190456-0000/1 for PySparkShell
16/09/06 19:04:56 INFO SecurityManager: Changing view acls to: root
16/09/06 19:04:56 INFO SecurityManager: Changing modify acls to: root
16/09/06 19:04:56 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/09/06 19:04:56 INFO ExecutorRunner: Launch command: "/usr/lib/jvm/java-8-openjdk-amd64/bin/java" "-cp" "/opt/spark/lib/gcs-connector-latest-hadoop2.jar:/opt/spark/conf/:/opt/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar:/opt/spark/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark/lib/datanucleus-core-3.2.10.jar:/opt/spark/lib/datanucleus-rdbms-3.2.9.jar" "-Xms1024M" "-Xmx1024M" "-Dspark.driver.port=40004" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "akka.tcp://[email protected]:40004/user/CoarseGrainedScheduler" "--executor-id" "1" "--hostname" "172.17.0.5" "--cores" "1" "--app-id" "app-20160906190456-0000" "--worker-url" "akka.tcp://[email protected]:36552/user/Worker"
16/09/06 19:05:20 INFO Worker: Asked to kill executor app-20160906190456-0000/1
16/09/06 19:05:20 INFO ExecutorRunner: Runner thread for executor app-20160906190456-0000/1 interrupted
16/09/06 19:05:20 INFO ExecutorRunner: Killing process!
16/09/06 19:05:20 ERROR FileAppender: Error writing stream to file /opt/spark/work/app-20160906190456-0000/1/stderr
java.io.IOException: Stream closed
at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:283)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.spark.util.logging.FileAppender.appendStreamToFile(FileAppender.scala:70)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply$mcV$sp(FileAppender.scala:39)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
at org.apache.spark.util.logging.FileAppender$$anon$1$$anonfun$run$1.apply(FileAppender.scala:39)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1699)
at org.apache.spark.util.logging.FileAppender$$anon$1.run(FileAppender.scala:38)
16/09/06 19:05:21 INFO Worker: Executor app-20160906190456-0000/1 finished with state KILLED exitStatus 143
16/09/06 19:05:21 INFO Worker: Cleaning up local directories for application app-20160906190456-0000
16/09/06 19:05:21 INFO ExternalShuffleBlockResolver: Application app-20160906190456-0000 removed, cleanupLocalDirs = true
16/09/06 19:08:04 INFO Worker: Asked to launch executor app-20160906190804-0001/1 for Zeppelin
16/09/06 19:08:04 INFO SecurityManager: Changing view acls to: root
16/09/06 19:08:04 INFO SecurityManager: Changing modify acls to: root
16/09/06 19:08:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/09/06 19:08:04 INFO ExecutorRunner: Launch command: "/usr/lib/jvm/java-8-openjdk-amd64/bin/java" "-cp" "/opt/spark/lib/gcs-connector-latest-hadoop2.jar:/opt/spark/conf/:/opt/spark/lib/spark-assembly-1.5.2-hadoop2.6.0.jar:/opt/spark/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark/lib/datanucleus-core-3.2.10.jar:/opt/spark/lib/datanucleus-rdbms-3.2.9.jar" "-Xms512M" "-Xmx512M" "-Dspark.driver.port=36891" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "akka.tcp://[email protected]:36891/user/CoarseGrainedScheduler" "--executor-id" "1" "--hostname" "172.17.0.5" "--cores" "1" "--app-id" "app-20160906190804-0001" "--worker-url" "akka.tcp://[email protected]:36552/user/Worker"
Zeppelin log:
=== Launching Zeppelin under Docker ===
Log dir doesn't exist, create /opt/zeppelin/logs
Pid dir doesn't exist, create /opt/zeppelin/run
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/zeppelin-0.5.6-incubating-bin-all/lib/zeppelin-interpreter-0.5.6-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/zeppelin-0.5.6-incubating-bin-all/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
WARN [2016-09-06 19:03:14,276] ({main} ZeppelinConfiguration.java[create]:92) - Failed to load configuration, proceeding with a default
INFO [2016-09-06 19:03:14,701] ({main} ZeppelinServer.java[setupWebAppContext]:248) - ZeppelinServer Webapp path: /opt/zeppelin/webapps
INFO [2016-09-06 19:03:14,713] ({main} ZeppelinServer.java[main]:108) - Starting zeppelin server
INFO [2016-09-06 19:03:14,715] ({main} Server.java[doStart]:272) - jetty-8.1.14.v20131031
INFO [2016-09-06 19:03:14,939] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/sh
INFO [2016-09-06 19:03:14,974] ({main} InterpreterFactory.java[init]:132) - Interpreter sh.sh found. class=org.apache.zeppelin.shell.ShellInterpreter
INFO [2016-09-06 19:03:14,982] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/ignite
INFO [2016-09-06 19:03:15,063] ({main} InterpreterFactory.java[init]:132) - Interpreter ignite.ignite found. class=org.apache.zeppelin.ignite.IgniteInterpreter
INFO [2016-09-06 19:03:15,064] ({main} InterpreterFactory.java[init]:132) - Interpreter ignite.ignitesql found. class=org.apache.zeppelin.ignite.IgniteSqlInterpreter
INFO [2016-09-06 19:03:15,066] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/psql
INFO [2016-09-06 19:03:15,085] ({main} InterpreterFactory.java[init]:132) - Interpreter psql.sql found. class=org.apache.zeppelin.postgresql.PostgreSqlInterpreter
INFO [2016-09-06 19:03:15,086] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/hive
INFO [2016-09-06 19:03:15,215] ({main} InterpreterFactory.java[init]:132) - Interpreter hive.hql found. class=org.apache.zeppelin.hive.HiveInterpreter
INFO [2016-09-06 19:03:15,220] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/kylin
INFO [2016-09-06 19:03:15,236] ({main} InterpreterFactory.java[init]:132) - Interpreter kylin.kylin found. class=org.apache.zeppelin.kylin.KylinInterpreter
INFO [2016-09-06 19:03:15,236] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/spark
INFO [2016-09-06 19:03:15,325] ({main} InterpreterFactory.java[init]:132) - Interpreter spark.spark found. class=org.apache.zeppelin.spark.SparkInterpreter
INFO [2016-09-06 19:03:15,327] ({main} InterpreterFactory.java[init]:132) - Interpreter spark.pyspark found. class=org.apache.zeppelin.spark.PySparkInterpreter
INFO [2016-09-06 19:03:15,327] ({main} InterpreterFactory.java[init]:132) - Interpreter spark.sql found. class=org.apache.zeppelin.spark.SparkSqlInterpreter
INFO [2016-09-06 19:03:15,331] ({main} InterpreterFactory.java[init]:132) - Interpreter spark.dep found. class=org.apache.zeppelin.spark.DepInterpreter
INFO [2016-09-06 19:03:15,333] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/cassandra
INFO [2016-09-06 19:03:15,379] ({main} CassandraInterpreter.java[<clinit>]:154) - Bootstrapping Cassandra Interpreter
INFO [2016-09-06 19:03:15,379] ({main} InterpreterFactory.java[init]:132) - Interpreter cassandra.cassandra found. class=org.apache.zeppelin.cassandra.CassandraInterpreter
INFO [2016-09-06 19:03:15,380] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/phoenix
INFO [2016-09-06 19:03:15,527] ({main} InterpreterFactory.java[init]:132) - Interpreter phoenix.sql found. class=org.apache.zeppelin.phoenix.PhoenixInterpreter
INFO [2016-09-06 19:03:15,529] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/flink
INFO [2016-09-06 19:03:15,675] ({main} InterpreterFactory.java[init]:132) - Interpreter flink.flink found. class=org.apache.zeppelin.flink.FlinkInterpreter
INFO [2016-09-06 19:03:15,677] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/tajo
INFO [2016-09-06 19:03:15,715] ({main} InterpreterFactory.java[init]:132) - Interpreter tajo.tql found. class=org.apache.zeppelin.tajo.TajoInterpreter
INFO [2016-09-06 19:03:15,718] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/lens
INFO [2016-09-06 19:03:15,848] ({main} InterpreterFactory.java[init]:132) - Interpreter lens.lens found. class=org.apache.zeppelin.lens.LensInterpreter
INFO [2016-09-06 19:03:15,849] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/angular
INFO [2016-09-06 19:03:15,856] ({main} InterpreterFactory.java[init]:132) - Interpreter angular.angular found. class=org.apache.zeppelin.angular.AngularInterpreter
INFO [2016-09-06 19:03:15,858] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/elasticsearch
INFO [2016-09-06 19:03:15,924] ({main} InterpreterFactory.java[init]:132) - Interpreter elasticsearch.elasticsearch found. class=org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter
INFO [2016-09-06 19:03:15,924] ({main} InterpreterFactory.java[init]:115) - Reading /opt/zeppelin/interpreter/md
INFO [2016-09-06 19:03:15,929] ({main} InterpreterFactory.java[init]:132) - Interpreter md.md found. class=org.apache.zeppelin.markdown.Markdown
INFO [2016-09-06 19:03:16,017] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group tajo : id=2BW99591N, name=tajo
INFO [2016-09-06 19:03:16,018] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.tajo.TajoInterpreter
INFO [2016-09-06 19:03:16,018] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group flink : id=2BU3UW1SN, name=flink
INFO [2016-09-06 19:03:16,018] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.flink.FlinkInterpreter
INFO [2016-09-06 19:03:16,018] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group spark : id=2BVWB9NGS, name=spark
INFO [2016-09-06 19:03:16,018] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.spark.SparkInterpreter
INFO [2016-09-06 19:03:16,018] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.spark.PySparkInterpreter
INFO [2016-09-06 19:03:16,018] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.spark.SparkSqlInterpreter
INFO [2016-09-06 19:03:16,018] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.spark.DepInterpreter
INFO [2016-09-06 19:03:16,019] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group phoenix : id=2BV7S457B, name=phoenix
INFO [2016-09-06 19:03:16,019] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.phoenix.PhoenixInterpreter
INFO [2016-09-06 19:03:16,019] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group sh : id=2BU7732UM, name=sh
INFO [2016-09-06 19:03:16,019] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.shell.ShellInterpreter
INFO [2016-09-06 19:03:16,024] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group elasticsearch : id=2BX9PHEFA, name=elasticsearch
INFO [2016-09-06 19:03:16,024] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter
INFO [2016-09-06 19:03:16,024] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group hive : id=2BV2HTRR7, name=hive
INFO [2016-09-06 19:03:16,024] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.hive.HiveInterpreter
INFO [2016-09-06 19:03:16,024] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group lens : id=2BW44M2ZY, name=lens
INFO [2016-09-06 19:03:16,025] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.lens.LensInterpreter
INFO [2016-09-06 19:03:16,026] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group kylin : id=2BUZDZ4KK, name=kylin
INFO [2016-09-06 19:03:16,027] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.kylin.KylinInterpreter
INFO [2016-09-06 19:03:16,027] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group ignite : id=2BU6GC4TH, name=ignite
INFO [2016-09-06 19:03:16,027] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.ignite.IgniteInterpreter
INFO [2016-09-06 19:03:16,027] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.ignite.IgniteSqlInterpreter
INFO [2016-09-06 19:03:16,027] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group cassandra : id=2BWXQ34DX, name=cassandra
INFO [2016-09-06 19:03:16,028] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.cassandra.CassandraInterpreter
INFO [2016-09-06 19:03:16,029] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group md : id=2BXWWXE81, name=md
INFO [2016-09-06 19:03:16,029] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.markdown.Markdown
INFO [2016-09-06 19:03:16,029] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group psql : id=2BUYXR53X, name=psql
INFO [2016-09-06 19:03:16,029] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.postgresql.PostgreSqlInterpreter
INFO [2016-09-06 19:03:16,029] ({main} InterpreterFactory.java[init]:190) - Interpreter setting group angular : id=2BUY25YAH, name=angular
INFO [2016-09-06 19:03:16,030] ({main} InterpreterFactory.java[init]:193) - className = org.apache.zeppelin.angular.AngularInterpreter
INFO [2016-09-06 19:03:16,062] ({main} VfsLog.java[info]:138) - Using "/tmp/vfs_cache" as temporary files store.
INFO [2016-09-06 19:03:16,396] ({main} StdSchedulerFactory.java[instantiate]:1184) - Using default implementation for ThreadExecutor
INFO [2016-09-06 19:03:16,403] ({main} SimpleThreadPool.java[initialize]:268) - Job execution threads will use class loader of thread: main
INFO [2016-09-06 19:03:16,439] ({main} SchedulerSignalerImpl.java[<init>]:61) - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
INFO [2016-09-06 19:03:16,439] ({main} QuartzScheduler.java[<init>]:240) - Quartz Scheduler v.2.2.1 created.
INFO [2016-09-06 19:03:16,440] ({main} RAMJobStore.java[initialize]:155) - RAMJobStore initialized.
INFO [2016-09-06 19:03:16,442] ({main} QuartzScheduler.java[initialize]:305) - Scheduler meta-data: Quartz Scheduler (v2.2.1) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
INFO [2016-09-06 19:03:16,443] ({main} StdSchedulerFactory.java[instantiate]:1339) - Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
INFO [2016-09-06 19:03:16,443] ({main} StdSchedulerFactory.java[instantiate]:1343) - Quartz scheduler version: 2.2.1
INFO [2016-09-06 19:03:16,444] ({main} QuartzScheduler.java[start]:575) - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
INFO [2016-09-06 19:03:16,546] ({main} Notebook.java[<init>]:107) - Notebook indexing started...
INFO [2016-09-06 19:03:16,752] ({main} LuceneSearch.java[addIndexDocs]:285) - Indexing 1 notebooks took 201ms
INFO [2016-09-06 19:03:16,752] ({main} Notebook.java[<init>]:109) - Notebook indexing finished: 1 indexed in 0s
INFO [2016-09-06 19:03:16,911] ({main} ServerImpl.java[initDestination]:94) - Setting the server's publish address to be /
INFO [2016-09-06 19:03:17,009] ({main} WebInfConfiguration.java[unpack]:478) - Extract jar:file:/opt/zeppelin-0.5.6-incubating-bin-all/zeppelin-web-0.5.6-incubating.war!/ to /opt/zeppelin-0.5.6-incubating-bin-all/webapps/webapp
INFO [2016-09-06 19:03:17,198] ({main} StandardDescriptorProcessor.java[visitServlet]:284) - NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
Sep 06, 2016 7:03:17 PM com.sun.jersey.api.core.PackagesResourceConfig init
INFO: Scanning for root resource and provider classes in the packages:
org.apache.zeppelin.rest
Sep 06, 2016 7:03:17 PM com.sun.jersey.api.core.ScanningResourceConfig logClasses
INFO: Root resource classes found:
class org.apache.zeppelin.rest.NotebookRestApi
class org.apache.zeppelin.rest.ZeppelinRestApi
class org.apache.zeppelin.rest.InterpreterRestApi
Sep 06, 2016 7:03:17 PM com.sun.jersey.api.core.ScanningResourceConfig init
INFO: No provider classes found.
Sep 06, 2016 7:03:17 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.13 06/29/2012 05:14 PM'
Sep 06, 2016 7:03:18 PM com.sun.jersey.spi.inject.Errors processErrorMessages
WARNING: The following warnings have been detected with resource and/or provider classes:
WARNING: A HTTP GET method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.InterpreterRestApi.listInterpreter(java.lang.String), should not consume any entity.
WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.NotebookRestApi.createNote(java.lang.String) throws java.io.IOException, with URI template, "/", is treated as a resource method
WARNING: A sub-resource method, public javax.ws.rs.core.Response org.apache.zeppelin.rest.NotebookRestApi.getNotebookList() throws java.io.IOException, with URI template, "/", is treated as a resource method
INFO [2016-09-06 19:03:18,105] ({main} AbstractConnector.java[doStart]:338) - Started [email protected]:8080
INFO [2016-09-06 19:03:18,106] ({main} ZeppelinServer.java[main]:115) - Done, zeppelin server started
INFO [2016-09-06 19:07:14,391] ({qtp143110009-32} NotebookServer.java[onOpen]:88) - New connection from 127.0.0.1 : 54780
INFO [2016-09-06 19:07:47,651] ({pool-1-thread-2} SchedulerFactory.java[jobStarted]:129) - Job paragraph_1473188863858_394785041 started by scheduler remoteinterpreter_622442934
INFO [2016-09-06 19:07:47,655] ({pool-1-thread-2} Paragraph.java[jobRun]:190) - run paragraph 20160906-190743_1786211851 using pyspark org.apache.zeppelin.interpreter.LazyOpenInterpreter@722f55df
INFO [2016-09-06 19:07:47,666] ({pool-1-thread-2} RemoteInterpreterProcess.java[reference]:108) - Run interpreter process [/opt/zeppelin/bin/interpreter.sh, -d, /opt/zeppelin/interpreter/spark, -p, 41875]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/zeppelin-0.5.6-incubating-bin-all/interpreter/spark/zeppelin-spark-0.5.6-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/spark-1.5.2-bin-hadoop2.6/lib/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
INFO [2016-09-06 19:07:48,761] ({pool-1-thread-2} RemoteInterpreter.java[init]:137) - Create remote interpreter org.apache.zeppelin.spark.SparkInterpreter
INFO [2016-09-06 19:07:48,806] ({pool-1-thread-2} RemoteInterpreter.java[init]:137) - Create remote interpreter org.apache.zeppelin.spark.PySparkInterpreter
INFO [2016-09-06 19:07:48,816] ({pool-1-thread-2} RemoteInterpreter.java[init]:137) - Create remote interpreter org.apache.zeppelin.spark.SparkSqlInterpreter
INFO [2016-09-06 19:07:48,817] ({pool-1-thread-2} RemoteInterpreter.java[init]:137) - Create remote interpreter org.apache.zeppelin.spark.DepInterpreter
------ Create new SparkContext spark://spark-master:7077 -------
16/09/06 19:08:09 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar."
16/09/06 19:08:09 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar."
16/09/06 19:08:09 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar."
16/09/06 19:08:20 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/09/06 19:08:20 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
16/09/06 19:08:22 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/09/06 19:08:22 WARN General: Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/lib/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar."
16/09/06 19:08:22 WARN General: Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/lib/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar."
16/09/06 19:08:22 WARN General: Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/lib/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-1.5.2-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar."
16/09/06 19:08:26 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
16/09/06 19:08:27 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
16/09/06 19:08:34 WARN HttpTransport: exception thrown while executing request
java.net.UnknownHostException: metadata
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:142)
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:189)
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:71)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1571)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
at org.apache.spark.rdd.RDD.collect(RDD.scala:908)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
16/09/06 19:08:34 WARN HttpTransport: exception thrown while executing request
java.net.UnknownHostException: metadata
16/09/06 19:08:36 WARN HttpTransport: exception thrown while executing request
java.net.UnknownHostException: metadata
16/09/06 19:08:37 WARN HttpTransport: exception thrown while executing request
java.net.UnknownHostException: metadata
16/09/06 19:08:39 WARN HttpTransport: exception thrown while executing request
java.net.UnknownHostException: metadata
16/09/06 19:08:42 WARN HttpTransport: exception thrown while executing request
java.net.UnknownHostException: metadata
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93) | |
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972) | |
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:142) | |
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489) | |
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:189) | |
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:71) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1571) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746) | |
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) | |
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) | |
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) | |
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) | |
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) | |
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) | |
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256) | |
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) | |
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) | |
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921) | |
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) | |
at org.apache.spark.rdd.RDD.withScope(RDD.scala:310) | |
at org.apache.spark.rdd.RDD.collect(RDD.scala:908) | |
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405) | |
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:497) | |
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) | |
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) | |
at py4j.Gateway.invoke(Gateway.java:259) | |
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) | |
at py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at py4j.GatewayConnection.run(GatewayConnection.java:209) | |
at java.lang.Thread.run(Thread.java:745) | |
16/09/06 19:08:46 WARN HttpTransport: exception thrown while executing request | |
java.net.UnknownHostException: metadata | |
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) | |
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) | |
at java.net.Socket.connect(Socket.java:589) | |
at sun.net.NetworkClient.doConnect(NetworkClient.java:175) | |
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) | |
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) | |
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211) | |
at sun.net.www.http.HttpClient.New(HttpClient.java:308) | |
at sun.net.www.http.HttpClient.New(HttpClient.java:326) | |
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169) | |
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105) | |
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999) | |
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933) | |
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93) | |
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972) | |
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:142) | |
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489) | |
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:189) | |
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:71) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1571) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746) | |
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) | |
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) | |
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) | |
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) | |
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) | |
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) | |
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256) | |
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) | |
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) | |
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921) | |
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) | |
at org.apache.spark.rdd.RDD.withScope(RDD.scala:310) | |
at org.apache.spark.rdd.RDD.collect(RDD.scala:908) | |
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405) | |
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:497) | |
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) | |
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) | |
at py4j.Gateway.invoke(Gateway.java:259) | |
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) | |
at py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at py4j.GatewayConnection.run(GatewayConnection.java:209) | |
at java.lang.Thread.run(Thread.java:745) | |
16/09/06 19:08:56 WARN HttpTransport: exception thrown while executing request | |
java.net.UnknownHostException: metadata | |
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) | |
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) | |
at java.net.Socket.connect(Socket.java:589) | |
at sun.net.NetworkClient.doConnect(NetworkClient.java:175) | |
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) | |
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) | |
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211) | |
at sun.net.www.http.HttpClient.New(HttpClient.java:308) | |
at sun.net.www.http.HttpClient.New(HttpClient.java:326) | |
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169) | |
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105) | |
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999) | |
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933) | |
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93) | |
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972) | |
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:142) | |
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489) | |
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:189) | |
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:71) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1571) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746) | |
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) | |
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) | |
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) | |
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) | |
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) | |
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) | |
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256) | |
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) | |
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) | |
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921) | |
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) | |
at org.apache.spark.rdd.RDD.withScope(RDD.scala:310) | |
at org.apache.spark.rdd.RDD.collect(RDD.scala:908) | |
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405) | |
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:497) | |
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) | |
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) | |
at py4j.Gateway.invoke(Gateway.java:259) | |
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) | |
at py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at py4j.GatewayConnection.run(GatewayConnection.java:209) | |
at java.lang.Thread.run(Thread.java:745) | |
16/09/06 19:09:02 WARN HttpTransport: exception thrown while executing request | |
java.net.UnknownHostException: metadata | |
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) | |
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) | |
at java.net.Socket.connect(Socket.java:589) | |
at sun.net.NetworkClient.doConnect(NetworkClient.java:175) | |
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) | |
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) | |
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211) | |
at sun.net.www.http.HttpClient.New(HttpClient.java:308) | |
at sun.net.www.http.HttpClient.New(HttpClient.java:326) | |
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169) | |
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105) | |
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999) | |
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933) | |
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93) | |
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972) | |
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:142) | |
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489) | |
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:189) | |
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:71) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1571) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746) | |
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) | |
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) | |
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) | |
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) | |
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) | |
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) | |
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256) | |
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) | |
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) | |
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921) | |
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) | |
at org.apache.spark.rdd.RDD.withScope(RDD.scala:310) | |
at org.apache.spark.rdd.RDD.collect(RDD.scala:908) | |
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405) | |
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:497) | |
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) | |
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) | |
at py4j.Gateway.invoke(Gateway.java:259) | |
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) | |
at py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at py4j.GatewayConnection.run(GatewayConnection.java:209) | |
at java.lang.Thread.run(Thread.java:745) | |
16/09/06 19:09:28 WARN HttpTransport: exception thrown while executing request | |
java.net.UnknownHostException: metadata | |
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) | |
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) | |
at java.net.Socket.connect(Socket.java:589) | |
at sun.net.NetworkClient.doConnect(NetworkClient.java:175) | |
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432) | |
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527) | |
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211) | |
at sun.net.www.http.HttpClient.New(HttpClient.java:308) | |
at sun.net.www.http.HttpClient.New(HttpClient.java:326) | |
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169) | |
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105) | |
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999) | |
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933) | |
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93) | |
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972) | |
at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:142) | |
at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489) | |
at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:189) | |
at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:71) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1571) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:783) | |
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:746) | |
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596) | |
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91) | |
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630) | |
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612) | |
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370) | |
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) | |
at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256) | |
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228) | |
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:313) | |
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:58) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239) | |
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237) | |
at scala.Option.getOrElse(Option.scala:120) | |
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1921) | |
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:909) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) | |
at org.apache.spark.rdd.RDD.withScope(RDD.scala:310) | |
at org.apache.spark.rdd.RDD.collect(RDD.scala:908) | |
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405) | |
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:497) | |
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) | |
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) | |
at py4j.Gateway.invoke(Gateway.java:259) | |
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) | |
at py4j.commands.CallCommand.execute(CallCommand.java:79) | |
at py4j.GatewayConnection.run(GatewayConnection.java:209) | |
at java.lang.Thread.run(Thread.java:745) | |
INFO [2016-09-06 19:09:28,839] ({pool-1-thread-2} NotebookServer.java[afterStatusChange]:771) - Job 20160906-190743_1786211851 is finished
INFO [2016-09-06 19:09:28,878] ({pool-1-thread-2} SchedulerFactory.java[jobFinished]:135) - Job paragraph_1473188863858_394785041 finished by scheduler remoteinterpreter_622442934
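
Note on the traces above: the UnknownHostException comes from the GCS connector falling back to Compute Engine metadata-server authentication (CredentialFactory.getCredentialFromMetadataServiceAccount), and the hostname "metadata" does not resolve from inside the Kubernetes pods. Below is a minimal PySpark sketch of one possible workaround, assuming the installed gcs-connector supports JSON keyfile authentication; the project id, keyfile path, and bucket URI are placeholders, not values from this cluster.

# Sketch only: authenticate the GCS connector with an explicit service-account
# JSON keyfile so it never calls the GCE metadata server ("metadata"), which is
# unresolvable from these Kubernetes pods.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("spark://spark-master:7077")
        .setAppName("gcs-keyfile-test"))
sc = SparkContext(conf=conf)

hadoop_conf = sc._jsc.hadoopConfiguration()
hadoop_conf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
hadoop_conf.set("fs.gs.project.id", "my-gcp-project")                 # placeholder project id
hadoop_conf.set("google.cloud.auth.service.account.enable", "true")
hadoop_conf.set("google.cloud.auth.service.account.json.keyfile",
                "/etc/secrets/gcs-key.json")                          # placeholder path, e.g. a mounted Secret

# With keyfile auth configured, reading from GCS should no longer trigger the
# metadata-server credential refresh seen in the traces above.
rdd = sc.textFile("gs://my-bucket/my-file.txt")                       # placeholder URI
print(rdd.count())

Alternatively, making the hostname "metadata" resolvable from the pods (so the connector's default metadata-service authentication can succeed) would avoid the exception without any code changes.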