Gist ankurcha/437a9e138de14898f5ce (created June 29, 2015 17:29)
ubuntu@i-644498b2-mesos-slave-us-east-1e:~/spark-1.5.0-SNAPSHOT-bin-2.2.0$ bin/spark-shell --verbose
Using properties file: /home/ubuntu/spark-1.5.0-SNAPSHOT-bin-2.2.0/conf/spark-defaults.conf
Adding default property: spark.serializer=org.apache.spark.serializer.KryoSerializer
Adding default property: spark.driver.memory=5g
Adding default property: spark.mesos.constraints=zone:us-east-1a
Adding default property: spark.master=mesos://zk://10.96.239.120:2181,10.96.248.254:2181,10.96.218.65:2181/mesos_qa
Adding default property: spark.executor.uri=http://com.brightcove.rna.repo.dev.s3.amazonaws.com/spark-1.5.0-SNAPSHOT-bin-2.2.0.tgz
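The "Adding default property" lines above are read from conf/spark-defaults.conf. A sketch of what that file would look like, reconstructed from the values shown in this log (whitespace-separated key/value pairs, the standard spark-defaults.conf format):

```properties
spark.serializer        org.apache.spark.serializer.KryoSerializer
spark.driver.memory     5g
spark.mesos.constraints zone:us-east-1a
spark.master            mesos://zk://10.96.239.120:2181,10.96.248.254:2181,10.96.218.65:2181/mesos_qa
spark.executor.uri      http://com.brightcove.rna.repo.dev.s3.amazonaws.com/spark-1.5.0-SNAPSHOT-bin-2.2.0.tgz
```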
Parsed arguments:
master mesos://zk://10.96.239.120:2181,10.96.248.254:2181,10.96.218.65:2181/mesos_qa
deployMode null
executorMemory null
executorCores null
totalExecutorCores null
propertiesFile /home/ubuntu/spark-1.5.0-SNAPSHOT-bin-2.2.0/conf/spark-defaults.conf
driverMemory 5g
driverCores null
driverExtraClassPath null
driverExtraLibraryPath null
driverExtraJavaOptions null
supervise false
queue null
numExecutors null
files null
pyFiles null
archives null
mainClass org.apache.spark.repl.Main
primaryResource spark-shell
name org.apache.spark.repl.Main
childArgs []
jars null
packages null
repositories null
verbose true
Spark properties used, including those specified through
--conf and those from the properties file /home/ubuntu/spark-1.5.0-SNAPSHOT-bin-2.2.0/conf/spark-defaults.conf:
spark.driver.memory -> 5g
spark.serializer -> org.apache.spark.serializer.KryoSerializer
spark.mesos.constraints -> zone:us-east-1a
spark.executor.uri -> http://com.brightcove.rna.repo.dev.s3.amazonaws.com/spark-1.5.0-SNAPSHOT-bin-2.2.0.tgz
spark.master -> mesos://zk://10.96.239.120:2181,10.96.248.254:2181,10.96.218.65:2181/mesos_qa
Main class:
org.apache.spark.repl.Main
Arguments:
System properties:
spark.driver.memory -> 5g
SPARK_SUBMIT -> true
spark.mesos.constraints -> zone:us-east-1a
spark.serializer -> org.apache.spark.serializer.KryoSerializer
spark.app.name -> org.apache.spark.repl.Main
spark.executor.uri -> http://com.brightcove.rna.repo.dev.s3.amazonaws.com/spark-1.5.0-SNAPSHOT-bin-2.2.0.tgz
spark.jars ->
spark.master -> mesos://zk://10.96.239.120:2181,10.96.248.254:2181,10.96.218.65:2181/mesos_qa
Classpath elements:
15/06/29 17:26:46 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
15/06/29 17:26:46 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
15/06/29 17:26:46 DEBUG MetricsSystemImpl: UgiMetrics, User and group related metrics
15/06/29 17:26:46 DEBUG KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
15/06/29 17:26:46 DEBUG Groups: Creating new Groups object
15/06/29 17:26:46 DEBUG NativeCodeLoader: Trying to load the custom-built native-hadoop library...
15/06/29 17:26:46 DEBUG NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
15/06/29 17:26:46 DEBUG NativeCodeLoader: java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
15/06/29 17:26:46 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/29 17:26:46 DEBUG JniBasedUnixGroupsMappingWithFallback: Falling back to shell based
15/06/29 17:26:46 DEBUG JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
15/06/29 17:26:46 DEBUG Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000
15/06/29 17:26:46 DEBUG UserGroupInformation: hadoop login
15/06/29 17:26:46 DEBUG UserGroupInformation: hadoop login commit
15/06/29 17:26:46 DEBUG UserGroupInformation: using local user:UnixPrincipal: ubuntu
15/06/29 17:26:46 DEBUG UserGroupInformation: UGI loginUser:ubuntu (auth:SIMPLE)
15/06/29 17:26:46 INFO SecurityManager: Changing view acls to: ubuntu
15/06/29 17:26:46 INFO SecurityManager: Changing modify acls to: ubuntu
15/06/29 17:26:46 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); users with modify permissions: Set(ubuntu)
15/06/29 17:26:46 DEBUG SecurityManager: SSLConfiguration for file server: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
15/06/29 17:26:46 DEBUG SecurityManager: SSLConfiguration for Akka: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
15/06/29 17:26:46 INFO HttpServer: Starting HTTP Server
15/06/29 17:26:46 DEBUG HttpServer: HttpServer is not using security
15/06/29 17:26:46 INFO Utils: Successfully started service 'HTTP class server' on port 44333.
15/06/29 17:26:48 DEBUG SparkILoop: Clearing 6 thunks.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.0-SNAPSHOT
      /_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_31)
Type in expressions to have them evaluated.
Type :help for more information.
15/06/29 17:26:49 INFO SparkContext: Running Spark version 1.5.0-SNAPSHOT
15/06/29 17:26:49 INFO SecurityManager: Changing view acls to: ubuntu
15/06/29 17:26:49 INFO SecurityManager: Changing modify acls to: ubuntu
15/06/29 17:26:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); users with modify permissions: Set(ubuntu)
15/06/29 17:26:49 DEBUG SecurityManager: SSLConfiguration for file server: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
15/06/29 17:26:49 DEBUG SecurityManager: SSLConfiguration for Akka: SSLOptions{enabled=false, keyStore=None, keyStorePassword=None, trustStore=None, trustStorePassword=None, protocol=None, enabledAlgorithms=Set()}
15/06/29 17:26:49 DEBUG AkkaUtils: In createActorSystem, requireCookie is: off
15/06/29 17:26:50 INFO Slf4jLogger: Slf4jLogger started
15/06/29 17:26:50 INFO Remoting: Starting remoting
15/06/29 17:26:50 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:34400]
15/06/29 17:26:50 INFO Utils: Successfully started service 'sparkDriver' on port 34400.
15/06/29 17:26:50 DEBUG SparkEnv: Using serializer: class org.apache.spark.serializer.KryoSerializer
15/06/29 17:26:50 INFO SparkEnv: Registering MapOutputTracker
15/06/29 17:26:50 INFO SparkEnv: Registering BlockManagerMaster
15/06/29 17:26:50 INFO DiskBlockManager: Created local directory at /tmp/spark-c16aa01d-2a6b-4dc5-94c2-ee4ed2d4dc59/blockmgr-7f4cf256-d5e1-467d-baee-020ed3e995e0
15/06/29 17:26:50 INFO MemoryStore: MemoryStore started with capacity 2.6 GB
15/06/29 17:26:50 INFO HttpFileServer: HTTP File server directory is /tmp/spark-c16aa01d-2a6b-4dc5-94c2-ee4ed2d4dc59/httpd-20ffbc50-fac2-4477-9c1f-e1ad4efbf27b
15/06/29 17:26:50 INFO HttpServer: Starting HTTP Server
15/06/29 17:26:50 DEBUG HttpServer: HttpServer is not using security
15/06/29 17:26:50 INFO Utils: Successfully started service 'HTTP file server' on port 48416.
15/06/29 17:26:50 DEBUG HttpFileServer: HTTP file server started at: http://10.96.220.40:48416
15/06/29 17:26:50 INFO SparkEnv: Registering OutputCommitCoordinator
15/06/29 17:26:50 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/06/29 17:26:50 INFO SparkUI: Started SparkUI at http://10.96.220.40:4040
15/06/29 17:26:50 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(ExpireDeadHosts,false) from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:26:50 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(ExpireDeadHosts,false)
15/06/29 17:26:50 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (2.670282 ms) AkkaMessage(ExpireDeadHosts,false) from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:26:50 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/06/29 17:26:50 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(TaskSchedulerIsSet,false) from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:26:50 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(TaskSchedulerIsSet,false)
15/06/29 17:26:50 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.148854 ms) AkkaMessage(TaskSchedulerIsSet,false) from Actor[akka://sparkDriver/deadLetters]
I0629 17:26:50.887359 26246 sched.cpp:157] Version: 0.22.1
2015-06-29 17:26:50,887:26145(0x7fba402af700):ZOO_INFO@log_env@712: Client environment:zookeeper.version=zookeeper C client 3.4.5
2015-06-29 17:26:50,889:26145(0x7fba402af700):ZOO_INFO@log_env@716: Client environment:host.name=i-644498b2-mesos-slave-us-east-1e.gri
2015-06-29 17:26:50,889:26145(0x7fba402af700):ZOO_INFO@log_env@723: Client environment:os.name=Linux
2015-06-29 17:26:50,889:26145(0x7fba402af700):ZOO_INFO@log_env@724: Client environment:os.arch=3.13.0-44-generic
2015-06-29 17:26:50,889:26145(0x7fba402af700):ZOO_INFO@log_env@725: Client environment:os.version=#73-Ubuntu SMP Tue Dec 16 00:22:43 UTC 2014
2015-06-29 17:26:50,889:26145(0x7fba402af700):ZOO_INFO@log_env@733: Client environment:user.name=ubuntu
2015-06-29 17:26:50,889:26145(0x7fba402af700):ZOO_INFO@log_env@741: Client environment:user.home=/home/ubuntu
2015-06-29 17:26:50,889:26145(0x7fba402af700):ZOO_INFO@log_env@753: Client environment:user.dir=/home/ubuntu/spark-1.5.0-SNAPSHOT-bin-2.2.0
2015-06-29 17:26:50,889:26145(0x7fba402af700):ZOO_INFO@zookeeper_init@786: Initiating client connection, host=10.96.239.120:2181,10.96.248.254:2181,10.96.218.65:2181 sessionTimeout=10000 watcher=0x7fbaa8e35a60 sessionId=0 sessionPasswd=<null> context=0x7fbabc00ae40 flags=0
2015-06-29 17:26:50,890:26145(0x7fba3eaac700):ZOO_INFO@check_events@1703: initiated connection to server [10.96.248.254:2181]
2015-06-29 17:26:50,894:26145(0x7fba3eaac700):ZOO_INFO@check_events@1750: session establishment complete on server [10.96.248.254:2181], sessionId=0x24cc67f8bf902d1, negotiated timeout=10000
I0629 17:26:50.894212 26254 group.cpp:313] Group process (group(1)@10.96.220.40:45129) connected to ZooKeeper
I0629 17:26:50.894258 26254 group.cpp:790] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
I0629 17:26:50.894282 26254 group.cpp:385] Trying to create path '/mesos_qa' in ZooKeeper
I0629 17:26:50.896148 26254 detector.cpp:138] Detected a new leader: (id='10')
I0629 17:26:50.896263 26249 group.cpp:659] Trying to get '/mesos_qa/info_0000000010' in ZooKeeper
I0629 17:26:50.897270 26249 detector.cpp:452] A new leading master ([email protected]:5050) is detected
I0629 17:26:50.897341 26249 sched.cpp:254] New master detected at [email protected]:5050
I0629 17:26:50.897488 26249 sched.cpp:264] No credentials provided. Attempting to register without authentication
I0629 17:26:50.899215 26249 sched.cpp:448] Framework registered with 20150527-043701-1104830474-5050-24651-0007
15/06/29 17:26:50 INFO MesosSchedulerBackend: Registered as framework ID 20150527-043701-1104830474-5050-24651-0007
15/06/29 17:26:50 DEBUG InternalLoggerFactory: Using SLF4J as the default logging framework
15/06/29 17:26:50 DEBUG PlatformDependent0: java.nio.Buffer.address: available
15/06/29 17:26:50 DEBUG PlatformDependent0: sun.misc.Unsafe.theUnsafe: available
15/06/29 17:26:50 DEBUG PlatformDependent0: sun.misc.Unsafe.copyMemory: available
15/06/29 17:26:50 DEBUG PlatformDependent0: java.nio.Bits.unaligned: true
15/06/29 17:26:50 DEBUG PlatformDependent: Java version: 8
15/06/29 17:26:50 DEBUG PlatformDependent: -Dio.netty.noUnsafe: false
15/06/29 17:26:50 DEBUG PlatformDependent: sun.misc.Unsafe: available
15/06/29 17:26:50 DEBUG PlatformDependent: -Dio.netty.noJavassist: false
15/06/29 17:26:50 DEBUG PlatformDependent: Javassist: unavailable
15/06/29 17:26:50 DEBUG PlatformDependent: You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.
15/06/29 17:26:50 DEBUG PlatformDependent: -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
15/06/29 17:26:50 DEBUG PlatformDependent: -Dio.netty.bitMode: 64 (sun.arch.data.model)
15/06/29 17:26:50 DEBUG PlatformDependent: -Dio.netty.noPreferDirect: false
15/06/29 17:26:50 DEBUG MultithreadEventLoopGroup: -Dio.netty.eventLoopThreads: 8
15/06/29 17:26:50 DEBUG NioEventLoop: -Dio.netty.noKeySetOptimization: false
15/06/29 17:26:50 DEBUG NioEventLoop: -Dio.netty.selectorAutoRebuildThreshold: 512
15/06/29 17:26:50 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.numHeapArenas: 4
15/06/29 17:26:50 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.numDirectArenas: 4
15/06/29 17:26:51 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.pageSize: 8192
15/06/29 17:26:51 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.maxOrder: 11
15/06/29 17:26:51 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.chunkSize: 16777216
15/06/29 17:26:51 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.tinyCacheSize: 512
15/06/29 17:26:51 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.smallCacheSize: 256
15/06/29 17:26:51 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.normalCacheSize: 64
15/06/29 17:26:51 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.maxCachedBufferCapacity: 32768
15/06/29 17:26:51 DEBUG PooledByteBufAllocator: -Dio.netty.allocator.cacheTrimInterval: 8192
15/06/29 17:26:51 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879405"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:26:51 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879406"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11967.0 cpu: 3.0
15/06/29 17:26:51 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879407"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
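The accept/decline pattern above is driven by spark.mesos.constraints=zone:us-east-1a: only offers whose zone attribute matches us-east-1a are accepted, while us-east-1d and us-east-1e offers are declined. A simplified Scala sketch of that attribute check (the Offer case class and helper names here are illustrative, not Spark's actual internals):

```scala
// Illustrative model of a Mesos resource offer with text/set attributes.
case class Offer(id: String, attributes: Map[String, Set[String]])

// Parse a constraints spec like "zone:us-east-1a" or "zone:us-east-1a;region:us-east-1"
// into a map of attribute name -> acceptable values.
def parseConstraints(spec: String): Map[String, Set[String]] =
  spec.split(";").filter(_.nonEmpty).map { kv =>
    val Array(key, values) = kv.split(":", 2)
    key -> values.split(",").toSet
  }.toMap

// An offer matches only if every constrained attribute is present on the
// offer with at least one of the required values.
def matchesConstraints(offer: Offer, constraints: Map[String, Set[String]]): Boolean =
  constraints.forall { case (key, wanted) =>
    offer.attributes.get(key).exists(values => (values intersect wanted).nonEmpty)
  }

val constraints = parseConstraints("zone:us-east-1a")
// Offers shaped like the ones in the log above:
val declined = Offer("O879405", Map("zone" -> Set("us-east-1d"), "region" -> Set("us-east-1")))
val accepted = Offer("O879406", Map("zone" -> Set("us-east-1a"), "region" -> Set("us-east-1")))
```

With these inputs, `matchesConstraints(declined, constraints)` is false and `matchesConstraints(accepted, constraints)` is true, mirroring the scheduler's decisions in the log.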
15/06/29 17:26:51 DEBUG ThreadLocalRandom: -Dio.netty.initialSeedUniquifier: 0x8a07c592b78b0926 (took 1 ms)
15/06/29 17:26:51 DEBUG ByteBufUtil: -Dio.netty.allocator.type: unpooled
15/06/29 17:26:51 DEBUG ByteBufUtil: -Dio.netty.threadLocalDirectBufferSize: 65536
15/06/29 17:26:51 DEBUG NetUtil: Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
15/06/29 17:26:51 DEBUG NetUtil: /proc/sys/net/core/somaxconn: 128
15/06/29 17:26:51 DEBUG TransportServer: Shuffle server started on port :54064
15/06/29 17:26:51 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 54064.
15/06/29 17:26:51 INFO NettyBlockTransferService: Server created on 54064
15/06/29 17:26:51 INFO BlockManagerMaster: Trying to register BlockManager
15/06/29 17:26:51 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(RegisterBlockManager(BlockManagerId(driver, 10.96.220.40, 54064),2778495713,AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/BlockManagerEndpoint1#990609938])),true) from Actor[akka://sparkDriver/temp/$a]
15/06/29 17:26:51 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(RegisterBlockManager(BlockManagerId(driver, 10.96.220.40, 54064),2778495713,AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/BlockManagerEndpoint1#990609938])),true)
15/06/29 17:26:51 INFO BlockManagerMasterEndpoint: Registering block manager 10.96.220.40:54064 with 2.6 GB RAM, BlockManagerId(driver, 10.96.220.40, 54064)
15/06/29 17:26:51 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (7.32545 ms) AkkaMessage(RegisterBlockManager(BlockManagerId(driver, 10.96.220.40, 54064),2778495713,AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/BlockManagerEndpoint1#990609938])),true) from Actor[akka://sparkDriver/temp/$a]
15/06/29 17:26:51 INFO BlockManagerMaster: Registered BlockManager
15/06/29 17:26:51 INFO SparkILoop: Created spark context..
Spark context available as sc.
15/06/29 17:26:51 INFO SparkILoop: Created sql context..
SQL context available as sqlContext.
scala> val NUM_SAMPLES = 100000
NUM_SAMPLES: Int = 100000
scala> val count = sc.parallelize(1 to NUM_SAMPLES).map{i =>
     | val x = Math.random()
     | val y = Math.random()
     | if (x*x + y*y < 1) 1 else 0
     | }.reduce(_ + _)
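The REPL input above is the standard Monte Carlo Pi estimate: random points are sampled in the unit square, and the fraction landing inside the unit quarter-circle approximates Pi/4. The same computation can be sketched in plain Scala without Spark (a local sketch of the math, not the distributed job; the seeded RNG is added here for reproducibility):

```scala
// Monte Carlo estimate of Pi, matching the spark-shell snippet above
// but run locally on a single collection instead of an RDD.
val numSamples = 100000
val rng = new scala.util.Random(42) // fixed seed, an addition for reproducibility

val count = (1 to numSamples).map { _ =>
  val x = rng.nextDouble()
  val y = rng.nextDouble()
  // 1 if the point (x, y) falls inside the unit quarter-circle
  if (x * x + y * y < 1) 1 else 0
}.sum

// count / numSamples converges to Pi/4, so scale by 4:
val piEstimate = 4.0 * count / numSamples
```

With 100000 samples the estimate typically lands within about 0.01 of Pi; the cluster version just distributes the same map/reduce over 8 partitions.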
15/06/29 17:26:56 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879411"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11967.0 cpu: 3.0
15/06/29 17:26:56 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879412"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
15/06/29 17:26:56 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879413"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:26:56 DEBUG ClosureCleaner: +++ Cleaning closure <function1> ($line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1}) +++
15/06/29 17:26:56 DEBUG ClosureCleaner: + declared fields: 1
15/06/29 17:26:56 DEBUG ClosureCleaner: public static final long $line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.serialVersionUID
15/06/29 17:26:56 DEBUG ClosureCleaner: + declared methods: 3
15/06/29 17:26:56 DEBUG ClosureCleaner: public int $line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply$mcII$sp(int)
15/06/29 17:26:56 DEBUG ClosureCleaner: public final int $line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(int)
15/06/29 17:26:56 DEBUG ClosureCleaner: public final java.lang.Object $line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(java.lang.Object)
15/06/29 17:26:56 DEBUG ClosureCleaner: + inner classes: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + outer classes: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + outer objects: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
15/06/29 17:26:56 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + there are no enclosing objects!
15/06/29 17:26:56 DEBUG ClosureCleaner: +++ closure <function1> ($line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1) is now cleaned +++
15/06/29 17:26:56 DEBUG ClosureCleaner: +++ Cleaning closure <function2> ($line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2}) +++
15/06/29 17:26:56 DEBUG ClosureCleaner: + declared fields: 1
15/06/29 17:26:56 DEBUG ClosureCleaner: public static final long $line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.serialVersionUID
15/06/29 17:26:56 DEBUG ClosureCleaner: + declared methods: 3
15/06/29 17:26:56 DEBUG ClosureCleaner: public int $line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply$mcIII$sp(int,int)
15/06/29 17:26:56 DEBUG ClosureCleaner: public final int $line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(int,int)
15/06/29 17:26:56 DEBUG ClosureCleaner: public final java.lang.Object $line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(java.lang.Object,java.lang.Object)
15/06/29 17:26:56 DEBUG ClosureCleaner: + inner classes: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + outer classes: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + outer objects: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
15/06/29 17:26:56 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + there are no enclosing objects!
15/06/29 17:26:56 DEBUG ClosureCleaner: +++ closure <function2> ($line19.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$2) is now cleaned +++
15/06/29 17:26:56 DEBUG ClosureCleaner: +++ Cleaning closure <function2> (org.apache.spark.SparkContext$$anonfun$36}) +++
15/06/29 17:26:56 DEBUG ClosureCleaner: + declared fields: 2
15/06/29 17:26:56 DEBUG ClosureCleaner: public static final long org.apache.spark.SparkContext$$anonfun$36.serialVersionUID
15/06/29 17:26:56 DEBUG ClosureCleaner: private final scala.Function1 org.apache.spark.SparkContext$$anonfun$36.processPartition$1
15/06/29 17:26:56 DEBUG ClosureCleaner: + declared methods: 2
15/06/29 17:26:56 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$36.apply(java.lang.Object,java.lang.Object)
15/06/29 17:26:56 DEBUG ClosureCleaner: public final java.lang.Object org.apache.spark.SparkContext$$anonfun$36.apply(org.apache.spark.TaskContext,scala.collection.Iterator)
15/06/29 17:26:56 DEBUG ClosureCleaner: + inner classes: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + outer classes: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + outer objects: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + populating accessed fields because this is the starting closure
15/06/29 17:26:56 DEBUG ClosureCleaner: + fields accessed by starting closure: 0
15/06/29 17:26:56 DEBUG ClosureCleaner: + there are no enclosing objects!
15/06/29 17:26:56 DEBUG ClosureCleaner: +++ closure <function2> (org.apache.spark.SparkContext$$anonfun$36) is now cleaned +++
15/06/29 17:26:56 INFO SparkContext: Starting job: reduce at <console>:27
15/06/29 17:26:56 INFO DAGScheduler: Got job 0 (reduce at <console>:27) with 8 output partitions (allowLocal=false)
15/06/29 17:26:56 INFO DAGScheduler: Final stage: ResultStage 0(reduce at <console>:27)
15/06/29 17:26:56 INFO DAGScheduler: Parents of final stage: List()
15/06/29 17:26:56 INFO DAGScheduler: Missing parents: List()
15/06/29 17:26:56 DEBUG DAGScheduler: submitStage(ResultStage 0)
15/06/29 17:26:56 DEBUG DAGScheduler: missing: List()
15/06/29 17:26:56 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at <console>:23), which has no missing parents
15/06/29 17:26:56 DEBUG DAGScheduler: submitMissingTasks(ResultStage 0)
15/06/29 17:26:57 INFO MemoryStore: ensureFreeSpace(1928) called with curMem=0, maxMem=2778495713
15/06/29 17:26:57 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1928.0 B, free 2.6 GB)
15/06/29 17:26:57 DEBUG BlockManager: Put block broadcast_0 locally took 78 ms
15/06/29 17:26:57 DEBUG BlockManager: Putting block broadcast_0 without replication took 79 ms
15/06/29 17:26:57 INFO MemoryStore: ensureFreeSpace(1192) called with curMem=1928, maxMem=2778495713
15/06/29 17:26:57 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1192.0 B, free 2.6 GB)
15/06/29 17:26:57 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(UpdateBlockInfo(BlockManagerId(driver, 10.96.220.40, 54064),broadcast_0_piece0,StorageLevel(false, true, false, false, 1),1192,0,0),true) from Actor[akka://sparkDriver/temp/$b]
15/06/29 17:26:57 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(UpdateBlockInfo(BlockManagerId(driver, 10.96.220.40, 54064),broadcast_0_piece0,StorageLevel(false, true, false, false, 1),1192,0,0),true)
15/06/29 17:26:57 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 10.96.220.40:54064 (size: 1192.0 B, free: 2.6 GB)
15/06/29 17:26:57 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (1.274662 ms) AkkaMessage(UpdateBlockInfo(BlockManagerId(driver, 10.96.220.40, 54064),broadcast_0_piece0,StorageLevel(false, true, false, false, 1),1192,0,0),true) from Actor[akka://sparkDriver/temp/$b]
15/06/29 17:26:57 DEBUG BlockManagerMaster: Updated info of block broadcast_0_piece0
15/06/29 17:26:57 DEBUG BlockManager: Told master about block broadcast_0_piece0
15/06/29 17:26:57 DEBUG BlockManager: Put block broadcast_0_piece0 locally took 4 ms
15/06/29 17:26:57 DEBUG BlockManager: Putting block broadcast_0_piece0 without replication took 5 ms
15/06/29 17:26:57 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:895
15/06/29 17:26:57 INFO DAGScheduler: Submitting 8 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at <console>:23)
15/06/29 17:26:57 DEBUG DAGScheduler: New pending tasks: Set(ResultTask(0, 3), ResultTask(0, 0), ResultTask(0, 2), ResultTask(0, 7), ResultTask(0, 6), ResultTask(0, 5), ResultTask(0, 4), ResultTask(0, 1))
15/06/29 17:26:57 INFO TaskSchedulerImpl: Adding task set 0.0 with 8 tasks
15/06/29 17:26:57 DEBUG TaskSetManager: Epoch for TaskSet 0.0: 0
15/06/29 17:26:57 DEBUG TaskSetManager: Valid locality levels for TaskSet 0.0: NO_PREF, ANY
15/06/29 17:26:57 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879414"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:26:57 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879415"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11967.0 cpu: 3.0
15/06/29 17:26:57 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879416"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
15/06/29 17:26:57 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0, runningTasks: 0
15/06/29 17:26:57 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, i-59475489-mesos-slave-us-east-1a.gri, PROCESS_LOCAL, 1265 bytes)
15/06/29 17:26:57 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, i-59475489-mesos-slave-us-east-1a.gri, PROCESS_LOCAL, 1267 bytes)
15/06/29 17:26:57 DEBUG MesosTaskLaunchData: ByteBuffer size: [1269]
15/06/29 17:26:57 DEBUG MesosTaskLaunchData: ByteBuffer size: [1271]
15/06/29 17:26:59 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879417"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 0.0
15/06/29 17:26:59 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0, runningTasks: 2
15/06/29 17:27:03 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879421"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
15/06/29 17:27:03 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879422"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:27:03 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0, runningTasks: 2
15/06/29 17:27:04 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879423"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 0.0
15/06/29 17:27:04 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0, runningTasks: 2
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (1.093479 ms) Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.022573 ms) Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.022517 ms) Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.022377 ms) Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.016973 ms) Associated [akka.tcp://[email protected]:34400] <- [akka.tcp://[email protected]:38353] from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(RegisterBlockManager(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452),278019440,AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:38353/user/BlockManagerEndpoint1#991547798])),true) from Actor[akka.tcp://[email protected]:38353/temp/$d]
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(RegisterBlockManager(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452),278019440,AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:38353/user/BlockManagerEndpoint1#991547798])),true)
15/06/29 17:27:05 INFO BlockManagerMasterEndpoint: Registering block manager i-59475489-mesos-slave-us-east-1a.gri:39452 with 265.1 MB RAM, BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)
15/06/29 17:27:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (1.017316 ms) AkkaMessage(RegisterBlockManager(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452),278019440,AkkaRpcEndpointRef(Actor[akka.tcp://[email protected]:38353/user/BlockManagerEndpoint1#991547798])),true) from Actor[akka.tcp://[email protected]:38353/temp/$d]
15/06/29 17:27:06 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(GetLocations(broadcast_0_piece0),true) from Actor[akka.tcp://[email protected]:38353/temp/$f]
15/06/29 17:27:06 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(GetLocations(broadcast_0_piece0),true)
15/06/29 17:27:06 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.372423 ms) AkkaMessage(GetLocations(broadcast_0_piece0),true) from Actor[akka.tcp://[email protected]:38353/temp/$f]
15/06/29 17:27:06 DEBUG ResourceLeakDetector: -Dio.netty.leakDetectionLevel: simple
15/06/29 17:27:06 DEBUG Recycler: -Dio.netty.recycler.maxCapacity.default: 262144
15/06/29 17:27:06 DEBUG BlockManager: Level for block broadcast_0_piece0 is StorageLevel(true, true, false, false, 1)
15/06/29 17:27:06 DEBUG BlockManager: Getting block broadcast_0_piece0 from memory
15/06/29 17:27:06 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(UpdateBlockInfo(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452),broadcast_0_piece0,StorageLevel(false, true, false, false, 1),1192,0,0),true) from Actor[akka.tcp://[email protected]:38353/temp/$g] | |
15/06/29 17:27:06 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(UpdateBlockInfo(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452),broadcast_0_piece0,StorageLevel(false, true, false, false, 1),1192,0,0),true) | |
15/06/29 17:27:06 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on i-59475489-mesos-slave-us-east-1a.gri:39452 (size: 1192.0 B, free: 265.1 MB) | |
15/06/29 17:27:06 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.913742 ms) AkkaMessage(UpdateBlockInfo(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452),broadcast_0_piece0,StorageLevel(false, true, false, false, 1),1192,0,0),true) from Actor[akka.tcp://[email protected]:38353/temp/$g] | |
15/06/29 17:27:06 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 9516 ms on i-59475489-mesos-slave-us-east-1a.gri (1/8) | |
15/06/29 17:27:06 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 9493 ms on i-59475489-mesos-slave-us-east-1a.gri (2/8) | |
15/06/29 17:27:07 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879426"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 2.0
15/06/29 17:27:07 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0, runningTasks: 0
15/06/29 17:27:07 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, i-59475489-mesos-slave-us-east-1a.gri, PROCESS_LOCAL, 1267 bytes)
15/06/29 17:27:07 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, i-59475489-mesos-slave-us-east-1a.gri, PROCESS_LOCAL, 1267 bytes)
15/06/29 17:27:07 DEBUG MesosTaskLaunchData: ByteBuffer size: [1271]
15/06/29 17:27:07 DEBUG MesosTaskLaunchData: ByteBuffer size: [1271]
15/06/29 17:27:07 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 59 ms on i-59475489-mesos-slave-us-east-1a.gri (3/8)
15/06/29 17:27:07 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 55 ms on i-59475489-mesos-slave-us-east-1a.gri (4/8)
15/06/29 17:27:08 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879427"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 2.0
15/06/29 17:27:08 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0, runningTasks: 0
15/06/29 17:27:08 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, i-59475489-mesos-slave-us-east-1a.gri, PROCESS_LOCAL, 1267 bytes)
15/06/29 17:27:08 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, i-59475489-mesos-slave-us-east-1a.gri, PROCESS_LOCAL, 1267 bytes)
15/06/29 17:27:08 DEBUG MesosTaskLaunchData: ByteBuffer size: [1271]
15/06/29 17:27:08 DEBUG MesosTaskLaunchData: ByteBuffer size: [1271]
15/06/29 17:27:08 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 41 ms on i-59475489-mesos-slave-us-east-1a.gri (5/8)
15/06/29 17:27:08 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 39 ms on i-59475489-mesos-slave-us-east-1a.gri (6/8)
15/06/29 17:27:09 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879429"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
15/06/29 17:27:09 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879430"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:27:09 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0, runningTasks: 0
15/06/29 17:27:10 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879431"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 2.0
15/06/29 17:27:10 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_0, runningTasks: 0
15/06/29 17:27:10 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, i-59475489-mesos-slave-us-east-1a.gri, PROCESS_LOCAL, 1267 bytes)
15/06/29 17:27:10 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, i-59475489-mesos-slave-us-east-1a.gri, PROCESS_LOCAL, 1234 bytes)
15/06/29 17:27:10 DEBUG MesosTaskLaunchData: ByteBuffer size: [1271]
15/06/29 17:27:10 DEBUG MesosTaskLaunchData: ByteBuffer size: [1238]
15/06/29 17:27:10 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 43 ms on i-59475489-mesos-slave-us-east-1a.gri (7/8)
15/06/29 17:27:10 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 39 ms on i-59475489-mesos-slave-us-east-1a.gri (8/8)
15/06/29 17:27:10 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/06/29 17:27:10 INFO DAGScheduler: ResultStage 0 (reduce at <console>:27) finished in 13.104 s
15/06/29 17:27:10 DEBUG DAGScheduler: After removal of stage 0, remaining stages = 0
15/06/29 17:27:10 INFO DAGScheduler: Job 0 finished: reduce at <console>:27, took 13.427555 s
count: Int = 78526
scala> println("Pi is roughly " + 4.0 * count / NUM_SAMPLES)
15/06/29 17:27:11 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879432"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 2.0
15/06/29 17:27:14 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879435"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
15/06/29 17:27:15 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879436"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:27:16 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879438"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 2.0
15/06/29 17:27:19 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879441"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
15/06/29 17:27:20 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(Heartbeat(20150527-043701-1104830474-5050-24651-S8,[Lscala.Tuple2;@2453a1bd,BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true) from Actor[akka.tcp://[email protected]:38353/temp/$h]
15/06/29 17:27:20 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(Heartbeat(20150527-043701-1104830474-5050-24651-S8,[Lscala.Tuple2;@2453a1bd,BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true)
15/06/29 17:27:20 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (1.068721 ms) AkkaMessage(Heartbeat(20150527-043701-1104830474-5050-24651-S8,[Lscala.Tuple2;@2453a1bd,BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true) from Actor[akka.tcp://[email protected]:38353/temp/$h]
15/06/29 17:27:20 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(BlockManagerHeartbeat(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true) from Actor[akka://sparkDriver/temp/$c]
15/06/29 17:27:20 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(BlockManagerHeartbeat(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true)
15/06/29 17:27:20 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (4.934086 ms) AkkaMessage(BlockManagerHeartbeat(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true) from Actor[akka://sparkDriver/temp/$c]
15/06/29 17:27:20 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879442"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:27:21 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879443"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 2.0
15/06/29 17:27:24 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879445"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
15/06/29 17:27:25 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879448"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:27:26 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879449"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 2.0
15/06/29 17:27:29 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879451"
with attributes: Map(zone -> Set(us-east-1e), region -> Set(us-east-1)) mem: 12671.0 cpu: 2.5
15/06/29 17:27:30 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(Heartbeat(20150527-043701-1104830474-5050-24651-S8,[Lscala.Tuple2;@707b84c5,BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true) from Actor[akka.tcp://[email protected]:38353/temp/$i]
15/06/29 17:27:30 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(Heartbeat(20150527-043701-1104830474-5050-24651-S8,[Lscala.Tuple2;@707b84c5,BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true)
15/06/29 17:27:30 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.393327 ms) AkkaMessage(Heartbeat(20150527-043701-1104830474-5050-24651-S8,[Lscala.Tuple2;@707b84c5,BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true) from Actor[akka.tcp://[email protected]:38353/temp/$i]
15/06/29 17:27:30 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(BlockManagerHeartbeat(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true) from Actor[akka://sparkDriver/temp/$d]
15/06/29 17:27:30 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(BlockManagerHeartbeat(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true)
15/06/29 17:27:30 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.33224 ms) AkkaMessage(BlockManagerHeartbeat(BlockManagerId(20150527-043701-1104830474-5050-24651-S8, i-59475489-mesos-slave-us-east-1a.gri, 39452)),true) from Actor[akka://sparkDriver/temp/$d]
15/06/29 17:27:30 DEBUG MesosSchedulerBackend: Declining offer: value: "20150527-043701-1104830474-5050-24651-O879452"
with attributes: Map(zone -> Set(us-east-1d), region -> Set(us-east-1)) mem: 10945.0 cpu: 2.5
15/06/29 17:27:31 DEBUG MesosSchedulerBackend: Accepting offer: value: "20150527-043701-1104830474-5050-24651-O879453"
with attributes: Map(zone -> Set(us-east-1a), region -> Set(us-east-1)) mem: 11071.0 cpu: 2.0
15/06/29 17:27:33 INFO SparkContext: Invoking stop() from shutdown hook
15/06/29 17:27:34 INFO SparkUI: Stopped Spark web UI at http://10.96.220.40:4040
15/06/29 17:27:34 INFO DAGScheduler: Stopping DAGScheduler
I0629 17:27:34.033906 26299 sched.cpp:1589] Asked to stop the driver
I0629 17:27:34.034015 26254 sched.cpp:831] Stopping framework '20150527-043701-1104830474-5050-24651-0007'
15/06/29 17:27:34 INFO MesosSchedulerBackend: driver.run() returned with code DRIVER_STOPPED
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(StopMapOutputTracker,true) from Actor[akka://sparkDriver/temp/$e]
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(StopMapOutputTracker,true)
15/06/29 17:27:34 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (1.522278 ms) AkkaMessage(StopMapOutputTracker,true) from Actor[akka://sparkDriver/temp/$e]
15/06/29 17:27:34 INFO Utils: path = /tmp/spark-c16aa01d-2a6b-4dc5-94c2-ee4ed2d4dc59/blockmgr-7f4cf256-d5e1-467d-baee-020ed3e995e0, already present as root for deletion.
15/06/29 17:27:34 INFO MemoryStore: MemoryStore cleared
15/06/29 17:27:34 INFO BlockManager: BlockManager stopped
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(StopBlockManagerMaster,true) from Actor[akka://sparkDriver/temp/$f]
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(StopBlockManagerMaster,true)
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (1.156781 ms) AkkaMessage(StopBlockManagerMaster,true) from Actor[akka://sparkDriver/temp/$f]
15/06/29 17:27:34 INFO BlockManagerMaster: BlockManagerMaster stopped
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(StopCoordinator,false) from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(StopCoordinator,false)
15/06/29 17:27:34 INFO SparkContext: Successfully stopped SparkContext
15/06/29 17:27:34 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/06/29 17:27:34 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (1.039368 ms) AkkaMessage(StopCoordinator,false) from Actor[akka://sparkDriver/deadLetters]
15/06/29 17:27:34 INFO Utils: Shutdown hook called
15/06/29 17:27:34 INFO Utils: Deleting directory /tmp/spark-4213a9aa-0f75-406b-bd26-82a3cb11b19e
15/06/29 17:27:34 INFO Utils: Deleting directory /tmp/spark-c16aa01d-2a6b-4dc5-94c2-ee4ed2d4dc59
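The session above (`count: Int = 78526`, `reduce at <console>:27`, 8 tasks in stage 0, and the `println("Pi is roughly " + 4.0 * count / NUM_SAMPLES)` prompt) is consistent with the Monte-Carlo Pi example from the Spark programming guide. A sketch of what was likely typed into this spark-shell, assuming `NUM_SAMPLES = 100000` and 8 partitions (both inferred, not shown in the log; `sc` is the SparkContext the shell provides):

```scala
// Hypothetical reconstruction of the REPL input that produced this log.
val NUM_SAMPLES = 100000  // assumed; with count = 78526 this gives pi ~ 3.14

// Sample random points in the unit square across 8 partitions
// (matching the (1/8)..(8/8) task counts above) and count hits
// inside the quarter circle x^2 + y^2 < 1.
val count = sc.parallelize(1 to NUM_SAMPLES, 8).map { _ =>
  val x = math.random
  val y = math.random
  if (x * x + y * y < 1) 1 else 0
}.reduce(_ + _)

println("Pi is roughly " + 4.0 * count / NUM_SAMPLES)
```

Note the `spark.mesos.constraints=zone:us-east-1a` property from the defaults file explains the offer handling in the log: offers from `us-east-1a` are accepted while those from `us-east-1d` and `us-east-1e` are declined.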