Created July 8, 2013 17:55
output from sbt test
[info] ReplSuite:
[info] - simple foreach with accumulator
[info] - external vars
[warn] /home/shivaram/projects/spark/core/src/test/scala/spark/FileServerSuite.scala:49: method toURL in class File is deprecated: see corresponding Javadoc for more information.
[warn] val partitionSumsWithSplit = nums.mapPartitionsWithSplit {
[warn] ^
[info] - external classes
[info] - external functions
[warn] Note: /home/shivaram/projects/spark/streaming/src/test/java/spark/streaming/JavaAPISuite.java uses unchecked or unsafe operations.
[warn] Note: Recompile with -Xlint:unchecked for details.
[info] - external functions that access vars
[info] - broadcast vars
[info] - interacting with files
[info] - local-cluster mode
[info] Passed: : Total 8, Failed 0, Errors 0, Passed 8, Skipped 0
[info] BagelSuite:
[info] - halting by voting
[info] - halting by message silence
[warn] two warnings found
[info] - large number of iterations
[warn] Note: /home/shivaram/projects/spark/core/src/test/scala/spark/JavaAPISuite.java uses unchecked or unsafe operations.
[warn] Note: Recompile with -Xlint:unchecked for details.
[info] - using non-default persistence level
[info] Passed: : Total 4, Failed 0, Errors 0, Passed 4, Skipped 0
[info] LogisticRegressionSuite:
[info] - logistic regression
[info] ALSSuite:
[info] - rank-1 matrices
[info] - rank-2 matrices
[info] RidgeRegressionSuite:
[info] - multi-collinear variables
[info] KMeansSuite:
[info] - single cluster
[info] - glom
[info] - mapPartitions
[info] - groupByKey
[info] - reduceByKey
[info] - reduce
[info] - count
[info] - countByValue
[info] - mapValues
[info] - flatMapValues
[info] - cogroup
[info] - join
[info] - updateStateByKey
[info] - updateStateByKey - object lifecycle
[info] - slice
[info] - forgetting of RDDs - map and window operations
[info] WindowOperationsSuite:
[info] - window - basic window
[info] - window - tumbling window
[info] - window - larger window
[info] - window - non-overlapping window
[info] - reduceByKeyAndWindow - basic reduction
[info] - reduceByKeyAndWindow - key already in window and new value added into window
[info] - reduceByKeyAndWindow - new key added into window
[info] - reduceByKeyAndWindow - key removed from window
[info] - reduceByKeyAndWindow - larger slide time
[info] - reduceByKeyAndWindow - big test
[info] - reduceByKeyAndWindow with inverse function - basic reduction
[info] - reduceByKeyAndWindow with inverse function - key already in window and new value added into window
[info] - reduceByKeyAndWindow with inverse function - new key added into window
[info] - reduceByKeyAndWindow with inverse function - key removed from window
[info] - reduceByKeyAndWindow with inverse function - larger slide time
[info] - reduceByKeyAndWindow with inverse function - big test
[info] - reduceByKeyAndWindow with inverse and filter functions - big test
[info] - groupByKeyAndWindow
[info] - countByWindow
[info] - countByValueAndWindow
[info] InputStreamsSuite:
[info] - socket input stream
[info] - flume input stream
[info] - file input stream
[info] - actor input stream
[info] - kafka input stream
[info] CheckpointSuite:
[info] - basic rdd checkpoints + dstream graph checkpoint recovery
[info] - recovery with map and reduceByKey operations
[info] - recovery with invertible reduceByKeyAndWindow operation
[info] - recovery with updateStateByKey operation
[info] - recovery with file input stream
[info] Passed: : Total 94, Failed 0, Errors 0, Passed 94, Skipped 0
[info] NextIteratorSuite:
[info] - one iteration
[info] - two iterations
[info] - empty iteration
[info] - close is called once for empty iterations
[info] - close is called once for non-empty iterations
[info] UnpersistSuite:
[info] - unpersist RDD
[info] PairRDDFunctionsSuite:
[info] - groupByKey
[info] - groupByKey with duplicates
[info] - groupByKey with negative key hash codes
[info] - groupByKey with many output partitions
[info] - reduceByKey
[info] - reduceByKey with collectAsMap
[info] - reduceByKey with many output partitons
[info] - reduceByKey with partitioner
[info] - join
[info] - join all-to-all
[info] - leftOuterJoin
[info] - rightOuterJoin
[info] - join with no matches
[info] - join with many output partitions
[info] - groupWith
[info] - zero-partition RDD
[info] - keys and values
[info] - default partitioner uses partition size
[info] - default partitioner uses largest partitioner
[info] - subtract
[info] - subtract with narrow dependency
[info] - subtractByKey
[info] - subtractByKey with narrow dependency
[info] - foldByKey
[info] - foldByKey with mutable result type
[info] JdbcRDDSuite:
[info] - basic functionality
[info] ZippedPartitionsSuite:
[info] - print sizes
[info] AccumulatorSuite:
[info] - basic accumulation
[info] - value not assignable from tasks
[info] - add value to collection accumulators
[info] - value not readable in tasks
[info] - collection accumulators
[info] - localValue readable in tasks
[info] UISuite:
[info] - jetty port increases under contention
[info] - jetty binds to port 0 correctly
[info] - string formatting of time durations
[info] - reading last n bytes of a file
[info] FileServerSuite:
[info] - Distributing files locally
[info] - Distributing files locally using URL as input
[info] - Dynamically adding JARS locally
[info] - Distributing files on a standalone cluster
[info] - Dynamically adding JARS on a standalone cluster
[info] ShuffleNettySuite:
[info] - groupByKey with compression
[info] - shuffle non-zero block size
[info] - shuffle serializer
[info] - zero sized blocks
[info] - zero sized blocks without kryo
[info] BroadcastSuite:
[info] - basic broadcast
[info] - broadcast variables accessed in multiple threads
[info] BlockManagerSuite:
[info] - StorageLevel object caching
[info] - BlockManagerId object caching
[info] - master + 1 manager interaction
[info] - master + 2 managers interaction
[info] - removing block
[info] - removing rdd
[info] - reregistration on heart beat
[info] - reregistration on block update
[info] - reregistration doesn't dead lock
[info] - in-memory LRU storage
[info] - in-memory LRU storage with serialization
[info] - in-memory LRU for partitions of same RDD
[info] - in-memory LRU for partitions of multiple RDDs
[info] - on-disk storage
[info] - disk and memory storage
[info] - disk and memory storage with getLocalBytes
[info] - disk and memory storage with serialization
[info] - disk and memory storage with serialization and getLocalBytes
[info] - LRU with mixed storage levels
[info] - in-memory LRU with streams
[info] - LRU with mixed storage levels and streams
[info] - negative byte values in ByteBufferInputStream
[info] - overly large block
[info] - block compression
[info] - block store put failure
[info] MapOutputTrackerSuite:
[info] - compressSize
[info] - decompressSize
[info] - master start and stop
[info] - master register and fetch
[info] - master register and unregister and fetch
[info] - remote fetch
[info] UtilsSuite:
[info] - memoryBytesToString
[info] - copyStream
[info] - memoryStringToMb
[info] - splitCommandString
[info] JobLoggerSuite:
[info] - inner method
[info] - inner variables
[info] - interface functions
[info] ClosureCleanerSuite:
[info] - closures inside an object
[info] - closures inside a class
[info] - closures inside a class with no default constructor
[info] - closures that don't use fields of the outer class
[info] - nested closures inside an object
[info] - nested closures inside a class
[info] ThreadingSuite:
[info] - accessing SparkContext form a different thread
[info] - accessing SparkContext form multiple threads
[info] - accessing multi-threaded SparkContext form multiple threads
[info] - parallel job execution
[info] ParallelCollectionSplitSuite:
[info] - one element per slice
[info] - one slice
[info] - equal slices
[info] - non-equal slices
[info] - splitting exclusive range
[info] - splitting inclusive range
[info] - empty data
[info] - zero slices
[info] - negative number of slices
[info] - exclusive ranges sliced into ranges
[info] - inclusive ranges sliced into ranges
[info] - large ranges don't overflow
[info] - random array tests
[info] - random exclusive range tests
[info] - random inclusive range tests
[info] - exclusive ranges of longs
[info] - inclusive ranges of longs
[info] - exclusive ranges of doubles
[info] - inclusive ranges of doubles
[info] TaskContextSuite:
[info] - Calls executeOnCompleteCallbacks after failure
[info] ShuffleSuite:
[info] - groupByKey with compression
[info] - shuffle non-zero block size
[info] - shuffle serializer
[info] - zero sized blocks
[info] - zero sized blocks without kryo
[info] CheckpointSuite:
[info] - basic checkpointing
[info] - RDDs with one-to-one dependencies
[info] - ParallelCollection
[info] - BlockRDD
[info] - ShuffledRDD
[info] - UnionRDD
[info] - CartesianRDD
[info] - CoalescedRDD
[info] - CoGroupedRDD
[info] - ZippedRDD
[info] - CheckpointRDD with zero partitions
[info] KryoSerializerSuite:
[info] - basic types
[info] - pairs
[info] - Scala data structures
[info] - custom registrator
[info] RDDSuite:
[info] - basic operations
[info] - SparkContext.union
[info] - aggregate
[info] - basic caching
[info] - caching with failures
[info] - empty RDD
[info] - cogrouped RDDs
[info] - zipped RDDs
[info] - partition pruning
[info] - mapWith
[info] - flatMapWith
[info] - filterWith
[info] - top with predefined ordering
[info] - top with custom ordering
[info] - takeSample
[info] DistributionSuite:
[info] - summary
[info] SizeEstimatorSuite:
[info] - simple classes
[info] - strings
[info] - primitive arrays
[info] - object arrays
[info] - 32-bit arch
[info] - 64-bit arch with no compressed oops
[info] PipedRDDSuite:
[info] - basic pipe
[info] - advanced pipe
[info] - pipe with env variable
[info] - pipe with non-zero exit status
[info] DistributedSuite:
[info] - task throws not serializable exception
[info] - local-cluster format
[info] - simple groupByKey
[info] - groupByKey where map output sizes exceed maxMbInFlight
[info] - accumulators
[info] - broadcast variables
[info] - repeatedly failing task
[info] - caching
[info] - caching on disk
[info] - caching in memory, replicated
[info] - caching in memory, serialized, replicated
[info] - caching on disk, replicated
[info] - caching in memory and disk, replicated
[info] - caching in memory and disk, serialized, replicated
[info] - compute without caching when no partitions fit in memory
[info] - compute when only some partitions fit in memory
[info] - passing environment variables to cluster
[info] - recover from node failures
[info] - recover from repeated node failures during shuffle-map
[info] - recover from repeated node failures during shuffle-reduce
[info] - recover from node failures with replication
[info] - unpersist RDDs
[info] - job should fail if TaskResult exceeds Akka frame size
[info] SortingSuite:
[info] - sortByKey
[info] - large array
[info] - large array with one split
[info] - large array with many partitions
[info] - sort descending
[info] - sort descending with one split
[info] - sort descending with many partitions
[info] - more partitions than elements
[info] - empty RDD
[info] - partition balancing
[info] - partition balancing for descending sort
[info] LocalSchedulerSuite:
[info] - Local FIFO scheduler end-to-end test
[info] - Local fair scheduler end-to-end test
[info] FailureSuite:
[info] - failure in a single-stage job
[info] - failure in a two-stage job
[info] - failure because task results are not serializable
[info] RateLimitedOutputStreamSuite:
[info] - write
[info] ClusterSchedulerSuite:
[info] - FIFO Scheduler Test
[info] - Fair Scheduler Test
[info] - Nested Pool Test
[info] DAGSchedulerSuite:
[info] - zero split job
[info] - run trivial job
[info] - local job
[info] - run trivial job w/ dependency
[info] - cache location preferences w/ dependency
[info] - trivial job failure
[info] - run trivial shuffle
[info] - run trivial shuffle with fetch failure
[info] - ignore late map task completions
[info] - run trivial shuffle with out-of-band failure and retry
[info] - recursive shuffle failures
[info] - cached post-shuffle
[info] DriverSuite:
13/07/08 10:49:34 WARN Utils: Your hostname, R9EKCBR resolves to a loopback address: 127.0.1.1; using 192.168.38.1 instead (on interface vmnet8)
13/07/08 10:49:34 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
13/07/08 10:49:36 WARN Utils: Your hostname, R9EKCBR resolves to a loopback address: 127.0.1.1; using 192.168.38.1 instead (on interface vmnet8)
13/07/08 10:49:36 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
[info] - driver should exit after finishing
[info] FileSuite:
[info] - text files
[info] - text files (compressed)
[info] - SequenceFiles
[info] - SequenceFile (compressed)
[info] - SequenceFile with writable key
[info] - SequenceFile with writable value
[info] - SequenceFile with writable key and value
[info] - implicit conversions in reading SequenceFiles
[info] - object files of ints
[info] - object files of complex types
[info] - write SequenceFile using new Hadoop API
[info] - read SequenceFile using new Hadoop API
[info] - file caching
[info] PartitioningSuite:
[info] - HashPartitioner equality
[info] - RangePartitioner equality
[info] - HashPartitioner not equal to RangePartitioner
[info] - partitioner preservation
[info] - partitioning Java arrays should fail
[info] - zero-length partitions should be correctly handled
[info] SparkListenerSuite:
[info] - local metrics
[info] Passed: : Total 281, Failed 0, Errors 0, Passed 281, Skipped 0
[success] Total time: 814 s, completed Jul 8, 2013 10:49:57 AM