DataFrame simple aggregation performance benchmark
from pyspark.sql.functions import avg, col  # needed for col() and avg()
data = sqlContext.load("/home/rxin/ints.parquet")
data.groupBy("a").agg(col("a"), avg("num")).collect()  # average "num" per value of "a"
import org.apache.spark.sql.functions._  // for col() and avg()
val data = sqlContext.load("/home/rxin/ints.parquet")
data.groupBy("a").agg(col("a"), avg("num")).collect()  // average "num" per value of "a"
# Generate the benchmark data: 10 million rows with a key "a" in [1, 10],
# an int "num" in [1, 100] and a short string "str", saved as Parquet.
import random
from pyspark.sql import Row
data = sc.parallelize(xrange(1000)).flatMap(lambda x: [
    Row(a=random.randint(1, 10), num=random.randint(1, 100), str=("a" * random.randint(1, 30)))
    for i in xrange(10000)])
dataTable = sqlContext.createDataFrame(data)
dataTable.saveAsParquetFile("/home/rxin/ints.parquet")
# Functional (RDD) API, Python: sum and count per key, then divide to get the average.
pdata = sqlContext.load("/home/rxin/ints.parquet").select("a", "num")
sum_count = (
    pdata.map(lambda x: (x.a, [x.num, 1]))
    .reduceByKey(lambda x, y: [x[0] + y[0], x[1] + y[1]])
    .collect())
[(x[0], float(x[1][0]) / x[1][1]) for x in sum_count]
// Functional (RDD) API, Scala: sum and count per key, then divide to get the average.
val pdata = sqlContext.load("/home/rxin/ints.parquet").select("a", "num")
val sum_count = pdata.map { row => (row.getInt(0), (row.getInt(1), 1)) }
  .reduceByKey { (a, b) => (a._1 + b._1, a._2 + b._2) }
  .collect()
sum_count.foreach { case (a, (sum, count)) => println(s"$a: ${sum.toDouble / count}") }
Here you can find some analysis of this benchmark and its code: http://0x0fff.com/spark-dataframes-are-faster-arent-they/
This is confusing, since in data science and machine learning we need comparisons of real-life algorithms. Take a random forest from scikit-learn (not from Spark ML, which is a very poor library) or a deep neural network and run that instead.
@melrief
// is it always better to use DataFrames instead of the functional API? //
No, it depends on the application. DataFrames are great for interactive analysis and BAU BI, but when writing my own machine learning algorithms or building complex applications I stick to the functional API.
// the reason why the DataFrame implementation is faster is only because of the Catalyst optimizer? //
No, I imagine it also uses mutable integers under the hood, whereas the Scala version uses tuple copying.
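Roughly, the difference is between updating a per-key buffer in place and allocating a fresh tuple for every record merged. A minimal, non-Spark sketch of the two styles (the sample rows and the [sum, count] buffer layout are only illustrative, not taken from the benchmark):

import scala.collection.mutable

// Illustrative rows: (a, num) pairs like the benchmark data.
val rows = Seq((1, 10), (1, 20), (2, 5), (2, 7))

// Mutable buffers, roughly what the DataFrame aggregation does internally:
// one [sum, count] array per key, updated in place, no new object per record.
val buffers = mutable.Map.empty[Int, Array[Long]]
rows.foreach { case (a, num) =>
  val buf = buffers.getOrElseUpdate(a, Array(0L, 0L))
  buf(0) += num
  buf(1) += 1
}

// Tuple copying, like the reduceByKey closure above: every merge builds a new tuple.
val summed = rows
  .map { case (a, num) => a -> (num.toLong, 1L) }
  .groupBy(_._1)
  .mapValues(_.map(_._2).reduce((x, y) => (x._1 + y._1, x._2 + y._2)))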
// Would it be possible to bring this optimization also to the functional API? //
Yes and no. Yes, in that the user could use something slightly lower level, like combineByKey or mapPartitions, to pre-aggregate mutably at the partition level first (see the sketch below). No, in that it would be impossible for the Spark API to do this automagically.
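For concreteness, a rough sketch of that partition-level pre-aggregation with combineByKey, reusing the pdata RDD from the Scala snippet above (the Array[Long] buffer layout and the avgByKey name are mine, just for illustration):

// Sketch: mutable pre-aggregation of (sum, count) per key with combineByKey.
val avgByKey = pdata
  .map(row => (row.getInt(0), row.getInt(1)))
  .combineByKey(
    (num: Int) => Array(num.toLong, 1L),         // createCombiner: new [sum, count] buffer
    (buf: Array[Long], num: Int) => {            // mergeValue: mutate in place within a partition
      buf(0) += num; buf(1) += 1; buf
    },
    (b1: Array[Long], b2: Array[Long]) => {      // mergeCombiners: merge buffers across partitions
      b1(0) += b2(0); b1(1) += b2(1); b1
    })
  .mapValues { case Array(sum, count) => sum.toDouble / count }
  .collect()

combineByKey builds one mutable buffer per key per partition and only ships the merged buffers across the network, which is roughly the kind of work the DataFrame aggregation does for you automatically.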