# Python DataFrame version
from pyspark.sql.functions import avg, col
data = sqlContext.load("/home/rxin/ints.parquet")
data.groupBy("a").agg(col("a"), avg("num")).collect()
// Scala DataFrame version
import org.apache.spark.sql.functions._
val data = sqlContext.load("/home/rxin/ints.parquet")
data.groupBy("a").agg(col("a"), avg("num")).collect()
# Generate the test data: 10,000,000 rows of (a, num, str) with random values
import random
from pyspark.sql import Row
data = sc.parallelize(xrange(1000)).flatMap(lambda x: [Row(a=random.randint(1, 10), num=random.randint(1, 100), str=("a" * random.randint(1, 30))) for i in xrange(10000)])
dataTable = sqlContext.createDataFrame(data)
dataTable.saveAsParquetFile("/home/rxin/ints.parquet")
# Python functional (RDD) version: per-key average via (sum, count) pairs
pdata = sqlContext.load("/home/rxin/ints.parquet").select("a", "num")
sum_count = (
    pdata.map(lambda x: (x.a, [x.num, 1]))
    .reduceByKey(lambda x, y:
        [x[0] + y[0], x[1] + y[1]])
    .collect())
[(x[0], float(x[1][0]) / x[1][1]) for x in sum_count]
// Scala functional (RDD) version
val pdata = sqlContext.load("/home/rxin/ints.parquet").select("a", "num")
val sum_count = pdata.map { row => (row.getInt(0), (row.getInt(1), 1)) }
  .reduceByKey { (a, b) =>
    (a._1 + b._1, a._2 + b._2)
  }.collect()
// toDouble avoids integer division when computing the average
sum_count.foreach { case (a, (sum, count)) => println(s"$a: ${sum.toDouble / count}") }
In Python, it looks like you first have to do this to avoid a NameError (name 'sqlContext' is not defined):
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
I tried the DataFrame examples in Python and Scala, but col() and avg() inside agg() are not recognized in either application. Do we need to import another package?
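Most likely an import is missing. A minimal sketch, assuming Spark 1.3+, where the Python helpers live in pyspark.sql.functions (the Scala equivalent is import org.apache.spark.sql.functions._):

# Assuming Spark 1.3+: col() and avg() are not in scope by default;
# they come from the pyspark.sql.functions module.
from pyspark.sql.functions import avg, col
data.groupBy("a").agg(col("a"), avg("num")).collect()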
// is it always better to use DataFrames instead of the functional API? //
No, it depends on the application. DataFrames are great for interactive analysis and business-as-usual BI, but when writing my own machine learning algorithms or building complex applications I stick to the functional API.
// the reason why the DataFrame implementation is faster is only because of the Catalyst optimizer? //
No, I imagine it also uses mutable integers under the hood, whereas the Scala version uses tuple copying.
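To make the tuple-copying point concrete, here is a toy Python illustration; this shows only the allocation pattern being described, not Spark's actual internals:

# Toy illustration only, not Spark internals: the first loop allocates a
# fresh tuple on every record, the second mutates one accumulator in place.
def sum_count_tuples(nums):
    acc = (0, 0)
    for n in nums:
        acc = (acc[0] + n, acc[1] + 1)  # new tuple per element
    return acc

def sum_count_mutable(nums):
    acc = [0, 0]
    for n in nums:
        acc[0] += n                     # same list updated in place
        acc[1] += 1
    return acc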
// Would it be possible to bring this optimization also to the functional API? //
Yes and no. Yes, in that the user could use something slightly lower level, like combineByKey or mapPartitions, to pre-aggregate mutably at the partition level first. No, in that it would be impossible for the Spark API to do this automagically.
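For instance, a minimal sketch of that pre-aggregation idea using combineByKey with a mutable accumulator, assuming the same pdata as in the benchmark above:

# Sketch of the combineByKey pre-aggregation mentioned above; the [sum, count]
# list is mutated in place within each partition instead of allocating a new
# pair per record, which is what the plain reduceByKey version does.
def create_acc(num):
    return [num, 1]        # one mutable [sum, count] accumulator per key

def merge_value(acc, num):
    acc[0] += num          # update the partition-local accumulator in place
    acc[1] += 1
    return acc

def merge_accs(a, b):
    a[0] += b[0]           # combine accumulators across partitions
    a[1] += b[1]
    return a

sum_count = (pdata.map(lambda x: (x.a, x.num))
             .combineByKey(create_acc, merge_value, merge_accs)
             .collect())
[(k, float(s) / c) for (k, (s, c)) in sum_count]

Because merge_value mutates the partition-local accumulator rather than building a new pair per record, this recovers some of the DataFrame version's advantage while staying in the functional API.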
Here you can find some analysis of this benchmark and its code: http://0x0fff.com/spark-dataframes-are-faster-arent-they/
This is confusing, since for data science and machine learning we need comparisons of real-life algorithms. Take a random forest from scikit-learn (not from Spark ML, which is a very poor library) or a deep neural network and run that.
This is really interesting! I have a question: is there a way to speed up the sum_count computation in the RDD so it is faster with Spark 1.3, or should the functional API never be used for this kind of operation?