#' When plotting multiple data series that share a common x axis but different y axes,
#' we can simply plot each graph separately. This suffers from the drawback that the shared axis will typically
#' not align across graphs due to different plot margins.
#' One easy solution is to reshape2::melt() the data and use ggplot2's facet_grid() mapping. However, there is
#' no way to label individual y axes.
#' facet_grid() and facet_wrap() were designed to plot small multiples, where both x- and y-axis ranges are
#' shared across all plots in the faceting. While the facet_ calls allow us to use different scales via
#' the \code{scales = "free"} argument, they should not be abused this way.
#' A more robust approach is to use the grid package's grid.draw() together with rbind() and ggplotGrob()
#' to create a grid of individual plots whose axes are properly aligned within the grid; see the sketch below.
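
#' As a minimal sketch of this approach (the data frame and column names below are
#' hypothetical), each plot is converted to a grob, the grobs are row-bound with
#' size = "first" so both panels adopt the first plot's widths, and the result is
#' drawn as a single grid with an aligned x axis:

library(ggplot2)
library(grid)

# Hypothetical data: one shared x, two series on very different y scales.
df <- data.frame(x = 1:100, y1 = cumsum(rnorm(100)), y2 = runif(100, 0, 1000))

p1 <- ggplot(df, aes(x, y1)) + geom_line() + ylab("series 1")
p2 <- ggplot(df, aes(x, y2)) + geom_line() + ylab("series 2")

# rbind() dispatches to the gtable method, which reconciles the panel widths.
g <- rbind(ggplotGrob(p1), ggplotGrob(p2), size = "first")
grid.newpage()
grid.draw(g)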
%pyspark
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
import StringIO

# Render a matplotlib figure inline in Zeppelin by emitting it as %html SVG.
def show(p):
    img = StringIO.StringIO()
    p.savefig(img, format='svg')
    img.seek(0)
    # Assumed completion: the original snippet is truncated after savefig().
    print("%html <div style='width:600px'>" + img.getvalue() + "</div>")
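
# Example usage (hypothetical data): build a figure, then render it inline.
plt.plot([0, 1, 2, 3], [3, 1, 4, 1])
show(plt)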
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object ServerSparkContext {
  private[this] lazy val _sqlContext = {
    val conf = new SparkConf()
      .setAppName("....")
    val sc = new SparkContext(conf)
    // TODO: Bug in Spark: http://stackoverflow.com/questions/30323212
    val ctx = new HiveContext(sc)
    ctx.setConf("spark.sql.hive.convertMetastoreParquet", "false")
    ctx
  }
  // Assumed accessor; the original snippet is truncated after setConf().
  def sqlContext: HiveContext = _sqlContext
}
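
// Example usage: callers share the single, lazily created context.
// val df = ServerSparkContext.sqlContext.sql("SELECT 1")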