// this flavour is pure magic...
def toDouble: (Any) => Double = {
  case i: Int    => i
  case f: Float  => f
  case d: Double => d
}

// ...while this flavour is longer, but you are in full control:
object any2Double extends Function[Any, Double] {
  def apply(any: Any): Double =
    any match {
      case i: Int    => i
      case f: Float  => f
      case d: Double => d
    }
}
// useful when, e.g., you want to invoke any2Double from another, similar conversion...
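Both flavours above are plain `Function1[Any, Double]` values, so they compose like any other function. A self-contained sketch (the definitions are restated so it runs on its own; `halved` is a hypothetical derived conversion, not from the original):

```scala
// Same two flavours as above, restated so this sketch is self-contained.
def toDouble: (Any) => Double = {
  case i: Int    => i
  case f: Float  => f
  case d: Double => d
}

object any2Double extends Function[Any, Double] {
  def apply(any: Any): Double =
    any match {
      case i: Int    => i
      case f: Float  => f
      case d: Double => d
    }
}

// Because any2Double is a Function1, it composes with andThen:
// this is the "invoke it from another similar conversion" case.
val halved: Any => Double = any2Double.andThen(_ / 2.0)
```

`toDouble` relies on Scala's numeric widening: each `case i: Int => i` branch conforms weakly to the expected `Double` result type, so no explicit `.toDouble` call is needed.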
from __future__ import print_function
import sys
from math import sqrt
import argparse
from collections import defaultdict
from random import randint
from pyspark import SparkContext
package org.apache.spark.sql.utils

import org.apache.spark.Partitioner
import org.apache.spark.rdd.{CoGroupedRDD, RDD}
import org.apache.spark.sql.catalyst.{CatalystTypeConverters, ScalaReflection}
import org.apache.spark.sql.execution.LogicalRDD
import org.apache.spark.sql.types.{ArrayType, StructField, StructType}
import org.apache.spark.sql.{SQLContext, DataFrame, Row}
import scala.reflect.ClassTag
import scala.reflect.runtime.universe.TypeTag
import scala.collection.mutable.Map
import org.apache.spark.{Accumulator, AccumulatorParam, SparkContext}
import org.apache.spark.scheduler.{SparkListenerStageCompleted, SparkListener}
import org.apache.spark.SparkContext._

/**
 * Just print out the values of all accumulators from the stage.
 * Note that you will only get updates from *named* accumulators.
 */
#!/usr/bin/env python
__author__ = 'Aziz'
"""
Convert all IPython notebooks in a given directory into the selected format and place the output in a separate folder.

usage: python cipynb.py `directory` [-to FORMAT]

Uses `ipython nbconvert` and the `find` command (Unix-like OS).
"""
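The docstring names the two tools the script wraps; a hedged sketch of the equivalent one-liner (the `notebooks/` path and `html` target are illustrative, not from the script, and newer installs spell the converter `jupyter nbconvert`):

```shell
# Hand every notebook under notebooks/ to nbconvert, one at a time.
find notebooks/ -name '*.ipynb' -exec ipython nbconvert --to html {} \;
```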
name := "playground"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"

libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"
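One hedged packaging note, not part of the original build: if the jar is launched via `spark-submit` rather than `sbt run`, `spark-core` is commonly given `"provided"` scope so Spark's own classes are not bundled into the assembly:

```scala
// Hypothetical alternative for spark-submit deployments (not in the original
// build): the cluster supplies Spark at runtime, so exclude it from packaging.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0" % "provided"
```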
#!/bin/bash
#
# tomcat
#
# chkconfig: 345 96 30
# description: Start up the Tomcat servlet engine.
#
# processname: java
# pidfile: /var/run/tomcat.pid
#
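The `chkconfig: 345 96 30` header above (runlevels 3/4/5, start priority 96, stop priority 30) is what lets the init system register the script. A hedged sketch of the typical install commands, assuming the script is saved as `/etc/init.d/tomcat`:

```shell
chkconfig --add tomcat   # registers it, reading the "chkconfig: 345 96 30" header
chkconfig tomcat on      # enable at the listed runlevels
service tomcat start     # start it now
```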