@fzerorubigd
fzerorubigd / blocked
Last active August 29, 2015 14:16
Block list for COW in Iran; please add anything you need.
code.google.com
googleapis.com
googleusercontent.com
ytimg.com
youtube.com
youtube-nocookie.com
bitbucket.org
thepiratebay.se
humblebundle.com
plus.url.google.com
@denji
denji / golang-tls.md
Last active October 2, 2025 14:00 — forked from spikebike/client.go
Simple Golang HTTPS/TLS Examples
Generate private key (.key)
# Key considerations for algorithm "RSA" ≥ 2048-bit
openssl genrsa -out server.key 2048

# Key considerations for algorithm "ECDSA" ≥ secp384r1
# List ECDSA the supported curves (openssl ecparam -list_curves)
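The usual next step is pairing the key with a self-signed certificate; the standard openssl invocation (the 3650-day validity is just a common choice):

# Generate self-signed certificate (.crt) from the key above
openssl req -new -x509 -sha256 -key server.key -out server.crt -days 3650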
@rklaehn
rklaehn / Proxy.scala
Created January 31, 2015 16:29
Minimal akka http proxy
package akkahttptest
import akka.actor.ActorSystem
import akka.http.Http
import akka.stream.FlowMaterializer
import akka.http.server._
import akka.http.marshalling.PredefinedToResponseMarshallers._
import akka.stream.scaladsl.{HeadSink, Source}
object Proxy extends App {
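The preview cuts off at the object header. For comparison, a rough sketch of the same idea against the stable akka-http 10.1 Scala API (the gist targets the long-gone 2015 experimental API; the upstream host and both ports here are placeholders):

package akkahttptest

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.ActorMaterializer
import scala.concurrent.Future

object ProxySketch extends App {
  implicit val system: ActorSystem = ActorSystem("proxy")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Rewrite the authority of each incoming request and replay it upstream.
  // Host and the synthetic Timeout-Access header must not be forwarded.
  val handler: HttpRequest => Future[HttpResponse] = request =>
    Http().singleRequest(
      request
        .withUri(request.uri.withHost("upstream.example.org").withPort(9000))
        .withHeaders(request.headers.filterNot(h =>
          h.lowercaseName == "host" || h.lowercaseName == "timeout-access"))
    )

  Http().bindAndHandleAsync(handler, interface = "localhost", port = 8080)
}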
@honkskillet
honkskillet / byte-sizetuts.md
Last active August 22, 2024 14:19
A series of Golang tutorials with YouTube videos.
package thunder.streaming
import org.apache.spark.{SparkConf, Logging}
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
import org.apache.spark.streaming._
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.mllib.clustering.KMeansModel
import scala.util.Random.nextDouble
@RussellSpitzer
RussellSpitzer / sparksql.java
Last active August 29, 2015 14:13
Loading a table with the Java Cassandra context and registering it in the Hive context
package test;
/**
* Created by russellspitzer on 12/4/14.
*/
import java.io.Serializable;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
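The preview ends at the imports. For flavor, a sketch of the equivalent in the connector's Scala API, Spark 1.x era (the keyspace, table, and row layout are invented for illustration):

import com.datastax.spark.connector._   // adds sc.cassandraTable(...)
import org.apache.spark.sql.hive.HiveContext

case class User(id: Int, name: String)  // hypothetical table layout

val hc = new HiveContext(sc)
import hc.createSchemaRDD               // implicit RDD[Product] => SchemaRDD (Spark 1.x)

val users = sc.cassandraTable[User]("test_ks", "users")
users.registerTempTable("users")        // goes through the implicit conversion above
hc.sql("SELECT name FROM users").collect()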
@mgurov
mgurov / gist:a58c010889124a0095b6
Created December 29, 2014 07:43
An Oracle-style DUAL table in Apache Spark
import org.apache.spark.sql._
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
val schema = StructType(Seq(StructField("dual", StringType, true)))
val rowRDD = sc.parallelize(Seq(Row("dual")))
val schemaRDD = hc.applySchema(rowRDD, schema)
schemaRDD.registerTempTable("dual")
hc.hql("select 'hello world' from dual").collect
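(On Spark 1.1 and later, hc.hql is deprecated; hc.sql runs HiveQL by default on a HiveContext and gives the same result.)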
@smartkiwi
smartkiwi / init_pyspark.py
Created December 11, 2014 21:01
Run pyspark as a standalone Python module, without bin/pyspark
import os
import sys
# Set the path for spark installation
# this is the path where you have built spark using sbt/sbt assembly
os.environ['SPARK_HOME'] = "/Users/vvlad/spark/spark-1.0.2"
# Append to PYTHONPATH so that pyspark could be found
sys.path.append("/Users/vvlad/spark/spark-1.0.2/python")
sys.path.append("/Users/vvlad/spark/spark-1.0.2/python/lib/py4j-0.8.1-src.zip")
@facboy
facboy / CheckpointBug.java
Created November 19, 2014 12:12
ClassCastException from JavaPairRDD.collectAsMap()
package org.facboy.spark;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;
import scala.reflect.ClassTag;
import scala.reflect.ClassTag$;
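A plausible mechanism, not confirmed by the truncated preview: the Java API builds RDDs with a fake ClassTag, so data read back from a checkpoint comes out as Array[Object] rather than Array[Tuple2], and the cast inside collectAsMap() then throws.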
val num = 1 to 100
//num: scala.collection.immutable.Range.Inclusive = Range(1,2,3,...,100)
val numRDD = sc.parallelize(num)
//numRDD: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[11] at parallelize at <console>:14
val numFilter = numRDD.filter(_ < 10)
//numFilter: org.apache.spark.rdd.RDD[Int] = FilteredRDD[12] at filter at <console>:16
val numMap = numFilter.map(_ + 10)