
Ravi Magham (mravi)

  • Lyft
  • Santa Clara
@mravi
mravi / gist:86e6cab649929dc74d0a9578670cca64
Created January 24, 2017 16:43 — forked from mikeyk/gist:1329319
Testing storage of millions of keys in Redis
#!/usr/bin/env python
# Test the storage cost of millions of small keys in Redis;
# pylibmc (a memcached client) is imported for a side-by-side comparison.
import redis
import random
import pylibmc
import sys

# Non-default ports: both clients point at dedicated test instances.
r = redis.Redis(host='localhost', port=6389)
mc = pylibmc.Client(['localhost:11222'])
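The preview stops at the client setup. As a rough companion sketch in Java (the gist itself is Python), here is the same kind of bulk-write test using the Jedis client; the non-default port is carried over from the snippet, while the key layout and count are made up for illustration:

import redis.clients.jedis.Jedis;

public class RedisKeyStorageTest {
    public static void main(String[] args) {
        // Port 6389 mirrors the gist's non-default test instance.
        try (Jedis r = new Jedis("localhost", 6389)) {
            for (int i = 0; i < 1_000_000; i++) {
                r.set("key:" + i, "value:" + i);  // hypothetical key layout
            }
            // dbSize() counts keys in the current database;
            // INFO's memory section shows what they cost.
            System.out.println("keys stored: " + r.dbSize());
            System.out.println(r.info("memory"));
        }
    }
}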
@mravi
mravi / SparkUtils.scala
Created July 26, 2016 13:51 — forked from ibuenros/SparkUtils.scala
Spark productionizing utilities developed by Ooyala, presented at Spark Summit 2014
//==================================================================
// SPARK INSTRUMENTATION
//==================================================================
import com.codahale.metrics.{MetricRegistry, Meter, Gauge}
import org.apache.spark.{SparkEnv, Accumulator}
import org.apache.spark.metrics.source.Source
import org.joda.time.DateTime
import scala.collection.mutable
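The gist goes on to wrap these Codahale types into a Spark metrics Source. As background only, a tiny standalone sketch of the two primitives the imports pull in; all names here are illustrative:

import com.codahale.metrics.Gauge;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

public class MetricsPrimitives {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();

        // A Meter tracks the rate of events, e.g. records processed.
        Meter processed = registry.meter(MetricRegistry.name("job", "records"));

        // A Gauge samples a value on demand, e.g. current heap usage.
        registry.register(MetricRegistry.name("job", "heapUsed"),
                (Gauge<Long>) () -> Runtime.getRuntime().totalMemory()
                        - Runtime.getRuntime().freeMemory());

        for (int i = 0; i < 10_000; i++) {
            processed.mark();  // one record handled
        }
        System.out.println("mean rate: " + processed.getMeanRate() + " events/s");
    }
}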
@mravi
mravi / gist:5a78c56a4e06741f0ad6
Last active September 20, 2015 16:33 — forked from debasishg/gist:8172796
A collection of links for streaming algorithms and data structures
  1. General Background and Overview
1. Check out the source code from https://github.com/apache/incubator-zeppelin
2. Build it against Spark 1.3 and the matching Hadoop version:
mvn clean package -Pspark-1.3 -Dhadoop.version=2.6.0 -Phadoop-2.6 -DskipTests
3. Put the following jars on the Spark interpreter classpath by placing them in $ZEPPELIN_HOME/interpreter/spark:
a. hbase-client.jar
b. hbase-protocol.jar
c. hbase-common.jar
d. phoenix-4.4.x-client-without-hbase.jar
4. Start Zeppelin. (A quick way to verify the Phoenix client jar is usable is sketched below.)
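Not part of the gist: once the jars are in place, one sanity check is to open a plain Phoenix JDBC connection from the same classpath. A minimal sketch, assuming a local ZooKeeper quorum; SYSTEM.CATALOG is queried only because it always exists:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixClasspathCheck {
    public static void main(String[] args) throws Exception {
        // "localhost" stands in for the HBase cluster's ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             // SYSTEM.CATALOG is Phoenix's own metadata table, so it is always present.
             ResultSet rs = stmt.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}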
@mravi
mravi / PhoenixSparkJob.java
Created December 6, 2014 02:55
Phoenix Spark Example
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.phoenix.mapreduce.PhoenixInputFormat;
import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
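Only the import list survives in the preview, but it telegraphs the pattern: Phoenix's MapReduce input format fed into Spark's newAPIHadoopRDD. A hedged reconstruction under stated assumptions: the STOCK table, its columns, and the StockWritable class are illustrative, and the exact PhoenixMapReduceUtil overloads vary across Phoenix releases:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
import org.apache.phoenix.mapreduce.PhoenixInputFormat;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class PhoenixSparkSketch {

    // Minimal DBWritable mapping two columns of the illustrative STOCK table.
    public static class StockWritable implements DBWritable, Writable {
        String stockName;
        int year;

        public void readFields(ResultSet rs) throws SQLException {
            stockName = rs.getString("STOCK_NAME");
            year = rs.getInt("RECORDING_YEAR");
        }
        public void write(PreparedStatement ps) throws SQLException {
            ps.setString(1, stockName);
            ps.setInt(2, year);
        }
        public void write(DataOutput out) throws IOException {
            out.writeUTF(stockName);
            out.writeInt(year);
        }
        public void readFields(DataInput in) throws IOException {
            stockName = in.readUTF();
            year = in.readInt();
        }
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set(HConstants.ZOOKEEPER_QUORUM, "localhost");  // illustrative quorum
        Job job = Job.getInstance(conf);
        // Wires the query and the writable class into the job configuration.
        PhoenixMapReduceUtil.setInput(job, StockWritable.class, "STOCK",
                "SELECT STOCK_NAME, RECORDING_YEAR FROM STOCK");

        JavaSparkContext sc = new JavaSparkContext(new SparkConf().setAppName("phoenix-spark"));
        // Phoenix hands each row to Spark as a (NullWritable, StockWritable) pair.
        // (Raw class literal: compiles with an unchecked warning.)
        JavaPairRDD<NullWritable, StockWritable> rows = sc.newAPIHadoopRDD(
                job.getConfiguration(), PhoenixInputFormat.class,
                NullWritable.class, StockWritable.class);
        System.out.println("rows read: " + rows.count());
        sc.stop();
    }
}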
@mravi
mravi / StockBean.java
Created December 6, 2014 02:51
Phoenix MR Example
package org.apache.phoenix.example.bean;

import java.util.Arrays;

public final class StockBean {
    private String stockName;
    private Integer year;
    private double[] recordings;
    // The remainder of the bean (constructors and accessors) is cut off in the gist preview.
}
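The bean above pairs with a Phoenix MapReduce job. A driver sketch modeled on the common Phoenix MR pattern, assuming illustrative STOCK and STOCK_STATS tables, the StockWritable shape sketched under the Spark example above, and StockMapper/StockReducer classes that are not shown:

Configuration conf = HBaseConfiguration.create();
Job job = Job.getInstance(conf, "stock-stats-job");
// Read rows of STOCK into StockWritable values...
PhoenixMapReduceUtil.setInput(job, StockWritable.class, "STOCK",
        "SELECT STOCK_NAME, RECORDING_YEAR, RECORDINGS FROM STOCK");
// ...and write the aggregated results back to STOCK_STATS.
PhoenixMapReduceUtil.setOutput(job, "STOCK_STATS", "STOCK_NAME,MAX_RECORDING");
job.setMapperClass(StockMapper.class);    // hypothetical mapper: stock name -> recording
job.setReducerClass(StockReducer.class);  // hypothetical reducer: max per stock
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(DoubleWritable.class);
job.setOutputKeyClass(NullWritable.class);
job.setOutputValueClass(StockWritable.class);
job.waitForCompletion(true);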