Prateek Rungta (prateek): GitHub Gists
//
// NSObject+setValuesForKeysWithJSONDictionary.h
// SafeSetDemo
//
// Created by Tom Harrington on 12/29/11.
// Copyright (c) 2011 Atomic Bird, LLC. All rights reserved.
//
#import <Foundation/Foundation.h>
/*
* Copyright 2013 Cloudera Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
@prateek
prateek / GiraphInstall
Last active December 29, 2015 05:29
Instructions to get Giraph up and running
# As of this writing, Giraph is not published to a central Maven repository,
# so we build it locally and install it into our local Maven repository
git clone https://github.com/apache/giraph.git
# And retrieve a patch for `GIRAPH-442`
wget http://www.mail-archive.com/[email protected]/msg00945/check.diff
# We also pin the codebase to the 1.0.0 release
cd giraph
git checkout release-1.0.0-RC3
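# (The gist preview is truncated here. Given the note above about installing
# into the local Maven repository, the remaining steps would plausibly be the
# two below; they are assumptions, not part of the original gist.)
git apply ../check.diff
mvn -DskipTests clean install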
@prateek
prateek / oozie-java-driver-hook
Created March 10, 2014 13:49
Code Snippet for Hooking Java Drivers in Oozie
import org.apache.hadoop.mapreduce.Job;

// Runnable wrapper around a new-API Hadoop Job; see the usage note below.
public class DriverHook implements Runnable {
    private final Job job;

    private DriverHook(Job job) { this.job = job; }

    public static DriverHook create(Job job) { return new DriverHook(job); }

    public void run() {
        System.out.println("Hello from MyHook");
        if (job == null)
            throw new NullPointerException("err msg");
    }
}
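The gist preview ends before showing how the hook is registered. A hypothetical registration from the driver's main() (assuming the usual Hadoop imports, and that the goal is to run the hook when Oozie kills the launcher JVM):

    Job job = Job.getInstance(new Configuration(), "example-job");
    Runtime.getRuntime().addShutdownHook(new Thread(DriverHook.create(job)));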
#!/usr/bin/python
import csv
import sys
import argparse
import io

# Some fields are larger than csv's default limit, so raise it
csv.field_size_limit(sys.maxsize)

parser = argparse.ArgumentParser(description='Clean csv of in-line newlines')
parser.add_argument('infile', help='Path to input CSV file')
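The gist preview is truncated at this point. A minimal sketch of how the cleaning loop might continue (Python 3 semantics; writing to stdout and folding embedded newlines into spaces are assumptions, not the original gist's choices):

    args = parser.parse_args()

    # Opening with newline='' lets the csv module see newlines embedded in
    # quoted fields, so we can fold them away before re-emitting each row.
    with io.open(args.infile, newline='') as src:
        writer = csv.writer(sys.stdout)
        for row in csv.reader(src):
            writer.writerow([f.replace('\n', ' ').replace('\r', ' ')
                             for f in row])

Invoked as, e.g., python clean_csv.py input.csv > output.csv (script name assumed), each record then occupies a single physical line.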

Recent versions of Cloudera's Impala added NDV, a "number of distinct values" aggregate function that uses the HyperLogLog algorithm to estimate this number, in parallel, in a fixed amount of space.

This can make a really, really big difference: in a large table I tested this on, which had roughly 100M unique values of mycolumn, using NDV(mycolumn) got me an approximate answer in 27 seconds, whereas the exact answer using count(distinct mycolumn) took ... well, I don't know how long, because I got tired of waiting for it after 45 minutes.

It's fun to note, though, that because of another recent addition to Impala's dialect of SQL, the fnv_hash function, you don't actually need to use NDV; instead, you can build HyperLogLog yourself from mathematical primitives.

HyperLogLog hashes each value it sees and assigns each hash to a bucket based on its low-order bits. It's common to use 1024 buckets, so we can pick the bucket with a bitwise & against 1023:

select
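The original query is truncated above. To make the mechanics concrete, here is a small self-contained Python sketch of the same raw HyperLogLog estimator; md5 stands in for Impala's fnv_hash, and the function name and 1024-bucket choice are illustrative, not taken from the original:

    import hashlib

    def hll_estimate(values, b=10):
        # m = 2**b buckets; b=10 gives the 1024 buckets mentioned above.
        m = 1 << b
        max_rho = [0] * m
        for v in values:
            # Any well-mixed 64-bit hash works; md5's low 8 bytes stand in
            # for fnv_hash here.
            h = int.from_bytes(hashlib.md5(str(v).encode()).digest()[:8],
                               'little')
            bucket = h & (m - 1)   # low-order bits pick the bucket
            rest = h >> b
            # rho = 1-based position of the first set bit of the remaining bits
            rho = 1
            while rho < 64 - b and rest & 1 == 0:
                rest >>= 1
                rho += 1
            max_rho[bucket] = max(max_rho[bucket], rho)
        alpha = 0.7213 / (1 + 1.079 / m)   # bias correction for large m
        return alpha * m * m / sum(2.0 ** -r for r in max_rho)

For example, hll_estimate(range(100000)) typically lands within a few percent of 100000, using only 1024 small counters.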

Hadoop Administration Resources

  • Official Docs - overwhelming, and invaluable (link)

  • Kathleen Ting: 7 Deadly Hadoop Misconfigurations (video, slides)

  • Philip Zeyliger, whose day job is writing Cloudera Manager, presented Debugging Distributed Systems (slides)

  • Gwen Shapira: Scaling ETL with Hadoop (slides)

  • Cloudera's Resource Library (link)

Usage Instructions for Viz-Oozie

    # install graphviz
    $ sudo yum install -y graphviz

    # install vizoozie
    $ git clone https://github.com/iprovalo/vizoozie
    $ cd vizoozie
    $ sudo python setup.py install

CDH5 Cluster Setup

These are the steps I followed to set up a CentOS 6.5 VM and install CDH5 and CM5 on it. Run all of these commands on a single node; if running on a cluster, that node will serve as the master.

Caveats:

  • This uses the embedded PostgreSQL database for the services, which is a TERRIBLE idea for anything but a short-lived POC environment.
  • This was done for a 4-node POC cluster where a single instance was the dedicated master: all the CM management and Hadoop master daemons would run on it, and the 3 remaining nodes would be data nodes.

Steps to follow

  1. Create a new local CentOS 6.5 image based on this ISO.