Bairen Yi byronyi

@byronyi
byronyi / qing.md
Last active August 29, 2015 14:21

The Qing Success Story

  • The Manchu tribes were descendants of the Jurchen
  • In the 16th century, only the southernmost region of Manchuria had been brought under Chinese-style intensive agriculture
  • The Ming had recognized the frontier nature of this region by organizing it into military districts rather than under civil administration alone
  • In their rise to power, the Manchus took full advantage of their strategic position on a frontier where they could learn Chinese ways and yet not be entirely subjected to Chinese rule

The Manchu Conquest

@byronyi
byronyi / ming.md
Last active August 29, 2015 14:21

Government in the Ming Dynasty

Legacies of the Hongwu Emperor

  • During the 276 years of the Ming, China's population doubled
  • Destructive domestic warfare was largely avoided
  • Great achievements in education and philosophy, literature and art reflected the high cultural level of the elite gentry society
  • The Ming did not attempt a continuation of the Song but tried, in theory, to return to the models of the Han and Tang
@byronyi
byronyi / iunzip.py
Last active August 29, 2015 14:22 — forked from andrix/iunzip.py
import itertools
from operator import itemgetter

def iunzip(iterable):
    """iunzip is the same as zip(*iterable), but returns iterators instead of
    expanding the whole iterable up front. Mostly useful for large sequences."""
    # Peek at the first tuple to find out how many columns there are.
    _tmp, iterable = itertools.tee(iterable, 2)
    iters = itertools.tee(iterable, len(_tmp.next()))
    # Lazily project the i-th field out of each independent copy of the input.
    return (itertools.imap(itemgetter(i), it) for i, it in enumerate(iters))
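A quick usage sketch of iunzip (hypothetical input; Python 2, matching the itertools.imap and .next() calls above):

pairs = [(0, 'a'), (1, 'b'), (2, 'c')]
xs, ys = iunzip(pairs)  # two lazy "columns" over the same input
print list(xs)          # [0, 1, 2]
print list(ys)          # ['a', 'b', 'c']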
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # vagrantup.com.
end
# Format and start the HDFS NameNode (web UI exposed on port 50070).
docker run -d -v `pwd`/hadoop:/hadoop -e "HADOOP_HOME=/hadoop" -v `pwd`/java:/java -e "JAVA_HOME=/java" -v `pwd`/conf:/conf -e "HADOOP_CONF_DIR=/conf" --name=name-node -h name-node -p 50070:50070 ubuntu bash -c "/hadoop/bin/hdfs namenode -format && /hadoop/bin/hdfs namenode"
# Start an HDFS DataNode linked to the NameNode.
docker run -d -v `pwd`/hadoop:/hadoop -e "HADOOP_HOME=/hadoop" -v `pwd`/java:/java -e "JAVA_HOME=/java" -v `pwd`/conf:/conf -e "HADOOP_CONF_DIR=/conf" --link name-node ubuntu bash -c "/hadoop/bin/hdfs datanode"
# Start the YARN ResourceManager (web UI exposed on port 8088).
docker run -d -v `pwd`/hadoop:/hadoop -e "HADOOP_HOME=/hadoop" -v `pwd`/java:/java -e "JAVA_HOME=/java" -v `pwd`/conf:/conf -e "HADOOP_CONF_DIR=/conf" --name=resource-manager -h resource-manager -p 8088:8088 ubuntu bash -c "/hadoop/bin/yarn resourcemanager"
# Start a YARN NodeManager linked to both the ResourceManager and the NameNode.
docker run -d -v `pwd`/hadoop:/hadoop -e "HADOOP_HOME=/hadoop" -v `pwd`/java:/java -e "JAVA_HOME=/java" -v `pwd`/conf:/conf -e "HADOOP_CONF_DIR=/conf" --link resource-manager --link name-node ubuntu bash -c "/hadoop/bin/yarn nodemanager"
# Open an interactive client container with Spark mounted alongside Hadoop.
docker run --rm -it -v `pwd`/hadoop:/hadoop -e "HADOOP_HOME=/hadoop" -v `pwd`/java:/java -e "JAVA_HOME=/java" -v `pwd`/spark:/spark -e "SPARK_HOME=/spark" -v `pwd`/conf:/conf -e "HADOOP_CONF_DIR=/conf" --link resource-manager --link name-node ubuntu
# From inside the client container, submit the SparkPi example to YARN in cluster mode.
/spark/bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 /spark/lib/spark-examples*.jar 10
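Every container above mounts `pwd`/conf as HADOOP_CONF_DIR, but the gist does not show its contents. At a minimum, core-site.xml and yarn-site.xml there would have to point HDFS and YARN clients at the linked name-node and resource-manager containers; a rough sketch (hostnames taken from the commands above, port assumed):

<!-- conf/core-site.xml (assumed) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://name-node:8020</value>
  </property>
</configuration>

<!-- conf/yarn-site.xml (assumed) -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>resource-manager</value>
  </property>
</configuration>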
@byronyi
byronyi / script.sh
Created July 31, 2015 08:10
Dockerized Hadoop using Docker 1.8.0rc1
# Start the NameNode and ResourceManager from prebuilt images, registering each
# container as a discoverable service on the "hadoop" network (Docker 1.8
# experimental networking via --publish-service).
docker run -d --name namenode -h namenode -p 50070:50070 -v `pwd`/conf:/conf --publish-service=namenode.hadoop hadoop/namenode
docker run -d --name resourcemanager -h resourcemanager -p 8088:8088 -v `pwd`/conf:/conf --publish-service=resourcemanager.hadoop hadoop/resourcemanager
# Start one DataNode and one NodeManager on the same network.
docker run -d -v `pwd`/conf:/conf --publish-service datanode1.hadoop hadoop/datanode
docker run -d -v `pwd`/conf:/conf --publish-service nodemanager1.hadoop hadoop/nodemanager
$ make clean -f Makefile_STANDALONE_LINUX && make -j8 -f Makefile_STANDALONE_LINUX
rm -rf linux_standalone; rm -f linux_standalone/spectrast linux_standalone/plotspectrast linux_standalone/plotspectrast.cgi linux_standalone/Lib2HTML core* *~
g++ -I/usr/include -Werror -Wformat -Wstrict-aliasing -Wno-deprecated -Wno-char-subscripts -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -DSTANDALONE_LINUX -I/usr/include -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Werror -Wformat -Wstrict-aliasing -Wno-deprecated -Wno-char-subscripts -O2 -c SpectraSTLib.cpp -o linux_standalone/SpectraSTLib.o
g++ -I/usr/include -Werror -Wformat -Wstrict-aliasing -Wno-deprecated -Wno-char-subscripts -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -DSTANDALONE_LINUX -I/usr/include -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Werror -Wformat -Wstrict-aliasing -Wno-deprecated -Wno-char-subscripts -O2 -c SpectraSTLibIndex.cpp -o linux_standalone/SpectraSTLibIndex.o
g++ -I/usr/include -Werror -Wformat -Wstrict-aliasing -Wno-deprecated -Wno-char
# Generate replay scripts for the first 50 jobs of the FB-2009 sample trace.
java GenerateReplayScript \
FB-2009_samples_24_times_1hr_0_first50jobs.tsv \
100 \
5 \
67108864 \
10 \
scriptsTest \
workGenInput \
workGenOutputTest \
67108864 \
#!/bin/bash
MASTER=10.0.1.254

# Install Oracle JDK 7u80 into /usr/local/opt/java, accepting the Oracle
# download license via the cookie passed to curl.
export JAVA_HOME=/usr/local/opt/java
mkdir -p $JAVA_HOME
curl -Lb "oraclelicense=a" http://download.oracle.com/otn-pub/java/jdk/7u80-b15/jdk-7u80-linux-x64.tar.gz | tar xz --strip-components=1 -C $JAVA_HOME

# Install Hadoop into /usr/local/opt/hadoop.
export HADOOP_HOME=/usr/local/opt/hadoop
mkdir -p $HADOOP_HOME