hadoop, storm, spark, mesos, zookeeper

PART 1

[STEP 2] Adding a dedicated hadoop system user

check groups

$ compgen -g
prayagupd ...
prayag@prayag$ sudo addgroup hadoop
Adding group `hadoop' (GID 1002) ...
Done.

check users

prayag@prayag$ cut -d: -f1 /etc/passwd | grep hd
sshd
hduser

$ echo $USER
prayag@prayag:~$ sudo adduser --ingroup hadoop hduser
Adding user `hduser' ...
Adding new user `hduser' (1001) with group `hadoop' ...
Creating home directory `/home/hduser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for hduser
Enter the new value, or press ENTER for the default
        Full Name []:   
        Room Number []: 
        Work Phone []: 
        Home Phone []: 
        Other []: 
Is the information correct? [Y/n] Y

Change to user hduser

prayag@prayag:~$ su - hduser
Password: 
hduser@prayag:~$ 

[STEP 3] Generate an SSH key for hduser

Generate an SSH key for the hduser user with an empty passphrase, so the Hadoop control scripts can log in without prompting.

hduser@prayag:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
ed:24:f2:1f:81:08:c5:d1:3e:2e:1c:96:51:66:cd:b4 hduser@prayag
The key's randomart image is:
+--[ RSA 2048]----+
|     .o+++.      |
|     .oo. o.     |
|    .  +  E      |
|     .+.oo       |
|     oooS.+      |
|      oo.+ .     |
|       .. o      |
|         . .     |
|          .      |
+-----------------+
hduser@prayag:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

[STEP 4] Configuring SSH

Hadoop requires SSH access to manage its nodes; for this single-node setup that means hduser must be able to SSH to localhost without a password.

from user prayagupd, add the following line to /etc/sudoers (edit it with visudo) so that hduser can run sudo,

hduser ALL=(ALL) ALL

switch back to hduser and install the SSH server,

hduser@prayag:~$ sudo apt-get install ssh
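
Verify that hduser can now SSH to localhost without a password (accept the host key on the first connect):

hduser@prayag:~$ ssh localhost
hduser@prayag:~$ exit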

PART 2 (paths below are specific to Hadoop 2.2.0 under /usr/local)
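
This assumes the Hadoop 2.2.0 tarball has already been downloaded and extracted to /usr/local; if not, something like the following (download mirror omitted):

$ sudo tar -xzf hadoop-2.2.0.tar.gz -C /usr/local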

hduser@prayagupd:~$ sudo chown -R hduser:hadoop /usr/local/hadoop-2.2.0

[STEP 5] disable ipv6 for hadoop

hduser@prayag:~$ sudo vi /usr/local/hadoop-2.2.0/etc/hadoop/hadoop-env.sh

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

OR add the following lines to /etc/sysctl.conf:

# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
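
Then reload the kernel settings and confirm IPv6 is off:

$ sudo sysctl -p
$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6   # 1 means IPv6 is disabled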

[STEP 6] Update $HOME/.bashrc

# Set Hadoop-related environment variables
HADOOP_INSTALL=/usr/local/hadoop-2.2.0
HADOOP_HOME=$HADOOP_INSTALL
HADOOP_MAPRED_HOME=$HADOOP_INSTALL
HADOOP_COMMON_HOME=$HADOOP_INSTALL
HADOOP_HDFS_HOME=$HADOOP_INSTALL
YARN_HOME=$HADOOP_INSTALL
HADOOP_CONF_DIR=${HADOOP_INSTALL}/etc/hadoop


# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
# export JAVA_HOME=/usr/lib/jvm/java-6-sun

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# adding hadoop jars in classpath
for jar in $(find $HADOOP_INSTALL/ -type f -name "*.jar"); do
    HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$jar
done

CLASSPATH=$CLASSPATH:$HADOOP_CLASSPATH
PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin

export PATH CLASSPATH HADOOP_CLASSPATH
export HADOOP_INSTALL HADOOP_HOME HADOOP_MAPRED_HOME HADOOP_COMMON_HOME HADOOP_HDFS_HOME
export YARN_HOME HADOOP_CONF_DIR
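
Reload the profile and check that the hadoop binary is on the PATH:

$ source ~/.bashrc
$ hadoop version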
                                        

[STEP 7] configure etc/hadoop/*-site.xml

  • configure the directory where Hadoop stores its data files, the network ports it listens on, etc.
  • the setup uses Hadoop’s Distributed File System (HDFS), even though this little “cluster” contains only a single local machine.
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
$ sudo chmod 750 /app/hadoop/tmp
hduser@prayag:~$ sudo vi /usr/local/hadoop-2.2.0/etc/hadoop/core-site.xml

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
hduser@prayag:~$ sudo cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml

or 

hduser@prayag:~$ sudo cp /usr/local/hadoop-2.2.0/etc/hadoop/mapred-site.xml.template /usr/local/hadoop-2.2.0/etc/hadoop/mapred-site.xml

hduser@prayag:~$ sudo vi $HADOOP_HOME/etc/hadoop/mapred-site.xml

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>
hduser@prayag:~$ sudo vi $HADOOP_HOME/etc/hadoop/hdfs-site.xml

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>
</configuration>
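
Not covered in the original notes, but commonly added on 2.x: yarn-site.xml is where the MapReduce shuffle auxiliary service is enabled so that MapReduce jobs can run on YARN. A minimal sketch, written here with tee instead of vi:

hduser@prayag:~$ sudo tee /usr/local/hadoop-2.2.0/etc/hadoop/yarn-site.xml > /dev/null <<'EOF'
<configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
EOF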

[STEP 8] Formatting the HDFS filesystem via the NameNode

http://solaimurugan.blogspot.com/2013/11/installing-hadoop-2xx-single-node.html

hduser@prayag:~$ sudo vi $HADOOP_HOME/libexec/hadoop-config.sh 
this="${BASH_SOURCE-$0}"
common_bin=$(cd -P -- "$(dirname -- "$this")" && pwd -P)
script="$(basename -- "$this")"
this="$common_bin/$script"

[ -f "$common_bin/hadoop-layout.sh" ] && . "$common_bin/hadoop-layout.sh"

## add this near the top of the file (point it at your JDK)
export JAVA_HOME=/usr/local/jdk1.7.0

recommended (the 2.x way),

hduser@prayag:~$ hdfs namenode -format

or (deprecated in 2.x; the 1.x-style command still works but prints a deprecation warning),

  hduser@prayag:~$ hadoop namenode -format
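
If the format succeeds, the name directory under hadoop.tmp.dir is created and populated (a quick sanity check, not in the original notes):

hduser@prayag:~$ ls /app/hadoop/tmp/dfs/name/current
# expect fsimage_*, seen_txid and VERSION files here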

[STEP 9] Starting single-node cluster

hduser@prayag:~$ /usr/local/hadoop-2.2.0/sbin/start-dfs.sh

14/11/22 16:30:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-hduser-namenode-prayagupd.out
localhost: starting datanode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-hduser-datanode-prayagupd.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.2.0/logs/hadoop-hduser-secondarynamenode-prayagupd.out
14/11/22 16:31:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@prayag:~$ jps
16703 Jps
15696 SecondaryNameNode
15214 NameNode
15424 DataNode
hduser@prayag:~$ /usr/local/hadoop-2.2.0/sbin/start-yarn.sh

starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-hduser-resourcemanager-prayag.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.2.0/logs/yarn-hduser-nodemanager-prayag.out
hduser@prayag:~$ jps
16979 NodeManager
17273 Jps
15696 SecondaryNameNode
15214 NameNode
16768 ResourceManager
15424 DataNode
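
With HDFS and YARN up, you can optionally smoke-test the installation by running one of the bundled example jobs (not in the original notes; the examples jar ships in the 2.2.0 tarball):

hduser@prayag:~$ hadoop jar /usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5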

Check ports

hduser@prayag:~$ sudo netstat -plten | grep java
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1001       204029      15696/java      
tcp        0      0 127.0.0.1:63342         0.0.0.0:*               LISTEN      1417676764 35675       3300/java       
tcp        0      0 0.0.0.0:2864            0.0.0.0:*               LISTEN      1417676764 34523       3300/java       
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      1001       200620      15214/java      
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      1001       200615      15424/java      
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      1001       200695      15424/java      
tcp        0      0 127.0.0.1:6942          0.0.0.0:*               LISTEN      1417676764 34209       3300/java       
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      1001       203884      15424/java      
tcp        0      0 127.0.0.1:54310         0.0.0.0:*               LISTEN      1001       200627      15214/java      
tcp6       0      0 :::8040                 :::*                    LISTEN      1001       215678      16979/java      
tcp6       0      0 :::44585                :::*                    LISTEN      1001       215655      16979/java      
tcp6       0      0 :::8042                 :::*                    LISTEN      1001       215682      16979/java      
tcp6       0      0 :::8088                 :::*                    LISTEN      1001       209693      16768/java      
tcp6       0      0 :::8030                 :::*                    LISTEN      1001       215683      16768/java      
tcp6       0      0 :::8031                 :::*                    LISTEN      1001       215663      16768/java      
tcp6       0      0 :::8032                 :::*                    LISTEN      1001       211941      16768/java      
tcp6       0      0 :::8033                 :::*                    LISTEN      1001       216025      16768/java 

[STEP 10] Hadoop Web Interfaces

  1. | NameNode web UI (HDFS layer) | http://localhost:50070/ | web UI of the NameNode daemon

  2. | ResourceManager web UI (YARN layer) | http://localhost:8088/ | replaces the 1.x JobTracker UI (port 50030)

  3. | NodeManager web UI (YARN layer) | http://localhost:8042/ | replaces the 1.x TaskTracker UI (port 50060)

[STEP 11] Stop the single-node cluster

hduser@prayag:~$ /usr/local/hadoop-2.2.0/sbin/stop-dfs.sh && /usr/local/hadoop-2.2.0/sbin/stop-yarn.sh

References

http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
http://solaimurugan.blogspot.com/2013/11/installing-hadoop-2xx-single-node.html
http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
http://www.ercoppa.org/Linux-Install-Hadoop-220-on-Ubuntu-Linux-1304-Single-Node-Cluster.htm
http://www.fromdev.com/2010/12/interview-questions-hadoop-mapreduce.html
http://stackoverflow.com/a/14573531/432903
http://blog.gopivotal.com/products/usage-and-quirks-of-fs-default-name-in-hadoop-filesystem

Troubleshooting [STEP 8]: formatting the HDFS filesystem via the NameNode as the normal user prayagupd instead of hduser

Error: Cannot create directory /app/hadoop/tmp/dfs/name/current

$ hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/08/15 16:35:12 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = prayag/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /usr/local/hadoop-2.2.0/etc/hadoop:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.2.0/sh
are/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.
jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/usr/local/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/usr/local/hadoop-2.2.0/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0
************************************************************/
14/08/15 16:35:12 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/08/15 16:35:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-272bf7c2-2c01-4d02-86d3-30eb0ba422cc
14/08/15 16:35:13 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/08/15 16:35:13 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/08/15 16:35:13 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/08/15 16:35:13 INFO util.GSet: Computing capacity for map BlocksMap
14/08/15 16:35:13 INFO util.GSet: VM type       = 64-bit
14/08/15 16:35:13 INFO util.GSet: 2.0% max memory = 888.9 MB
14/08/15 16:35:13 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/08/15 16:35:13 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/08/15 16:35:13 INFO blockmanagement.BlockManager: defaultReplication         = 1
14/08/15 16:35:13 INFO blockmanagement.BlockManager: maxReplication             = 512
14/08/15 16:35:13 INFO blockmanagement.BlockManager: minReplication             = 1
14/08/15 16:35:13 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/08/15 16:35:13 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/08/15 16:35:13 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/08/15 16:35:13 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/08/15 16:35:13 INFO namenode.FSNamesystem: fsOwner             = pupadhyay (auth:SIMPLE)
14/08/15 16:35:13 INFO namenode.FSNamesystem: supergroup          = supergroup
14/08/15 16:35:13 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/08/15 16:35:13 INFO namenode.FSNamesystem: HA Enabled: false
14/08/15 16:35:14 INFO namenode.FSNamesystem: Append Enabled: true
14/08/15 16:35:14 INFO util.GSet: Computing capacity for map INodeMap
14/08/15 16:35:14 INFO util.GSet: VM type       = 64-bit
14/08/15 16:35:14 INFO util.GSet: 1.0% max memory = 888.9 MB
14/08/15 16:35:14 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/08/15 16:35:14 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/08/15 16:35:14 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/08/15 16:35:14 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/08/15 16:35:14 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/08/15 16:35:14 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/08/15 16:35:14 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/08/15 16:35:14 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/08/15 16:35:14 INFO util.GSet: VM type       = 64-bit
14/08/15 16:35:14 INFO util.GSet: 0.029999999329447746% max memory = 888.9 MB
14/08/15 16:35:14 INFO util.GSet: capacity      = 2^15 = 32768 entries
14/08/15 16:35:14 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Cannot create directory /app/hadoop/tmp/dfs/name/current
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:301)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:523)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:544)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:147)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:837)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1213)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
14/08/15 16:35:14 INFO util.ExitUtil: Exiting with status 1
14/08/15 16:35:14 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at prayag/127.0.1.1
************************************************************/

Solution

/app/hadoop/tmp must exist and be writable by the user running the format; it was created for hduser in STEP 7, so prayagupd cannot write there.
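
A minimal sketch of the fix, assuming you really do want to format as prayagupd (otherwise simply run the format as hduser, which already owns the directory):

$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown -R prayagupd:prayagupd /app/hadoop/tmp
$ hdfs namenode -format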

Storm hacks

STEP 1 - install ZooKeeper

zkServer.sh start

$ jps
17655 QuorumPeerMain
13494 Main
17701 ZooKeeperMain
17899 Jps
14029 RemoteMavenServer
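
zkServer.sh status is another quick check; it reports whether the server is up and in which mode (standalone here):

$ zkServer.sh status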


zkCli.sh

[zk: localhost:2181(CONNECTED) 1] ls /
[zookeeper]

// create persistent and sequential-ephemeral znodes, set a watch, and receive a change-notification event when a znode's children change

##create persistent znode
[zk: localhost:2181(CONNECTED) 2] create /smartad-config-znode a-smartad-config-znode
Created /smartad-config-znode


[zk: localhost:2181(CONNECTED) 3] ls /
[smartad-config-znode, zookeeper]

[zk: localhost:2181(CONNECTED) 5] ls /smartad-config-znode     
[]

[zk: localhost:2181(CONNECTED) 6] get /smartad-config-znode  
a-smartad-config-znode
cZxid = 0x4
ctime = Sun Dec 14 19:07:10 NPT 2014
mZxid = 0x4
mtime = Sun Dec 14 19:07:10 NPT 2014
pZxid = 0x4
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 15
numChildren = 0


## create sequential ephemeral child znodes
[zk: localhost:2181(CONNECTED) 8] create -s -e /smartad-config-znode/child- data-1

WATCHER::

WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/smartad-config-znode
Created /smartad-config-znode/child-0000000000

[zk: localhost:2181(CONNECTED) 9] create -s -e /smartad-config-znode/child- data-2
Created /smartad-config-znode/child-0000000001

[zk: localhost:2181(CONNECTED) 10] create -s -e /smartad-config-znode/child- data-3
Created /smartad-config-znode/child-0000000002

[zk: localhost:2181(CONNECTED) 11] ls /smartad-config-znode
[child-0000000001, child-0000000002, child-0000000000]

# set a watch, indicated by using true as the second argument.
# watchers are one-time events
[zk: localhost:2181(CONNECTED) 12] ls /smartad-config-znode true
[child-0000000001, child-0000000002, child-0000000000]

[zk: localhost:2181(CONNECTED) 13] create -s -e /smartad-config-znode/child- data-4

WATCHER::

WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/smartad-config-znode
Created /smartad-config-znode/child-0000000003

## delete znodes (a znode that still has children cannot be deleted directly)

[zk: localhost:2181(CONNECTED) 14] delete /smartad-config-znode
Node not empty: /smartad-config-znode

[zk: localhost:2181(CONNECTED) 15] delete /smartad-config-znode/child-0000000000 -1
[zk: localhost:2181(CONNECTED) 16] delete /smartad-config-znode/child-0000000001 -1
[zk: localhost:2181(CONNECTED) 17] delete /smartad-config-znode/child-0000000002 -1
[zk: localhost:2181(CONNECTED) 18] delete /smartad-config-znode/child-0000000003 -1
[zk: localhost:2181(CONNECTED) 19] delete /smartad-config-znode/child-0000000004 -1

[zk: localhost:2181(CONNECTED) 20] delete /smartad-config-znode -1

[zk: localhost:2181(CONNECTED) 21] ls /
[zookeeper]
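
zkCli in 3.4.x also has rmr, which deletes a znode together with all of its children in one command (not used in the session above):

rmr /smartad-config-znode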

Scala example (using the ZooKeeper Java client API),

import org.apache.zookeeper.data.Stat
import org.apache.zookeeper.{WatchedEvent, Watcher, ZooKeeper}

object ConfigReader {

  val rootPath = "/a/b/c/d/"

  // watcher just logs connection/znode events
  object MyWatcher extends Watcher {
    def process(event: WatchedEvent) = println(event)
  }

  // connect string is host:port of the ZooKeeper server; session timeout in ms
  val zk = new ZooKeeper("localhost:2181", 10000, MyWatcher)

  // read the data stored at rootPath + pathName and return it as a String
  def readConfig(pathName: String): String = {
    val data = zk.getData(rootPath + pathName, false, new Stat())
    new String(data)
  }

  def main(args: Array[String]): Unit = {
    val driver = readConfig("mysql.driver")
    val url = readConfig("mysql.url")

    println(driver)
    println(url)
  }
}

reference - https://www.altamiracorp.com/blog/employee-posts/distributed-coordination-with-zookeeper-part-2-test-drive

STEP 2.1 - install Storm

STEP 2.2 - install ZeroMQ (jzmq)

$STORM_HOME/bin/install_zmq.sh

# or, with ZooKeeper running, build Storm and the storm-starter examples from source

$ /usr/local/zookeeper-3.4.5/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/local/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
prayagupd at prayagupd in ~/backup/hacker_/w.clj
$ git clone git://github.com/apache/incubator-storm.git && cd incubator-storm/examples/storm-starter
$ mvn clean install -DskipTests=true

Also, check https://github.com/iPrayag/nepleaks/blob/master/nepleaks-engine/src/nepleaks_engine/services/stormService.clj#L18

(defn mk-topology []
  (topology
   {"1" (spout-spec sentence-spout)
    "2" (spout-spec (sentence-spout-parameterized
                     ["the cat jumped over the door"
                      "greetings from a faraway land"])
                     :p 2)}
   {"3" (bolt-spec {"1" :shuffle "2" :shuffle}
                   split-sentence
                   :p 5)
    "4" (bolt-spec {"3" ["word"]}
                   word-count
                   :p 6)}))

Examples

https://github.com/storm-book/examples-ch02-getting_started/
http://ashuuni123.blogspot.com/2013/09/set-up-storm-clustertwitter-storm-in.html

Configure storm.yaml, otherwise it'll kick your ass hard.

########### These MUST be filled in for a storm configuration
 storm.zookeeper.servers:
     - "localhost"
#
 nimbus.host: "localhost"
#
#
# ##### These may optionally be filled in:
#
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"
prayagupd at prayagupd in ~/backup/hacker_/w.clj/incubator-storm on master*
$ storm nimbus
Running: java -server -Dstorm.options= -Dstorm.home=/usr/local/storm-0.8.2 -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /usr/local/storm-0.8.2/storm-0.8.2.jar:/usr/local/storm-0.8.2/lib/jgrapht-0.8.3.jar:/usr/local/storm-0.8.2/lib/core.incubator-0.1.0.jar:/usr/local/storm-0.8.2/lib/tools.cli-0.2.2.jar:/usr/local/storm-0.8.2/lib/servlet-api-2.5.jar:/usr/local/storm-0.8.2/lib/objenesis-1.2.jar:/usr/local/storm-0.8.2/lib/commons-fileupload-1.2.1.jar:/usr/local/storm-0.8.2/lib/slf4j-api-1.5.8.jar:/usr/local/storm-0.8.2/lib/asm-4.0.jar:/usr/local/storm-0.8.2/lib/joda-time-2.0.jar:/usr/local/storm-0.8.2/lib/carbonite-1.5.0.jar:/usr/local/storm-0.8.2/lib/httpcore-4.1.jar:/usr/local/storm-0.8.2/lib/json-simple-1.1.jar:/usr/local/storm-0.8.2/lib/reflectasm-1.07-shaded.jar:/usr/local/storm-0.8.2/lib/junit-3.8.1.jar:/usr/local/storm-0.8.2/lib/minlog-1.2.jar:/usr/local/storm-0.8.2/lib/curator-client-1.0.1.jar:/usr/local/storm-0.8.2/lib/commons-logging-1.1.1.jar:/usr/local/storm-0.8.2/lib/libthrift7-0.7.0.jar:/usr/local/storm-0.8.2/lib/hiccup-0.3.6.jar:/usr/local/storm-0.8.2/lib/curator-framework-1.0.1.jar:/usr/local/storm-0.8.2/lib/tools.logging-0.2.3.jar:/usr/local/storm-0.8.2/lib/httpclient-4.1.1.jar:/usr/local/storm-0.8.2/lib/clojure-1.4.0.jar:/usr/local/storm-0.8.2/lib/ring-jetty-adapter-0.3.11.jar:/usr/local/storm-0.8.2/lib/snakeyaml-1.9.jar:/usr/local/storm-0.8.2/lib/jline-0.9.94.jar:/usr/local/storm-0.8.2/lib/compojure-1.1.3.jar:/usr/local/storm-0.8.2/lib/commons-io-1.4.jar:/usr/local/storm-0.8.2/lib/tools.macro-0.1.0.jar:/usr/local/storm-0.8.2/lib/jetty-6.1.26.jar:/usr/local/storm-0.8.2/lib/zookeeper-3.3.3.jar:/usr/local/storm-0.8.2/lib/math.numeric-tower-0.0.1.jar:/usr/local/storm-0.8.2/lib/servlet-api-2.5-20081211.jar:/usr/local/storm-0.8.2/lib/ring-core-1.1.5.jar:/usr/local/storm-0.8.2/lib/clj-time-0.4.1.jar:/usr/local/storm-0.8.2/lib/disruptor-2.10.1.jar:/usr/local/storm-0.8.2/lib/guava-13.0.jar:/usr/local/storm-0.8.2/lib/commons-lang-2.5.jar:/usr/local/storm-0.8.2/lib/slf4j-log4j12-1.5.8.jar:/usr/local/storm-0.8.2/lib/jetty-util-6.1.26.jar:/usr/local/storm-0.8.2/lib/commons-codec-1.4.jar:/usr/local/storm-0.8.2/lib/kryo-2.17.jar:/usr/local/storm-0.8.2/lib/ring-servlet-0.3.11.jar:/usr/local/storm-0.8.2/lib/clout-1.0.1.jar:/usr/local/storm-0.8.2/lib/log4j-1.2.16.jar:/usr/local/storm-0.8.2/lib/jzmq-2.1.0.jar:/usr/local/storm-0.8.2/lib/commons-exec-1.1.jar:/usr/local/storm-0.8.2/log4j:/usr/local/storm-0.8.2/conf -Xmx1024m -Dlogfile.name=nimbus.log -Dlog4j.configuration=storm.log.properties backtype.storm.daemon.nimbus
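
nimbus alone isn't enough for a working local cluster; a supervisor (and optionally the UI) are started the same way, each in its own terminal (not shown in the original session):

$ storm supervisor
$ storm ui        # web UI defaults to http://localhost:8080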

verify storm is running,

$ storm list
prayagupd at prayagupd in ~/backup/hacker_/w.clj/incubator-storm on master*
$ mvn compile exec:java -Dstorm.topology=storm.starter.clj.word_count
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.storm:maven-shade-clojure-transformer:jar:0.9.2-incubating-SNAPSHOT
[WARNING] 'reporting.plugins.plugin.version' for org.apache.maven.plugins:maven-javadoc-plugin is missing. @ org.apache.storm:storm:0.9.2-incubating-SNAPSHOT, /home/pupadhyay/backup/hacker_/w.clj/incubator-storm/pom.xml, line 653, column 21
[WARNING] 'reporting.plugins.plugin.version' for org.apache.maven.plugins:maven-surefire-report-plugin is missing. @ org.apache.storm:storm:0.9.2-incubating-SNAPSHOT, /home/pupadhyay/backup/hacker_/w.clj/incubator-storm/pom.xml, line 619, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.storm:storm-core:jar:0.9.2-incubating-SNAPSHOT
[WARNING] 'reporting.plugins.plugin.version' for org.apache.maven.plugins:maven-javadoc-plugin is missing. @ org.apache.storm:storm:0.9.2-incubating-SNAPSHOT, /home/pupadhyay/backup/hacker_/w.clj/incubator-storm/pom.xml, line 653, column 21
[WARNING] 'reporting.plugins.plugin.version' for org.apache.maven.plugins:maven-surefire-report-plugin is missing. @ org.apache.storm:storm:0.9.2-incubating-SNAPSHOT, /home/pupadhyay/backup/hacker_/w.clj/incubator-storm/pom.xml, line 619, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.storm:storm-starter:jar:0.9.2-incubating-SNAPSHOT
[WARNING] 'reporting.plugins.plugin.version' for org.apache.maven.plugins:maven-javadoc-plugin is missing. @ org.apache.storm:storm:0.9.2-incubating-SNAPSHOT, /home/pupadhyay/backup/hacker_/w.clj/incubator-storm/pom.xml, line 653, column 21
[WARNING] 'reporting.plugins.plugin.version' for org.apache.maven.plugins:maven-surefire-report-plugin is missing. @ org.apache.storm:storm:0.9.2-incubating-SNAPSHOT, /home/pupadhyay/backup/hacker_/w.clj/incubator-storm/pom.xml, line 619, column 21
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.storm:storm:pom:0.9.2-incubating-SNAPSHOT
[WARNING] 'reporting.plugins.plugin.version' for org.apache.maven.plugins:maven-javadoc-plugin is missing. @ line 653, column 21
[WARNING] 'reporting.plugins.plugin.version' for org.apache.maven.plugins:maven-surefire-report-plugin is missing. @ line 619, column 21
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO] 
[INFO] Storm
[INFO] maven-shade-clojure-transformer
[INFO] Storm Core
[INFO] storm-starter
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Storm 0.9.2-incubating-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.2.1:process (default) @ storm ---
[INFO] 
[INFO] >>> exec-maven-plugin:1.2.1:java (default-cli) @ storm >>>
[INFO] 
[INFO] <<< exec-maven-plugin:1.2.1:java (default-cli) @ storm <<<
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:java (default-cli) @ storm ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Storm ............................................. FAILURE [0.837s]
[INFO] maven-shade-clojure-transformer ................... SKIPPED
[INFO] Storm Core ........................................ SKIPPED
[INFO] storm-starter ..................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.129s
[INFO] Finished at: Mon Jun 23 17:24:39 NPT 2014
[INFO] Final Memory: 21M/212M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2.1:java (default-cli) on project storm: The parameters 'mainClass' for goal org.codehaus.mojo:exec-maven-plugin:1.2.1:java are missing or invalid -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginParameterException

Alternative to submit topology - http://stackoverflow.com/a/19914787/432903

mvn -DskipTests clean package -X
$ mvn -X install:install-file -Dfile=spark-assembly-1.2.0-SNAPSHOT-hadoop1.0.4.jar -DgroupId=org.apache.spark -DartifactId=spark-assembly -Dversion=1.2.0-SNAPSHOT -Dclassifier=hadoop1.0.4 -Dpackaging=jar -DgeneratePom=true

using http

  <distributionManagement>
    <repository>
        <id>aws-release</id>
        <name>AWS S3 Release Repository</name>
        <url>s3://maven.soothsayer.co/release</url>
    </repository>
    <snapshotRepository>
        <id>aws-snapshot</id>
        <name>AWS S3 Snapshot Repository</name>
        <url>s3://maven.soothsayer.co/snapshot</url>
    </snapshotRepository>
</distributionManagement>
mvn deploy:deploy-file -Dfile=services/target/sam-services-1.0-SNAPSHOT.jar -DgroupId=co.soothsayer -DartifactId=services -Dversion=1.0-SNAPSHOT -Dpackaging=jar -DrepositoryId=aws-snapshot -Durl=http://maven.soothsayer.co.s3-website-us-east-1.amazonaws.com/snapshot -DuniqueVersion=false

using s3 (needs an S3-capable wagon extension declared in the pom)

mvn deploy:deploy-file -Dfile=services/target/sam-services-1.0-SNAPSHOT.jar -DgroupId=co.soothsayer -DartifactId=services -Dversion=1.0-SNAPSHOT -Dpackaging=jar -DrepositoryId=aws-snapshot -Durl=s3://maven.soothsayer.co/snapshot -DuniqueVersion=false

Mesos

Method 1 - install from Mesosphere packages

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv E56151BF
DISTRO=$(lsb_release -is | tr '[:upper:]' '[:lower:]')
CODENAME=$(lsb_release -cs)
echo "deb http://repos.mesosphere.io/${DISTRO} ${CODENAME} main" |  sudo tee /etc/apt/sources.list.d/mesosphere.list
sudo apt-get -y update

sudo apt-get -y install mesos marathon

## Start mesos master
mesos-master --ip=127.0.0.1 --work_dir=/var/lib/mesos

# Start mesos slave.
mesos-slave --master=127.0.0.1:5050

Method 2 - build from source

wget http://www.apache.org/dist/mesos/0.20.0/mesos-0.20.0.tar.gz
tar -zxf mesos-0.20.0.tar.gz -C /usr/local
sudo apt-get update; sudo apt-get install build-essential python-dev python-boto libcurl4-nss-dev libsasl2-dev

cd /usr/local/mesos-0.20.0/
mkdir build && cd build
../configure
make

sudo mkdir -p /var/lib/mesos
sudo chmod 777 /var/lib/mesos

## run
## Start mesos master
./bin/mesos-master.sh --ip=127.0.0.1 --work_dir=/var/lib/mesos

# Start mesos slave.
./bin/mesos-slave.sh --master=127.0.0.1:5050

## run application
https://github.com/prayagupd/smartad/tree/master/smartad-mesos

Verify

./src/examples/java/test-framework 127.0.0.1:5050
http://127.0.0.1:5050/

References

http://mesos.apache.org/gettingstarted/

lib url - http://stackoverflow.com/questions/9781444/how-to-install-latest-libcurl-on-debian-server/10038134#10038134

http://blog.madhukaraphatak.com/mesos-single-node-setup-ubuntu/