My notes from the Meet Chef course at http://pluralsight.com/training/Courses/TableOfContents/meet-chef
Chef is a Ruby framework for automating, reusing and documenting server configuration. It's like unit tests for your servers.
#!/bin/bash
# Nagios plugin to check memory consumption
# Excludes Swap and Caches so only the real memory consumption is considered
# set default values for the thresholds
WARN=90
CRIT=95
# standard Nagios exit codes
STATE_OK=0
STATE_WARN=1
STATE_CRIT=2
# real usage = (MemTotal - MemFree - Buffers - Cached) as a percentage of MemTotal
USED=$(awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2} /^Cached:/{c=$2} END{printf "%d", (t-f-b-c)*100/t}' /proc/meminfo)
if [ "$USED" -ge "$CRIT" ]; then echo "MEMORY CRITICAL - ${USED}% used"; exit $STATE_CRIT; fi
if [ "$USED" -ge "$WARN" ]; then echo "MEMORY WARNING - ${USED}% used"; exit $STATE_WARN; fi
echo "MEMORY OK - ${USED}% used"
exit $STATE_OK
#!/bin/sh
#
# /etc/rc.d/init.d/docker
#
# Daemon for docker.com
#
# chkconfig: 2345 95 95
# description: Daemon for docker.com
### BEGIN INIT INFO
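The fragment above is just the header of the init script; assuming the full script is saved as /etc/rc.d/init.d/docker (per the path in its own comments), wiring it up follows the usual SysV steps:
chmod +x /etc/rc.d/init.d/docker
chkconfig --add docker    # picks up the "chkconfig: 2345 95 95" header line
chkconfig docker on
service docker start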
What does 'Sending build context to Docker daemon' mean?
[root@centos65vm1 1]# docker build -t ubtest .
Sending build context to Docker daemon 2.56 kB
Quick and dirty answer: the client is tar/compressing the directory (and all subdirectories) where you executed docker build. Yeah, that's right. If you execute this in your root directory, your whole drive will get tar'd and sent to the docker daemon. Caveat emptor. Generally that's a mistake you only make once.
Anyway, the build gets run by the daemon, not the client, so the daemon needs the whole directory that includes (hopefully) the Dockerfile and any other local files needed for the build. That's the context.
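A practical takeaway (my sketch, not part of the original answer): build from a small dedicated directory, or put a .dockerignore next to the Dockerfile so bulky paths never enter the context. The app-build directory name below is made up:
mkdir app-build && cp Dockerfile app-build/
cd app-build
echo '.git' > .dockerignore    # anything listed here is excluded from the build context
docker build -t ubtest .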
avro-tools
avro-doc
spark-core
crunch
sqoop2
sqoop2-client
hbase-solr-doc
hbase-solr
solr-mapreduce
search |
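These look like CDH component packages; assuming a Cloudera yum repository is already configured on the host (not shown in these notes), they could all be pulled in with a single yum call:
yum install -y avro-tools avro-doc spark-core crunch sqoop2 sqoop2-client \
    hbase-solr-doc hbase-solr solr-mapreduce search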
## to find free space
parted /dev/sdb print free
# to see just the free-space figure (in B, TB, or MB)
# parted /dev/sda unit B print free | grep 'Free Space' | tail -n1 | awk '{print $3}'
# parted /dev/sda unit TB print free | grep 'Free Space' | tail -n1 | awk '{print $3}'
# parted /dev/sda unit MB print free | grep 'Free Space' | tail -n1 | awk '{print $3}'
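The same pipeline works inside a script; FREE_B is just an illustrative variable name (note that parted prints the size with its unit suffix, e.g. 1031168B):
FREE_B=$(parted /dev/sda unit B print free | grep 'Free Space' | tail -n1 | awk '{print $3}')
echo "last free extent on /dev/sda: $FREE_B"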
Install EPEL repository
# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm
Installation steps:
1- Install 389-ds packages
# yum install 389-ds* -y
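A quick sanity check after both steps (my addition, not from the original notes):
# yum repolist enabled | grep -i epel
# rpm -qa | grep 389-ds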
In Pocket
http://www.workhabit.com/blog/centos-55-and-thriftscribe
https://sites.google.com/a/blamethecomputer.com/segfault/braindump/building-and-installing-thrift-on-centos-5
https://thrift.apache.org/docs/install/centos |
If you’ve set up Hadoop for development and are wondering why you can’t read or write files or run MapReduce jobs, you’re probably missing a tiny bit of configuration. For most development systems in pseudo-distributed mode it’s easiest to disable permissions altogether. This means that any user, not just the “hdfs” user, can do anything they want to HDFS, so do not do this in production unless you have a very good reason.
If that’s the case and you really want to disable permissions, just add this snippet to your hdfs-site.xml file (located in /etc/hadoop-0.20/conf.empty/hdfs-site.xml on Debian Squeeze) inside the configuration section:
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
Then restart Hadoop (su to the “hdfs” user and run bin/stop-all.sh then bin/start-all.sh) and try putting a file again. You should now be able to read/write with no restrictions.
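Concretely, with the Debian Squeeze packaging mentioned above, the restart-and-test sequence looks something like this (the /usr/lib/hadoop-0.20 path is assumed from that packaging; adjust for your install):
su - hdfs
/usr/lib/hadoop-0.20/bin/stop-all.sh
/usr/lib/hadoop-0.20/bin/start-all.sh
exit
hadoop fs -put /etc/hosts /tmp/hosts-test    # should now succeed for any user
hadoop fs -ls /tmp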
To enable the Oozie web console (it needs the ExtJS 2.2 library, which isn't shipped with Oozie):
In Cloudera Manager, go to the Oozie service Configuration and check "Enable Oozie Server Web Console".
Then on dwh-n1 (the Oozie server host):
wget http://extjs.com/deploy/ext-2.2.zip
unzip ext-2.2.zip
cp -a ext-2.2 /var/lib/oozie/
Finally, restart Oozie from Cloudera Manager.
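To confirm the console came up (11000 is Oozie's default port; an assumption if yours was changed):
curl -I http://dwh-n1:11000/oozie/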