
xman1980 / gist:d3b9449264c5ab70dfd1
Created January 17, 2016 11:53
docker_delete_all
#!/bin/bash
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)
# Uninstall every installed pip package, skipping editable (-e) installs
pip freeze | grep -v "^-e" | xargs pip uninstall -y
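The `grep -v "^-e"` filter drops editable (`-e`) installs so `pip uninstall` is not run against local source checkouts. A quick illustration on a fixed sample list (the package names and URL are made up for the example):

```shell
# Filter a sample freeze list: lines starting with -e are excluded
printf 'requests==2.31.0\n-e git+https://example.com/repo.git#egg=demo\nflask==3.0.0\n' | grep -v "^-e"
# prints:
# requests==2.31.0
# flask==3.0.0
```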
xman1980 / gist:c94100573590e1467fe7
Created February 7, 2016 15:05
change_host_uuid_cdh5
The host ID can be changed in the /etc/default/cloudera-scm-agent file by setting CMF_AGENT_ARGS="--host_id new_host_id"
Brief summary of common umask values:
umask 077 - Only you have read/write access to your files and read/write/search access to your directories; all others have no access.
umask 022 - Only you have read/write access to your files and read/write/search access to your directories; all others have read access to your files and read/search access to your directories.
umask 002 - You and members of your group have read/write access to your files and read/write/search access to your directories; all others have read access to your files and read/search access to your directories.
For more information about what umask does, see `man umask` or `help umask` in bash.
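The effect is easy to verify in a scratch directory. A minimal sketch (GNU `stat -c` syntax assumed; on BSD/macOS use `stat -f '%Lp'` instead):

```shell
# With umask 022, new files get 666 & ~022 = 644 and new dirs get 777 & ~022 = 755
(
  umask 022
  tmp=$(mktemp -d)
  touch "$tmp/f" && mkdir "$tmp/d"
  stat -c '%a %n' "$tmp/f" "$tmp/d"
  rm -r "$tmp"
)
```

The subshell keeps the `umask` change from leaking into your interactive session.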
xman1980 / gist:af911ebbf50bc7d1245e
Created March 10, 2016 15:34
ffmpeg_centos_7_yum_install
# Enable the Nux Dextop repository, then install ffmpeg (CentOS 7)
sudo yum -y install http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
sudo yum --enablerepo=nux-dextop install ffmpeg
yum install python-setuptools python-pip
pip install supervisor
# Generate a default config in the directory created above
mkdir -p /etc/supervisord
echo_supervisord_conf > /etc/supervisord/supervisord.conf
Forked systemd unit file (thanks to Jiangge Zhang), installed at /usr/lib/systemd/system/supervisord.service:
[Unit]
Description=supervisord - Supervisor process control system for UNIX
Documentation=http://supervisord.org
After=network.target
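The unit file above is cut off after the [Unit] section. A minimal [Service]/[Install] continuation might look like the following sketch; the binary path /usr/bin/supervisord and the config path /etc/supervisord/supervisord.conf are assumptions, not taken from the original gist, so adjust them to wherever pip installed supervisord and wherever the config was written:

```ini
[Service]
Type=forking
ExecStart=/usr/bin/supervisord -c /etc/supervisord/supervisord.conf
ExecStop=/usr/bin/supervisorctl shutdown
ExecReload=/usr/bin/supervisorctl reload
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
```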
#!/bin/bash
# Sum resident memory (RSS, reported by ps in KB) per service; most totals are converted to MB
CELERY=$(ps -A -o pid,rss,command | grep celeryd | grep -v grep | awk '{total+=$2} END {printf("%d", total/1024)}')
GUNICORN=$(ps -A -o pid,rss,command | grep gunicorn | grep -v grep | awk '{total+=$2} END {printf("%d", total/1024)}')
REDIS=$(ps -A -o pid,rss,command | grep redis | grep -v grep | awk '{total+=$2} END {printf("%d", total)}')   # kept in KB
NGINX=$(ps -A -o pid,rss,command | grep nginx | grep -v grep | awk '{total+=$2} END {printf("%d", total/1024)}')
OTHER=$(ps -A -o pid,rss,command | grep -v nginx | grep -v celeryd | grep -v gunicorn | grep -v redis | grep -v grep | awk '{total+=$2} END {printf("%d", total/1024)}')
# Site names served by gunicorn (collected here but not printed below)
websites=$(ps -A -o user,pid,rss,command | grep gunicorn | egrep -o "[a-z_]+\.py$" | sort | uniq | perl -wpe 's|\.py$||;' | xargs)
printf "%-10s %3s MB\n" "Celery:" "$CELERY"
printf "%-10s %3s MB\n" "Gunicorn:" "$GUNICORN"
printf "%-10s %3s MB\n" "Nginx:" "$NGINX"
printf "%-10s %3s KB\n" "Redis:" "$REDIS"
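The `awk '{total+=$2}END{...}'` idiom used throughout sums the RSS column (KB) and divides by 1024 for MB. Demonstrated here on a fixed two-line sample instead of live `ps` output (the PIDs and values are invented):

```shell
# Two fake gunicorn processes using 200 MB and 100 MB of RSS (column 2, in KB)
sample='1234 204800 /usr/bin/gunicorn
5678 102400 /usr/bin/gunicorn'
echo "$sample" | awk '{total+=$2} END {printf("%d\n", total/1024)}'   # → 300
```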
# Print the resolved ansible_ssh_host (or MISSING) for every host in the inventory
ansible -m debug -a "msg={{ansible_ssh_host|default('MISSING')}}" all -i /path/to/inventory
# To smoke-test your Hadoop upgrade, run the following MapReduce job.
# The job uses RandomWriter to write 10 MB of data into HDFS (test.randomwrite.total_bytes=10000000).
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomwriter -Dtest.randomwrite.total_bytes=10000000 test-after-upgrade
# Estimate pi: 10 map tasks, 300 samples each
sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 300