I hereby claim:
- I am tegansnyder on github.
- I am tegansnyder (https://keybase.io/tegansnyder) on keybase.
- I have a public key whose fingerprint is 6D1F 93B8 4667 46BB D542 AEDE 6BE5 2621 472A FF08
To claim this, I am signing this object:
# download latest https://golang.org/dl/
wget https://storage.googleapis.com/golang/go1.6.3.linux-amd64.tar.gz
tar xzvf go1.6.3.linux-amd64.tar.gz
# system wide install
sudo mv go /usr/local/
# add system wide path
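The "add system wide path" step above can be sketched as follows. This is a minimal sketch assuming the default /usr/local/go prefix used above and a bash login shell; the /etc/profile.d/go.sh filename is my own choice, not from the original notes:

```shell
# Make Go available system wide (assumes go was moved to /usr/local/go as above)
echo 'export PATH=$PATH:/usr/local/go/bin' | sudo tee /etc/profile.d/go.sh
# Pick up the new PATH in the current shell and verify the install
source /etc/profile.d/go.sh
go version
```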
# crontab -e
# remove marvel indices older than 30 days
30 2 * * * curator delete indices --timestring '%Y.%m.%d' --prefix '.marvel-es' --older-than 30 --time-unit 'days'
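Before wiring the job into cron, it is worth previewing what would be deleted. Curator 3.x supports a global --dry-run flag for exactly this (an assumption based on the flag style used above; check your installed curator version):

```shell
# Preview the deletions without actually removing anything
curator --dry-run delete indices --timestring '%Y.%m.%d' --prefix '.marvel-es' --older-than 30 --time-unit 'days'
```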
# install dev tools
yum groupinstall "Development tools"
# install zero mq
wget http://download.zeromq.org/zeromq-4.1.4.tar.gz
tar xvzf zeromq-4.1.4.tar.gz
cd zeromq-4.1.4
./configure --without-libsodium
make
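The steps above stop at `make`; to actually install the library system wide, the standard autotools flow continues as below (a sketch assuming the default /usr/local prefix):

```shell
# Install the compiled library and headers (defaults to /usr/local)
sudo make install
# Refresh the shared-library cache so libzmq is found at runtime
sudo ldconfig
```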
SELECT JSON_EXTRACT(config, '$.settings.lang_code') as lang_code FROM _jobs
Steps to get Ruby installed on RHEL, plus JRuby via RVM, for the Teradata toolkit found here: https://github.com/Nordstrom/tdsql
# sudo -s or dzdo
dzdo -s
# install ruby from RHEL package management
yum install ruby
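Since the notes mention JRuby with RVM, the RVM half can be sketched as below. This is a hypothetical sketch of the standard RVM install flow, not taken from the tdsql README, which remains the authoritative source:

```shell
# Install RVM, then use it to install and default to JRuby
curl -sSL https://get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm install jruby
rvm use jruby --default
```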
My personal documentation of how to install scikit-learn on RHEL 6.
sudo yum update -y && sudo yum install -y python-devel.x86_64 python-matplotlib.x86_64 gcc-c++.x86_64
sudo easy_install pip
sudo pip install numpy
sudo yum install gcc-gfortran
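The steps above install numpy and the gfortran compiler but stop short of scikit-learn itself. A likely final step (my assumption, since gcc-gfortran is the usual prerequisite for building scipy from source on RHEL 6):

```shell
# scipy must come first; scikit-learn depends on it
sudo pip install scipy
sudo pip install scikit-learn
```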
For CSV export use "¬" as the delimiter
Load up the spark shell with the appropriate package for csv parsing:
./bin/spark-shell --packages com.databricks:spark-csv_2.10:1.1.0
In the Scala REPL, type the following, referencing the path to your CSV file (the path below is a placeholder). Example below:
import org.apache.spark.sql.SQLContext
val df = sqlContext.read.format("com.databricks.spark.csv").option("header", "true").load("path/to/your.csv")