I hereby claim:
- I am eliasah on github.
- I am eliasah (https://keybase.io/eliasah) on keybase.
- I have a public key whose fingerprint is 1766 983D 67EA 3816 AF95 EB70 AEB8 3993 39D9 51E9
To claim this, I am signing this object:
$ curl -C - -O https://download.java.net/java/ga/jdk11/openjdk-11_osx-x64_bin.tar.gz
$ tar xf openjdk-11_osx-x64_bin.tar.gz
$ sudo mv jdk-11.jdk /Library/Java/JavaVirtualMachines/
$ java -version
openjdk version "11" 2018-09-25
OpenJDK Runtime Environment 18.9 (build 11+28)
OpenJDK 64-Bit Server VM 18.9 (build 11+28, mixed mode)
Add the following to your .bash_profile or .zshrc:
export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk-11.jdk/Contents/Home"
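If more than one JDK is installed, JAVA_HOME can also be resolved dynamically. A minimal sketch using the macOS /usr/libexec/java_home helper (the version argument is an assumption, pointing at the JDK 11 installed above):
export JAVA_HOME="$(/usr/libexec/java_home -v 11)"
java -version   # should report openjdk version "11"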
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_77)
Type in expressions to have them evaluated.
Type :help for more information.
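The same banner and version information can be printed without starting a REPL; a quick check, assuming spark-submit is on the PATH:
spark-submit --version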
Bash
for file in prefix*; do mv "$file" "${file#prefix}"; done;
The for loop iterates over every file whose name starts with the given prefix, and the body renames each one: ${file#prefix} expands to the file name with the shortest leading match of "prefix" removed.
Here is an example that removes the "bla_" prefix from the following files (the exact command is shown after the list):
bla_1.txt
bla_2.txt
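Applied to those files, the one-liner becomes (the bla_ prefix is taken from the example above):
for file in bla_*; do mv "$file" "${file#bla_}"; done
# bla_1.txt -> 1.txt
# bla_2.txt -> 2.txt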
To use custom S3 endpoints with the latest Spark distribution, you need to add an external package (hadoop-aws). Custom endpoints can then be configured as described in the Hadoop S3A documentation.
bin/spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.2
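A sketch of how a custom endpoint could then be passed in, using the standard S3A configuration keys via Spark's spark.hadoop.* prefix; the endpoint URL and credentials below are placeholders, not values from the original note:
bin/spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.2 \
  --conf spark.hadoop.fs.s3a.endpoint=http://my-custom-endpoint:9000 \
  --conf spark.hadoop.fs.s3a.access.key=MY_ACCESS_KEY \
  --conf spark.hadoop.fs.s3a.secret.key=MY_SECRET_KEY \
  --conf spark.hadoop.fs.s3a.path.style.access=true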
# Strip the large per-observation components from a fitted glm object so it can
# be serialized or cached with a much smaller memory footprint. predict() still
# works on the stripped model as long as newdata is supplied.
strip_glm <- function(cm) {
  cm$y <- c()
  cm$model <- c()
  cm$residuals <- c()
  cm$fitted.values <- c()
  cm$effects <- c()
  cm$qr$qr <- c()
  cm$linear.predictors <- c()
  cm$weights <- c()
  cm
}
#!/bin/bash -ex
# ATTENTION:
#
# 1. ensure you have about 1 GB of storage under /usr/lib/ for the large Zeppelin bundle chosen by default below,
#    or pick a smaller bundle from the Zeppelin website
#
# 2. adjust the values of ZEPPELIN_NOTEBOOK_S3_BUCKET
#    and ZEPPELIN_NOTEBOOK_S3_USER if you want S3 persistence of your Zeppelin notebooks in your S3 bucket,
#    otherwise just remove the last three export lines starting with 'export ZEPPELIN_NOTEBOOK_S'
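A minimal sketch of the export lines that comment refers to, using Zeppelin's standard environment variables; the bucket and user names are placeholders, not values from the original script:
export ZEPPELIN_NOTEBOOK_S3_BUCKET=my-zeppelin-notebooks
export ZEPPELIN_NOTEBOOK_S3_USER=zeppelin
export ZEPPELIN_NOTEBOOK_STORAGE=org.apache.zeppelin.notebook.repo.S3NotebookRepo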
docker-machine-driver-xhyve is a Docker Machine driver plugin for xhyve, a lightweight virtualization solution built on the native OS X hypervisor. In my opinion, it's a far better option than VirtualBox for running minikube.
On macOS Sierra, install the latest version with:
brew install docker-machine-driver-xhyve --HEAD
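Once installed, the driver binary needs root ownership and the setuid bit (per the formula's caveats at the time), and minikube can then be pointed at it. A sketch, assuming a standard Homebrew prefix:
sudo chown root:wheel $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
sudo chmod u+s $(brew --prefix)/opt/docker-machine-driver-xhyve/bin/docker-machine-driver-xhyve
minikube start --vm-driver=xhyve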
#!/bin/bash
curl -X POST http://[spark-cluster-ip]:6066/v1/submissions/create --header "Content-Type:application/json;charset=UTF-8" --data '{
  "action":"CreateSubmissionRequest",
  "appArgs":[
    "/home/eliasah/Desktop/spark_pi.py"
  ],
  "appResource":"file:/home/eliasah/Desktop/spark_pi.py",
  "clientSparkVersion":"2.2.1",
  "environmentVariables":{