apcera-code / multi-job-maifest.yml
Created April 20, 2016 16:58
Deploying Microservices with Multi-Job Manifests: this multi-job manifest example uses NATS
{
  "jobs": {
    "job::/demo::nats-server": {
      "docker": {
        "image": "nats:0.7.2"
      },
      "exposed_ports": [ 4222, 8222 ],
      "routes": [
        {
          "type": "http",
apcera-code / start-master-cluster.sh
Created April 19, 2016 18:38
Deploying Spark on the Apcera Cloud Platform: this script initiates a Spark cluster with 3 worker (slave) nodes.
#!/bin/bash
function createSlave {
apc app from package $1 -p spark-apcera -m 1G -dr -e SPARK_MASTER=spark://$2:7077 --start-cmd '$SPARK_APCERA_HOME/bin/start-slave.sh' --batch
apc network join $3 -j $1
apc app start $1
}
export NETWORK_NAME=sparknet
export MASTER_NAME=spark-m
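The gist preview cuts off after the exports. As a hedged sketch (the worker names `spark-w1..3` and the driver loop are assumptions, not the gist's actual tail), the `createSlave` function might be driven like this, with `echo` standing in for the `apc` CLI so the control flow can be exercised without an Apcera cluster:

```shell
#!/bin/bash
# Hypothetical sketch: "echo" stands in for the real apc commands so the
# loop can run without an Apcera cluster.
function createSlave {
  echo "apc app from package $1 -p spark-apcera -m 1G -dr -e SPARK_MASTER=spark://$2:7077"
  echo "apc network join $3 -j $1"
  echo "apc app start $1"
}

NETWORK_NAME=sparknet
MASTER_NAME=spark-m

# Three worker (slave) nodes, as the gist description promises.
for i in 1 2 3; do
  createSlave "spark-w$i" "$MASTER_NAME" "$NETWORK_NAME"
done
```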
apcera-code / command-named-network
Last active April 20, 2016 23:09
Deploying Spark on the Apcera Cloud Platform: this is the command line used to connect the named job to the named network
apc network join $NETWORK_NAME -j $JOB_NAME
apcera-code / command-virtual-netwrok
Last active April 20, 2016 23:09
Deploying Spark on the Apcera Cloud Platform: this is the command used to set up the virtual network
apc network create $NETWORK_NAME
apcera-code / start-slave.sh
Last active April 19, 2016 18:28
Deploying Spark on the Apcera Cloud Platform: this is the start-slave bash script for the Spark package
#!/bin/bash
export SPARK_WORKER_DIR=/app/work
export SPARK_LOG_DIR=/app/logs
export VIRTUAL_NETWORK_IP=$(ifconfig | grep "inet addr" | cut -d: -f2 | grep -v "169." | grep -v "127.0.0.1" | cut -d ' ' -f1)
$SPARK_HOME/sbin/start-slave.sh -h $VIRTUAL_NETWORK_IP $SPARK_MASTER
tail -f /app/logs/*
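The `ifconfig` pipeline used in both start scripts selects the first non-loopback, non-link-local IPv4 address from old-style `inet addr:` output. A minimal sketch of the same filtering against canned sample output (the interface names and addresses are made up for illustration):

```shell
#!/bin/bash
# Canned output in the old "inet addr:" format that the pipeline expects.
sample_ifconfig() {
cat <<'EOF'
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
          inet addr:192.168.0.5  Bcast:192.168.0.255  Mask:255.255.255.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
EOF
}

# Same filters as the start scripts: keep "inet addr" lines, take the field
# after the first colon, drop 169.* link-local and 127.0.0.1 loopback, then
# take the first space-separated token (the address itself).
VIRTUAL_NETWORK_IP=$(sample_ifconfig | grep "inet addr" | cut -d: -f2 | grep -v "169." | grep -v "127.0.0.1" | cut -d ' ' -f1)
echo "$VIRTUAL_NETWORK_IP"
```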
apcera-code / start-master.sh
Last active April 19, 2016 18:28
Deploying Spark on the Apcera Cloud Platform: this is the start-master bash script for the Spark package
#!/bin/bash
set -e
export SPARK_LOG_DIR=/app/logs
export VIRTUAL_NETWORK_IP=$(ifconfig | grep "inet addr" | cut -d: -f2 | grep -v "169." | grep -v "127.0.0.1" | cut -d ' ' -f1)
$SPARK_HOME/sbin/start-master.sh -h $VIRTUAL_NETWORK_IP
tail -f /app/logs/*
apcera-code / command-deploy-spark-apcera-package
Created April 19, 2016 18:24
Deploying Spark on the Apcera Cloud Platform: this is the command line to build the Spark 1.6 cluster package with the custom manifest
apc package build --name spark-apcera-1.6 spark-apcera-package.conf
apcera-code / apache-spark-apcera.yml
Created April 19, 2016 18:19
Deploying Spark on the Apcera Cloud Platform: this goes one step further and defines one last package that captures as much as possible, so that a Spark cluster can be created repeatedly, as simply and as quickly as possible
name: "spark-apcera"
version: "1.6.1"
build_depends [
{package: "build-essential"}
]
depends [
{runtime: "spark-1.6.0"}
]
apcera-code / command-deploy-spark-package
Created April 19, 2016 18:16
Deploying Spark on the Apcera Cloud Platform: this is the command line to build the Spark 1.6 package with the custom manifest
apc package build --name spark-1.6 spark-package.conf
apcera-code / apace-spark.yml
Created April 19, 2016 18:13
Deploying Spark on the Apcera Cloud Platform: creating a custom Spark package manifest
name: "spark"
version: "1.6.1"
sources [
{url: "http://apache.mirror.anlx.net/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz"},
]
build_depends [
{ package: "build-essential" }
]