@ogryzek
Last active November 24, 2017 16:27
code_snippets

AWS, Shippable, Docker

Shippable has a nice blog post on CI/CD pipelines with Amazon EC2 Container Service and Amazon EC2 Container Registry.

Start out with the 'Getting Started' wizard, which told me to do the following:

1.) Retrieve the docker login command

aws ecr get-login --region us-west-2

2.) Run the returned docker login command

docker login -u AWS -p <some-really-really-long-value> -e none https://<some-number>.dkr.ecr.us-west-2.amazonaws.com

3.) Build the docker image

docker build -t shippable-testing .

4.) Tag the image

docker tag shippable-testing:latest aws_account_id.dkr.ecr.us-west-2.amazonaws.com/shippable-testing:latest

5.) Push to AWS

docker push aws_account_id.dkr.ecr.us-west-2.amazonaws.com/shippable-testing:latest

Next up, check out the AWS Developer's Guide

Continuous Integration with DockerHub and ECS

Table of Contents

  • A Ruby Application
  • Docker
  • DockerHub
  • AWS
    • EC2
      • Security Groups
      • Network Interfaces
      • Load Balancers
      • Launch Configurations
      • Auto Scaling Groups
    • ECS
      • Clusters
      • Services
      • Task Definitions

Intro

Let's take a look at a couple of approaches to continuous integration with Docker, DockerHub, and Amazon's ECS. One approach is to have a script, triggered when a new build occurs on DockerHub, that fetches the current task definition, creates a new revision of it, and then updates the service to use that revision. The other is to create a task definition template with a variable for the build number, register a dummy task definition with a version number, and then use that version number in a script that DockerHub triggers to kick off a deploy of the service using the dummy task definition.
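As a rough sketch of the first approach, a deploy hook along the following lines could be triggered by the DockerHub webhook. The cluster, service, and task family names below are hypothetical, and the sed edit of the image field is only illustrative:

#!/bin/bash
# Hypothetical deploy hook: fetch the current task definition, register a new
# revision pointing at the freshly pushed image, then update the service.
set -o errexit

CLUSTER=book-search-cluster      # example names; substitute your own
SERVICE=book-search-service
FAMILY=book-search
IMAGE=drewbrodesigns/book_search:latest

# Fetch the current container definitions for the task family.
aws ecs describe-task-definition --task-definition $FAMILY \
  --query 'taskDefinition.containerDefinitions' --output json > containers.json

# Point the container definition at the new image (illustrative edit).
sed -i "s|\"image\": \".*\"|\"image\": \"$IMAGE\"|" containers.json

# Register the edited definitions as a new revision of the family.
aws ecs register-task-definition --family $FAMILY \
  --container-definitions file://containers.json

# Update the service to use the latest revision.
aws ecs update-service --cluster $CLUSTER --service $SERVICE --task-definition $FAMILY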

The main references for this are:

The Application

For this example, we'll build a web application (Book Search) that serves a search form, then returns a list of books using the amazon-ecs gem and an example from the Ruby Cookbook, 2nd edition.

To run the following code, you'll need to set the following in the configure block: an AWS associate tag, an AWS access key ID, and an AWS secret access key.

Of course, you'll also need ruby and the amazon-ecs gem installed.

# Sample Code From:
#
# 18.1 Searching for Books on Amazon
# Ruby Cookbook, Second Edition, by Lucas Carlson
# and Leonard Richardson. Copyright 2015 Lucas Carlson
# and Leonard Richardson, 978-1-449-37371-9
require 'amazon/ecs'

Amazon::Ecs.configure do |options|
  options[:associate_tag] = '' # AWS associate tag
  options[:AWS_access_key_id] = '' # AWS access key id
  options[:AWS_secret_key] = '' # AWS secret access key
end

def price_books(keyword)
  response = Amazon::Ecs.item_search(keyword, {response_group: 'Medium', sort: 'salesrank'})
  response.items.each do |product|
    if product.get_element('ItemAttributes/ListPrice')
      new_price = product.get_element('ItemAttributes/ListPrice').get('FormattedPrice')
      if product.get_element('LowestUsedPrice').nil?
        used_price = 'not available'
      else
        used_price = product.get_element('LowestUsedPrice').get('FormattedPrice')
      end
      puts "#{product.get('ItemAttributes/Title')}: #{new_price} new, #{used_price} used."
    end
  end
end

price_books('ruby')

DockerHub

Now that we have the book search application with a working Dockerfile, we can clone the repo to run it locally, or just pull the image from the DrewBro Designs DockerHub repo (drewbrodesigns/book_search):

docker pull drewbrodesigns/book_search
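Once pulled, the image can be run locally. The port mapping and environment variable names below are assumptions (check the repo's Dockerfile for the actual ones):

# Hypothetical local run; the port and credential variable names are assumptions.
docker run -d -p 3000:3000 \
  -e AWS_ACCESS_KEY_ID=<your-key-id> \
  -e AWS_SECRET_KEY=<your-secret-key> \
  -e AWS_ASSOCIATE_TAG=<your-associate-tag> \
  drewbrodesigns/book_search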

pfSense Setup

1.) Download pfSense LiveCD ISO (AMD64 / Live CD with Installer)

2.) Create a New VM

Name: [Any-Name-You-Like]  
Type: BSD  
Version: FreeBSD (64 bit)
Memory size: 128 (default)
Hard Drive: Create a virtual hard drive now (default)
Hard drive file type: VDI (VirtualBox Disk Image) (default)
Storage on physical hard drive: Dynamically allocated (default)  
File location and size: 2.00 GB (default)  

3.) Proxy Settings

  • Storage: Click on the cd icon that says Empty. Then under attributes click the little cd icon dropdown, and select 'Choose a virtual CD/DVD disk file...'
    • Select the path to the pfsense ISO downloaded in the first step and click open
  • Network:
    • Adapter 1: Bridged Adapter / eth0
    • Adapter 2: Host-only Adapter / vboxnet0

4.) Start up the Proxy VM

Do you want to set up VLANs now [y|n]? n
Enter the WAN interface name or 'a' for auto-detection: em0
Enter the LAN interface name or 'a' for auto-detection: em1 
Enter the Optional 1 interface name or 'a' for auto-detection: [Enter]
Do you want to proceed [y|n]? y  

5.) Set WAN and LAN

Enter an option: 2 (Set interface(s) IP address)  
Enter the number of the interface you wish to configure: 2 (LAN)
Enter the new LAN IPv4 address. Press <ENTER> for none: 192.168.56.2 (the Host-only Adapter network address +1)
Enter the new LAN IPv4 subnet bit count: 24
For a WAN, enter the new LAN IPv4 upstream gateway address.  
For a LAN, press <ENTER> for none: [Enter]
Enter the new LAN IPv6 address. Press <ENTER> for none: [Enter]  
Do you want to enable the DHCP server on LAN [y|n]? y
Enter the start address of the IPv4 client address range: 192.168.56.100
Enter the end address of the IPv4 client address range: 192.168.56.200
Do you want to revert to HTTP as the webConfigurator protocol? (y/n) n  

6.) pfSense Console Setup

Log in to pfSense at the IP address you chose for your LAN, e.g. 192.168.56.2, and set up Firewall Aliases and Rules for LAN and WAN.

  • Set up Firewall Aliases
    Add aliases for ASNet, ext, and routers, under Firewall > Aliases.

Name: ASNet
Type: Network(s)
Network: 192.168.0.0
CIDR: 16

Name: ext
Type: Network(s)
Network: 192.168.68.167
Network: 192.168.68.167.xip.io

Summary of aliases: ASNet = 192.168.0.0/16; ext = 192.68.68.134, drewtest.activestate.com, 192.168.68.134.xip.io; routers = 192.168.111.74, 192.168.111.75

  • Set up Firewall Rules (LAN)
    Firewall > Rules > Add Rule

Source: LAN net
Destination: LAN address

Source: LAN address
Destination: LAN net

Source: LAN net
Destination: ASNet (note: Type: Single host or alias, Address: ASNet)

Source: ASNet (note: Type: Single host or alias, Address: ASNet)
Destination: LAN net

Source: LAN net
Destination: LAN net


  • Set up Firewall Rules (WAN)

Source: LAN net
Destination: LAN address

Source: LAN address
Destination: LAN net

Source: LAN net
Destination: ASNet (note: Type: Single host or alias, Address: ASNet)


AWS cli testing for Stackato

setup

Make sure to have the aws command line tool installed and configured.

Install the aws cli

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -h # for the help menu
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
aws help

Configure the aws cli

aws configure --profile <user-name>
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: text
aws ec2 run-instances --image-id ami-9b7b2bab  --key-name drew-linux-keys --security-group-ids sg-c4a6aaa1 --subnet-id subnet-83db0dda --associate-public-ip-address --count 3 --instance-type m3.medium --dry-run

aws ec2 create-tags --resources ami-9b7b2bab --tags Key=Name,Value=Drew-Test
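To confirm the tag was applied, the tags on that resource can be listed with a filtered describe-tags call:

aws ec2 describe-tags --filters "Name=resource-id,Values=ami-9b7b2bab" --output table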

Troubleshooting:

sudo -u postgres psql postgres
kato config get postgresql_node postgresql
psql --host 192.168.68.33 --port 5432 --username=stackato --dbname=sincontacts
tail -F /s/logs/postgresql*.log

Create a PostgreSQL Database

Install PostgreSQL if you haven't already.

Ubuntu

sudo sh -c "echo 'deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main' > /etc/apt/sources.list.d/pgdg.list"
wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-common
sudo apt-get install postgresql-9.3 libpq-dev

Mac OS X

brew install postgresql

PostgreSQL Setup

Set up a password, and create a role for your service broker to use to provision services

sudo -u postgres psql
postgres=# \password <my-password>

CREATE ROLE lol PASSWORD 'lol' CREATEUSER CREATEROLE CREATEDB LOGIN;

Edit postgresql.conf to set listen_addresses, and edit pg_hba.conf to trust the address(es) of the Stackato VM.
If you're not sure where these files are, try this:

sudo -u postgres psql
postgres=# SHOW config_file;

postgresql.conf

 57 # - Connection Settings -
 58 
 59 #listen_addresses = 'localhost'         # what IP address(es) to listen on;
 60                                         # comma-separated list of addresses;
 61                                         # defaults to 'localhost'; use '*' for all
 62                                         # (change requires restart)
 63 listen_addresses = '*'                  # ADD THIS LINE AND RESTART POSTGRES!
 64 port = 5432                             # (change requires restart)
 65 max_connections = 100                   # (change requires restart)
 66 # Note:  Increasing max_connections costs ~400 bytes of shared memory per
 67 # connection slot, plus lock space (see max_locks_per_transaction).
 68 #superuser_reserved_connections = 3     # (change requires restart)
 69 unix_socket_directories = '/var/run/postgresql' # comma-separated list of directories

pg_hba.conf: set the address of the Stackato VM as a trusted address

 83 # Database administrative login by Unix domain socket
 84 local   all             postgres                                peer
 85 
 86 # TYPE  DATABASE        USER            ADDRESS                 METHOD
 87 
 88 # "local" is for Unix domain socket connections only
 89 local   all             all                                     peer
 90 # IPv4 local connections:
 91 host    all             all             127.0.0.1/32            md5
 ###### ADD THE IP OF YOUR STACKATO VM HERE AS TRUST
 92 host    all             all             192.168.68.171/32       trust
 93 # IPv6 local connections:
 94 host    all             all             ::1/128                 md5

Stackato VM PostgreSQL Service Broker Config

Set the postgresql_node config (shown by kato config get postgresql_node postgresql) to use the external service.

kato config set postgresql_node postgresql/host <postgresql-host-ip>
kato config set postgresql_node postgresql/port 5432
kato config set postgresql_node postgresql/user lol
kato config set postgresql_node postgresql/pass lol

Batch Patching for Stackato 3.4.1: Stackato Batch Pack 1

Patch Release Queue | How to Issue Patches | Patch Folder

There has been 1 patch issued for 3.4.1 since the final release. Note: the actual release was not tagged. On the cloud controller, the branch (and tag) release-v3.4 was set for RC1. Commit 8198218 marks the actual release (Jul 30th).

Components Included in Stackato Batch Pack 1

cloud_controller_ng

Status: PENDING
Status update: Lorne has issued patches 81982, 8feab

Fixes:

  • Unable to delete applications
  • new LDAP users on first login assigned to dynamically generated space within org
  • LDAP user quota
  • LDAP user login on non-default org
  • add api enhancements
  • potential Loggregator issue with being unable to deploy apps

Bug Tracking:

  • 104487
  • 104670
    • This patch also needs to run "kato op import_from_yaml_file --upgrade cloud_controller_ng" once it's been applied.
  • 104730
  • 104784
  • 104963
  • 104692
  • no number: "cloud_controller_ng" 7d971eff884de4542f99c1caf90994f20f180621 (Qualcomm: Quota not set properly)
  • no number: "cloud_controller_ng" 629f4180845a83b823b8210896191ead984c658b, b5c855183d614dcc80c0da72817a2df0c7b12e73, 168f88b00e4a7431b572ad1e1e547392f209f0e2, cfb167fe2d8a3c827519be92e819d0c699a00118, 635a124545e7f64f5602aa0c2f9b65f6bb0d90a2, f205701d3b4da44c8f0adf4ee5e91af6b0667d27, 635b92e39e2867098fc76abdacc2f6916e22c2ac, bf00991a728e0bca304dbd1a72f5c4c50d9ac571, d8185df7997c2a775cee4d2d260fe8510509754f

Notes:

  • The config.yaml portion of 8feab55 will be manually extracted (excluded) from this batch patch.
  • When using the --from and --to flags with generate-patch, the patch will not include the commit given to --from, only the commits after it.
# ~/stackato-src
git clone [email protected]:ActiveState/cloud_controller_ng.git && cd cloud_controller_ng
git checkout release-v3.4
cd ~/stackato-src/stackato-ops/dev-tools/kato-patch

./generate-patch --from 8198218 --to 8499853 --repo ~/stackato-src/cloud_controller_ng --stackato-version 3.4.1 --get-stackato-path ~/stackato-src/get-stackato --name batch-patch-1-cloud_controller_ng

echo "kato op import_from_yaml_file --upgrade cloud_controller_ng" >> ~/stackato-src/get-stackato/static/kato-patch/3.4.1/batch-patch-1-cloud_controller_ng/patch.sh

console

Status: PENDING

Fixes:

  • kato report --all changed to kato report --cluster

Bug tracking:

Notes:

  • 8fe24d8 release-v3.4 styles/OEM customs
  • 2fb2f0f release-v3.4 --all/--cluster
  • 53f0a8f release-v3.4 (possibly last commit before release july 25)
# ~/stackato-src
git clone [email protected]:ActiveState/console.git && cd console
git checkout release-v3.4
cd ~/stackato-src/stackato-ops/dev-tools/kato-patch

./generate-patch --from 53f0a8f --to 8fe24d8 --repo ~/stackato-src/console --stackato-version 3.4.1 --get-stackato-path ~/stackato-src/get-stackato --name batch-patch-1-console

fence

Status: Pending
Fixes:

  • memory reporting issue
  • Two apps can communicate only under limited circumstances
  • Unable to mount NFS on container

Bug tracking:

Roles to Restart: DEA
Notes: We have 3 commits on the Patch List

# ~/stackato-src
git clone gitolite@gitolite:fence && cd fence
git checkout release-v3.4
git checkout 281c5d4
git checkout -b 104709
git cherry-pick 31ca397

cd ~/stackato-src/stackato-ops/dev-tools/kato-patch

./generate-patch --from b7f11f2 --to 281c5d4 --repo ~/stackato-src/fence --stackato-version 3.4.1 --get-stackato-path ~/stackato-src/get-stackato --name batch-patch-1-fence

printf "cd /s/code/fence/fence/\ntest -f Gemfile.lock && mv Gemfile.lock Gemfile.lock.bak\nbundle install && cd ~\n" >> ~/stackato-src/get-stackato/static/kato-patch/3.4.1/batch-patch-1-fence/patch.sh

kato

Status: PENDING
Fixes:

  • Inconsistent kato command: kato data repair_routes changed to kato data repair routes
  • hard-coded DOCKER_OPTS when relocating containers
  • recursive kato patch update issue

Bug tracking:

  • 104785
    • Consult with Adam on how to test.
  • 104893
    • make sure that the command is kato data routes repair
  • 105014
    • Set bug's patch needed to - once patch is complete.

Notes:

  • For patching kato: if the patch generator produces a path like /s/kato/core/lib/kato/, that corresponds to core/lib/kato/ in the repo and to /opt/rubies/current/lib/ruby/gems/1.9.1/gems/stackato-kato-3.0.0/lib/kato/ on the VM (this will be manually changed for now, but should definitely be refactored)

  • There is somewhat of a workaround for this, currently on the master branch, in the Makefile

    •   for dirs in aok cloud_controller_ng dea_ng fence/fence health_manager services/{filesystem,mysql,memcached,postgresql,redis,harbor,mongodb,rabbitmq}; \
        do \
          rsync -rv $(STACKATO_SSH_OPTS) --checksum --exclude=.git core/lib stackato@$(VM):/s/code/$$dirs/vendor/bundle/ruby/1.9.1/gems/stackato-kato-$(KATO_CORE_VERSION)/; \
        done
  • This patch contains a single commit

# ~/stackato-src
git clone [email protected]:ActiveState/kato.git && cd kato
git checkout release-v3.4

cd ~/stackato-src/stackato-ops/dev-tools/kato-patch

./generate-patch --from 882f228 --to 6c91e85 --repo ~/stackato-src/kato --stackato-version 3.4.1 --get-stackato-path ~/stackato-src/get-stackato --name batch-patch-1-kato

sentinel

Status: PENDING
Fixes: Loggregator issue: Unable to delete app after upgrade from 3.2.1 -> 3.4.1

Bug tracking:

Notes:

  • 26249c6 is the only patch in the patch release queue, which is from 24 days ago (aug.12).
  • b027687 is from July 30th (maybe last patch before release? - asking Stefan for method to verify: edit: it doesn't matter)
# ~/stackato-src
git clone [email protected]:ActiveState/sentinel.git && cd sentinel
cd ~/stackato-src/stackato-ops/dev-tools/kato-patch

./generate-patch --from b027687 --to 26249c6 --repo ~/stackato-src/sentinel --stackato-version 3.4.1 --get-stackato-path ~/stackato-src/get-stackato --name batch-patch-1-sentinel
=> unknown repo!

Note: sentinel hasn't been added to the generate-patch script. Open up the file and add it to the repo map:

# generate-patch

# ...

REPO_MAP = {
  "2" => {
    "stackato"          => "/s/",
    "aok"               => "/s/aok/",
    "kato"              => "/s/kato/",
    "cloud_controller"  => "/s/vcap/cloud_controller/",
    "dea"               => "/s/vcap/dea/",
    "fence"             => "/s/vcap/fence/",
    "stackato-router"   => "/s/vcap/stackato-router/",
    "vcap-staging"      => "/s/vcap/staging/",
  },
  "3" => {
    "stackato"          => "/s/",
    "kato"              => "/s/kato/",
    "sentinel"          => "/s/code/sentinel/",
    "aok"               => "/s/code/aok",

# ...

Then rerun the generate-patch script

./generate-patch --from b027687 --to 26249c6 --repo ~/stackato-src/sentinel --stackato-version 3.4.1 --get-stackato-path ~/stackato-src/get-stackato --name batch-patch-1-sentinel

=> patch.sh generated.
=> patch.yml generated. Make sure you edit it to add a description, edit the severity if necessary, and specify the roles to restart!
=> patch can be found in /home/drewo/stackato-src/get-stackato/static/kato-patch/3.4.1/batch-patch-1-sentinel.

Batch Patch Testing
cloud_controller_ng

  • 104487
  • 104670
  • 104730
  • 104784
  • 104963
  • 104692
  • no number: "cloud_controller_ng" 7d971eff884de4542f99c1caf90994f20f180621 (Qualcomm: Quota not set properly)
  • no number: "cloud_controller_ng" 629f4180845a83b823b8210896191ead984c658b, b5c855183d614dcc80c0da72817a2df0c7b12e73, 168f88b00e4a7431b572ad1e1e547392f209f0e2, cfb167fe2d8a3c827519be92e819d0c699a00118, 635a124545e7f64f5602aa0c2f9b65f6bb0d90a2, f205701d3b4da44c8f0adf4ee5e91af6b0667d27, 635b92e39e2867098fc76abdacc2f6916e22c2ac, bf00991a728e0bca304dbd1a72f5c4c50d9ac571, d8185df7997c2a775cee4d2d260fe8510509754f

console

fence

kato

sentinel

Creating a Patch for Stackato

For the purpose of this guide, we will be using bug 106134 and this commit on AOK.

In this case, we only need to change one file, controllers/oauth_controller.rb, because we do not need our unit tests and their dependencies deployed to production.

We want to make a tarball of all the files touched in the patch, laid out under the paths where they will reside on the VM. So, for example, the file aok/controllers/oauth_controller.rb will reside at home/stackato/code/aok/controllers/oauth_controller.rb on the VM.

PATCH_NAME=aok-endpoint-fix
TMPDIR=$(mktemp -d) && cd $TMPDIR
PATH_ON_VM=stackato/code/aok/controllers
mkdir -p $PATH_ON_VM && cd $PATH_ON_VM
curl https://raw.githubusercontent.com/ActiveState/aok/4be72a9ab975c0fc0865faef99ce0a6a1754fe9e/controllers/oauth_controller.rb > oauth_controller.rb
cd $TMPDIR
tar -cvf $PATCH_NAME.tar stackato

Upload the tar to HP cloud: US East > Object Storage > kato-patch-blob > 3.4.2 > [Create a Pseudo-folder with patch number and name], in this case, we end up with 001-aok-endpoint-fix.

Note: You will also want to create another tarball with the file(s) touched in the patch at the state previous to those commits, called <patch-name-revert>.tar, which can be used to revert the changes.
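As a minimal sketch of building that revert tarball, assuming the aok repo is cloned locally (the commit placeholder and paths are illustrative):

# Hypothetical example: grab the file as it was before the patch commit and tar it
# under the same path layout, so extracting it over $HOME undoes the change.
REVERT_NAME=aok-endpoint-fix-revert
TMPDIR=$(mktemp -d) && cd $TMPDIR
PATH_ON_VM=stackato/code/aok/controllers
mkdir -p $PATH_ON_VM
# <previous-commit> is the commit immediately before the patch commit
git -C ~/stackato-src/aok show <previous-commit>:controllers/oauth_controller.rb > $PATH_ON_VM/oauth_controller.rb
tar -cvf $REVERT_NAME.tar stackato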

Then create a shell script with the appropriate variables set (you should only need to set PATCH_DIR, TAR, REVERT_TAR, and possibly VERSION).

#!/bin/bash

set -o errexit

VERSION="3.4.2"
PATCH_DIR="001-aok-endpoint-fix"
TAR="aok-endpoint-fix"
REVERT_TAR="aok-endpoint-fix-revert"

HPCLOUD="https://a248.e.akamai.net/cdn.hpcloudsvc.com/g701e14e9fa37a30cfbbc3a01f78365e2/prodae1/"
REVERT_PATH=$HPCLOUD/$VERSION/$PATCH_DIR/$REVERT_TAR.tar
TAR_PATH=$HPCLOUD/$VERSION/$PATCH_DIR/$TAR.tar

TMPDIR=$(mktemp -d)
cd $TMPDIR

if [ "$1" == "revert" ]; then
  curl $REVERT_PATH > $REVERT_TAR.tar
  tar -C $HOME -xvf $REVERT_TAR.tar
else
  curl $TAR_PATH > $TAR.tar
  tar -C $HOME -xvf $TAR.tar
fi

cd $HOME
rm -rf $TMPDIR

The resulting patch.sh should be placed in get-stackato/static/kato-patch/$VERSION/<patch-name>, in this case get-stackato/static/kato-patch/3.4.2/aok-endpoint-fix/patch.sh and an appropriate patch.yml needs to be created. Here's an example:

---
id: 1
name: aok-endpoint-fix
stackato_version: 3.4.2
description: 'Correct aok endpoint redirecting to custom uri'
roles_to_restart: [router, controller]
severity: required

The manifest.json for the version will also have to be manually updated; in this case it is located at get-stackato/static/kato-patch/3.4.2/manifest.json, and looks like this (at the time of this writing):

{
  "stackato_version": "3.4.2",
  "kato_patch": {
    "version": "15",
    "url": "https://get.stackato.com/kato-patch/updates/3.4.2/kato-patch-15.sh"
  },
  "patches": {
    "aok-endpoint-fix": {
      "download_url": "https://get.stackato.com/kato-patch/3.4.2/aok-endpoint-fix/patch.sh",
      "id": 1,
      "description": "Correct aok endpoint redirecting to custom uri",
      "roles_to_restart": [
        "router",
        "controller"
      ],
      "severity": "required"
    }
  }
}

Note: This guide is not yet complete. There are many things that need to be improved about this process, but it is a start.

#!/usr/bin/ruby
require "docopt"
require "io/console"

doc = <<DOCOPT
kpatch - Patch Creation Tool
Usage:
  kpatch [options]
Options:
  -h --help                      Display this help menu.
  -v --version                   Display kpatch version.
  -o --options                   Display all options.
  --stackato-version <version>   Set stackato version [default: 3.4.2].
  -r --repo <path>               Set path to repository for patch.
  --stackato-src <path>          Set path to stackato-src directory.
  -f --from <commit>             Set commit one commit before where you want to start from.
  -t --to <commit>               Set commit to the last commit to include in the patch.
DOCOPT

begin
  options = Docopt::docopt(doc)
rescue Docopt::Exit => e
  puts e.message
  exit # nothing to do if the arguments could not be parsed
end

if options["--version"]
  puts "kpatch version 1.0"
end

STACKATO_VERSION = "release-v#{options["--stackato-version"] || "3.4.2"}"
REPO = options["--repo"]
STACKATO_SRC = options["--stackato-src"]
FROM_COMMIT = options["--from"]
TO_COMMIT = options["--to"]

if options["--options"]
  options.each do |k, v|
    puts "#{k}: #{v}"
  end
end

class KatoPatch
  # Map component names to the path of the vendored stackato-kato gem on the VM.
  def kato_gem_map(*args)
    $kato_path = "vendor/bundle/ruby/1.9.1/gems/stackato-kato-3.0.0/"
    map = {}
    code = ["aok", "cloud_controller_ng", "dea_ng", "fence", "health_manager"]
    services = ["filesystem", "harbor", "memcached", "mongodb", "mysql",
                "postgresql", "rabbitmq", "redis"]
    if args[0] == "all"
      args.push("global")
      args.push("kato")
      code.each do |c|
        args.push(c)
      end
      services.each do |s|
        args.push(s)
      end
    end
    args.each do |component|
      if component == "kato"
        map["kato"] = "stackato/kato/#{$kato_path}"
      elsif component == "global"
        map["global"] = "stackato/.rbenv/versions/current/lib/ruby/gems/1.9.1/gems/stackato-kato-3.0.0"
      elsif code.include?(component)
        map[component] = "stackato/code/#{component}/#{$kato_path}"
      elsif services.include?(component)
        map[component] = "stackato/code/services/#{component}/#{$kato_path}"
      else
        puts "ERROR!: No kato gem mappings for: #{component}"
      end
    end
    map
  end

  def make_dirs(map)
    map.each do |k, v|
      system("mkdir -p #{v}")
    end
  end

  def add_files(map, file_name, file_path)
    map.each do |k, v|
      system("mkdir -p #{v}/#{file_path} && cp #{file_name} #{v}/#{file_path}")
    end
  end
end

NFSv4 quick start | digitalocean tutorial

NFS Mounts on Containers: Ubuntu 12.04 an VirtualBox

2 VMs (example IP addresses, please use your own): Master: 192.168.68.81, Client: 192.168.68.26

Version Ubuntu 12.04:
NFSv4 Server

sudo su
apt-get install nfs-kernel-server portmap
mkdir -p /export/users   
mkdir /home/users
mount --bind /home/users /export/users
echo "/home/users    /export/users   none    bind  0  0" >> /etc/fstab
printf "[Translation]\n\nMethod = nsswitch" >> /etc/idmapd.conf
rpcbind
printf "/export       *(rw,fsid=0,insecure,no_subtree_check,async)\n/export/users       *(rw,nohide,insecure,no_subtree_check,async)" >> /etc/exports
echo "rpcbind mountd nfsd statd lockd rquotad : ALL" >>  /etc/hosts.deny
# echo "rpcbind mountd nfsd statd lockd rquotad : [LIST-OF-SPACE-DELIMITED-IP-ADDRESSES]" >> /etc/hosts.allow
echo "rpcbind mountd nfsd statd lockd rquotad : 127.0.0.1 192.168.68.81 192.168.68.26 192.168.69.135" >> /etc/hosts.allow
exportfs -ra
sudo service portmap restart
service nfs-kernel-server restart

NFSv4 Client

apt-get update
apt-get install nfs-common
echo "rpcbind : ALL" >> /etc/hosts.deny 
echo "rpcbind : 192.168.68.81" >> /etc/hosts.allow
mount -t nfs -o nolock,proto=tcp,port=2049 192.168.68.81:/export /mnt
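To have the client remount the export at boot, an fstab entry along these lines can be added (a sketch; the options mirror the mount command above):

echo "192.168.68.81:/export  /mnt  nfs  nolock,proto=tcp,port=2049  0  0" >> /etc/fstab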

Client: For mounting on Stackato App

1. kato config set dea_ng docker/privileged true
2. stackato quota configure --allow-sudo
3. kato config push fence docker/allowed_subnet_ips 192.168.68.81
4. stackato.yml configured with:
  a. requirements:
       ubuntu:
       - nfs-common
  b. hooks:
       pre-running:
       - sudo mount 192.168.68.81:/export /mnt
#!/bin/bash
DOWNLOADDIR=`mktemp -d`
#SVMPATH=http://svmbuild.activestate.com/v3-nightly-ovf/latest/
#SVMPATH=http://svmbuild.activestate.com/index/all/3.4.1-nightly-ovf/latest/
SVMPATH=http://svmbuild.activestate.com/index/all/release-v3.4.1-ovf/latest/
TODAY=`date +"%b%d"`
VMSUFFIX=nightly
VMNAME=$TODAY-$VMSUFFIX
# delete old vms
DATETOTRASH=`date +"%b%d" --date='2 days ago'`
TRASHVM=$DATETOTRASH-$VMSUFFIX
# cluster suffixes
SUFFIXES="A B C"

download_and_unzip () {
  cd $DOWNLOADDIR
  wget -r -l1 -A zip -nd $1
  unzip *.zip -d $2
  mv $2/Stackato-VM/* $2
  rm *.zip
}

setup_base_vm () {
  vboxmanage import $1/*.ovf --vsys 0 --unit 4 --ignore --vmname $VMNAME
  vboxmanage modifyvm $1 --nic1 bridged --bridgeadapter1 eth0 --cpus 2 --memory 2048
  vboxmanage snapshot $1 take FirstBoot
}

trash_old_micro () {
  if [ -n "$1" ]; then
    vboxmanage unregistervm $1 --delete
  fi
}

trash_old_cluster () {
  if [ -n "$1" ]; then
    for i in $2
    do
      vboxmanage unregistervm $1-$i --delete
    done
  fi
}

clonecluster_vm () {
  for i in $2
  do
    vboxmanage clonevm $1 --mode all --name $1-$i --register
  done
}

while getopts ":cmtn:" opt; do
  case $opt in
    c)
      CRONJOB=1
      ;;
    n)
      VMNAME=$OPTARG
      ;;
    t)
      TRASH=1
      ;;
    m)
      MICROONLY=1
      ;;
  esac
done

echo $VMNAME

if [ -n "$CRONJOB" ]; then
  echo "NonInteractive"
  download_and_unzip $SVMPATH $VMNAME
  setup_base_vm $VMNAME
  if [ -n "$TRASH" ]; then
    trash_old_micro $TRASHVM
  fi
  if [ -z "$MICROONLY" ]; then
    clonecluster_vm $VMNAME "$SUFFIXES"
    if [ -n "$TRASH" ]; then
      trash_old_cluster $TRASHVM "$SUFFIXES"
    fi
  fi
else
  echo "Interactive"
  read -p "Download Micro? (y/n)"
  if [[ $REPLY == "y" ]]; then
    download_and_unzip $SVMPATH $VMNAME
    setup_base_vm $VMNAME
  fi
  read -p "3-node cluster? (y/n)"
  if [[ $REPLY == "y" ]]; then
    clonecluster_vm $VMNAME "$SUFFIXES"
  fi
fi
#!/usr/bin/env ruby
require 'rubygems'
require 'date'
require 'net/ssh'
require 'net/ssh/shell'
require 'net/http'

# Some potentially useful VBoxManage commands:
#
# GROUPS
# vboxmanage createvm --groups <group>
# vboxmanage modifyvm --groups <group>
# vboxmanage clonevm --groups <group>
#
# Variables for download and setup
TODAY = Time.now.to_date
DELETEDATE = TODAY - 2
ROLES = ["base", "controller", "dea", "filesystem", "harbor", "load_balancer", "mdns", "memcached", "mongodb", "mysql", "postgresql", "primary", "rabbit", "rabbit3", "redis", "router"]

# Nightly Build Download
NIGHTLY = 'http://svmbuild.com/index/all/3.4.1-nightly-ovf/latest/'
nightly_uri = URI(NIGHTLY)
Net::HTTP.start(nightly_uri.host) do |http|
  resp = http.get(nightly_uri.path)
end

# VM Setup
GROUP = "nightly-#{TODAY.strftime('%b%d')}"
VMNAME = "nightly-#{ROLE}-#{TODAY.strftime('%b%d')}" # NOTE: ROLE must be set (e.g. one of ROLES) before this line
USER = 'stackato'
PASSWORD = 'stackato'
CORE_IP = '192.168.68.17'
DEA_IP = '192.168.68.15'
SERVICES_IP = '192.168.69.101'

# begin
#   Net::SSH.start( CORE_IP, USER, password: PASSWORD ) do |ssh|
#     puts ">>>>>>>>>>>>> CORE IP SETUP STARTING >>>>>>>>>>>>>>"
#     ssh.shell do |bash|
#       process = bash.execute 'whoami'
#       process.on_output do |a, b|
#         puts b
#       end
#       bash.wait!
#       bash.execute! 'exit'
#     end
#     ssh.exec("kato node setup core api.#{CORE_IP}.xip.io")
#     puts ">>>>>>>>>>>>> CORE IP SETUP COMPLETE >>>>>>>>>>>>>>"
#   end
# rescue
#   puts "Unable to connect user: #{USER} to #{CORE_IP}."
# end
#
# begin
#   Net::SSH.start( DEA_IP, USER, password: PASSWORD ) do |ssh|
#     puts ">>>>>>>>>>>>> DEA IP SETUP STARTING >>>>>>>>>>>>>>"
#     ssh.shell do |bash|
#       process = bash.execute 'whoami'
#       process.on_output do |a, b|
#         puts b
#       end
#       bash.wait!
#       bash.execute! 'exit'
#     end
#     ssh.exec("kato node attach -e dea #{CORE_IP}")
#     puts ">>>>>>>>>>>>> DEA IP SETUP COMPLETE >>>>>>>>>>>>>>"
#   end
# rescue
#   puts "Unable to connect user #{USER} to #{DEA_IP}"
# end

begin
  Net::SSH.start( SERVICES_IP, USER, password: PASSWORD ) do |ssh|
    puts ">>>>>>>>>>>>> SERVICES IP SETUP STARTING >>>>>>>>>>>>>>"
    ssh.shell do |bash|
      process = bash.execute 'sudo su'
      process.on_output do |a, b|
        puts b
      end
      bash.wait!
      bash.execute 'stackato'
      bash.execute 'exit'
    end
    ssh.exec("kato node attach -e dea #{CORE_IP}")
    puts ">>>>>>>>>>>>> SERVICES IP SETUP COMPLETE >>>>>>>>>>>>>>"
  end
rescue
  puts "Unable to connect user #{USER} to #{SERVICES_IP}"
end
#!/usr/bin/env ruby
require 'yaml'
require 'trollop'
require 'fileutils'

# Nested map of Stackato version and component name to installed location.
# Short version tuples can be used as wildcards. If a longer tuple is
# defined here and it also matches the given version, it will win over the
# shorter match.
REPO_MAP = {
  "2" => {
    "stackato"          => "/s/",
    "aok"               => "/s/aok/",
    "kato"              => "/s/kato/",
    "cloud_controller"  => "/s/vcap/cloud_controller/",
    "dea"               => "/s/vcap/dea/",
    "fence"             => "/s/vcap/fence/",
    "stackato-router"   => "/s/vcap/stackato-router/",
    "vcap-staging"      => "/s/vcap/staging/",
  },
  "3" => {
    "stackato"            => "/s/",
    "kato"                => "/s/kato/",
    "sentinel"            => "/s/code/sentinel/",
    "aok"                 => "/s/code/aok",
    "cloud_controller_ng" => "/s/code/cloud_controller_ng",
    "console"             => "/s/code/console",
    "dea_ng"              => "/s/code/dea_ng",
    "fence"               => "/s/code/fence",
    "health_manager"      => "/s/code/health_manager",
    "stackato-rest"       => "/s/code/stackato-rest",
    "stackato-router"     => "/s/code/stackato-router",
    "vcap-common"         => "/s/code/common",
    "base"                => "/s/code/services/base",
    "filesystem"          => "/s/code/services/filesystem",
    "harbor"              => "/s/code/services/harbor",
    "memcached"           => "/s/code/services/memcached",
    "mongodb"             => "/s/code/services/mongodb",
    "mysql"               => "/s/code/services/mysql",
    "postgresql"          => "/s/code/services/postgresql",
    "rabbitmq"            => "/s/code/services/rabbitmq",
    "redis"               => "/s/code/services/redis",
    "elasticsearch"       => "/s/code/services/elasticsearch",
    "stackato-service-postgresql" => "/s/code/services/postgresql",
  }
}

# Versions which compare such that "n.0" > "n". This allows more specific
# version comparisons to rank higher than shorter generic ones.
class Version < Array
  def initialize s
    super(s.split(".").map { |e| e.to_i })
  end
  def < x
    (self <=> x) < 0
  end
  def > x
    (self <=> x) > 0
  end
  def == x
    (self <=> x) == 0
  end
end

opts = Trollop::options do
  banner 'kato patch generation'
  opt :from, "Commit from which to patch (the patch will not contain this commit)", :type => String
  opt :to, "Commit to which to patch", :type => String
  opt :commit, "Commit which contains the patch", :type => String
  opt :repo, "Full path to the repo", :type => String
  opt :stackato_version, "stackato version for this patch", :type => String
  opt :name, "Name for this patch", :type => String
  opt :get_stackato_path, "Full path to your get-stackato.git repository", :type => String
  opt :root, "Root required to run the patch?"
end

def lookup_repo_map(stackato_version, repo_name)
  # Find the closest matching version in the repo map and then look up the
  # specified name. Return nil on no matching version or no matching name.
  list = []
  REPO_MAP.each do |version, dummy|
    list << version if Regexp.new("^#{Regexp.escape(version)}") =~ stackato_version
  end
  closest = list.sort.last
  return nil unless closest
  REPO_MAP[closest][repo_name]
end

stackato_version = opts[:stackato_version]
repo = opts[:repo]
Trollop::die :stackato_version, "must be provided" unless stackato_version
Trollop::die :repo, "must be provided" unless repo

repo_name = File.basename(repo)
unless lookup_repo_map(stackato_version, repo_name)
  abort "unknown repo!"
end

md5_prog = nil
uname = `uname`.strip
if uname == "Darwin"
  md5_prog = "md5"
elsif uname == "Linux"
  md5_prog = "md5sum"
end

patch = nil
changed_files = {}
Dir.chdir(repo) do
  if opts[:commit]
    patch = `git diff --no-prefix #{opts[:commit]}^1 #{opts[:commit]} 2>/dev/null`
  elsif opts[:from] && opts[:to]
    patch = `git diff --no-prefix #{opts[:from]} #{opts[:to]} 2>/dev/null`
  end
  patch = patch.split "\n"
  # Rewrite the +++/--- file headers to point at the installed location on the
  # VM, and record an md5 of each file as it was before the patch.
  patch.map! do |line|
    next if line =~ /^diff/
    next if line =~ /^index/
    next if line =~ /^new file mode/
    if line =~ /^[\+\-][\+\-][\+\-]/
      delim, tail_path = line.split(' ', 2)
      if tail_path.strip == '/dev/null'
        line
      else
        path = File.join(lookup_repo_map(stackato_version, repo_name), tail_path)
        if delim == "---"
          prev_commit = opts[:from] ? opts[:from] : "#{opts[:commit]}^1"
          changed_files[path] = `git show #{prev_commit}:#{tail_path} 2>/dev/null | #{md5_prog} 2>/dev/null`.strip
        end
        "#{delim} #{path}"
      end
    else
      line
    end
  end
  patch.delete(nil)
  default_patch_options = "-f -N --reject-file - --quiet -p0"
  if opts[:root]
    patch.unshift "sudo patch $REVERSEOPT #{default_patch_options} << 'EOF'"
  else
    patch.unshift "patch $REVERSEOPT #{default_patch_options} << 'EOF'"
  end
  patch << "EOF"
  patch << nil
  patch = patch.join "\n"
  patch_header = <<'EOT'
#!/bin/bash
set -o errexit
if [[ "$1" == "revert" ]]; then
REVERSEOPT="--reverse"
fi
EOT
  patch = patch_header + patch
end

IO.write("patch.sh", patch)
puts "patch.sh generated."

patch_id = 1
version_path = nil
if opts[:get_stackato_path]
  version_path = File.expand_path(File.join(opts[:get_stackato_path], "static", "kato-patch", opts[:stackato_version]))
else
  version_path = File.expand_path(File.join(File.dirname(File.expand_path($0)), '..', '..', '..', opts[:stackato_version]))
end
if File.directory? version_path
  patch_list = Dir.entries(version_path)
  patch_list.delete('.')
  patch_list.delete('..')
  patch_list.delete('manifest.json')
  patch_id = patch_list.length + 1
end

manifest = {
  "id" => patch_id,
  "name" => opts[:name],
  "stackato_version" => opts[:stackato_version],
  "description" => "",
  "roles_to_restart" => [],
  "severity" => "required", # or 'optional'
  # "changed_files" => changed_files,
}
IO.write("patch.yml", YAML.dump(manifest) )
puts "patch.yml generated. Make sure you edit it to add a description, edit the severity if necessary, and specify the roles to restart!"

patch_path = File.join(version_path, opts[:name])
FileUtils.mkdir_p(patch_path)
FileUtils.mv(["patch.sh", "patch.yml"], patch_path)
puts "patch can be found in #{patch_path}."

Git Workflow

Strategy

Branches

Master
Used to mark latest released version. Only successfully tagged RC commits can be merged here as markers of what was released. This is optional. We can mark RCs that have been released elsewhere or with a different tag.

RC
Used to make release candidates. Only completed features that are "production ready" are merged here. The latest RC (Release Candidate) tagged commit can always be published. Tags are made manually by people fulfilling the QA role after a commit in the RC is deemed to be production ready. It should have documentation, automated specs, smoke tests, release notes, and anything else necessary. This branch can be reset as needed; tags will hold the relevant history. This branch will create artefacts and persist them if all tests pass on the CI (Continuous Integration) server.

QA
Used to test a combination of dev-complete work by QA. This branch will create artefacts for automated tests, but the CI server will not persist them. It's the equivalent of the dev branch but for QAs.

Dev
Used to facilitate the integration and testing of on-going work. Incomplete features are merged here frequently to ensure combinations of features don't adversely affect one another or have other side effects. This branch will build artefacts and run tests but not persist them beyond the CI server. This is akin to our current master branch.

Feature
Implementing a feature or fixing a bug will be done in a topic branch. This branch will be merged onto the dev branch frequently for integration purposes. The key is that the merge command is issued when the dev branch is checked out, to eliminate back merges, as features would otherwise become bound to one another. Once complete, the branch can be merged onto the QA branch, and then onto the RC branch once QA checks it out (see the sketch below).
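A minimal sketch of that merge flow, assuming a feature branch named feature/my-feature and dev/qa/rc as the integration branches:

# Hypothetical branch names; the merge is always issued from the receiving branch.
git checkout dev
git merge feature/my-feature    # integrate in-progress work into dev
git checkout qa
git merge feature/my-feature    # once dev-complete, hand off to QA
git checkout rc
git merge feature/my-feature    # after QA sign-off, promote to the release candidate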

Repository Structure
The repositories will mirror Cloud Foundry's structure. Any branching done for work should bubble up to stackato-release.

Example Git Flow

Find the common ancestor

git merge-base --octopus v2.10.4 v2.10.6 v3.2.1 v3.4.1

Make a branch for your work

git checkout -b bug100000 $(git merge-base --octopus v2.10.4 v2.10.6 v3.2.1 v3.4.1)

Do work, test changes, then commit the fix into history

vim path/to/file.rb
git add -A          # include all changes
git commit -m "100000: Add missing link to GitHub Wiki search plugin"
git push -u

Merge Fix into Each Version

git checkout v2.10.6
git merge bug100000
git commit 

/users.json

curl -i -X GET http://localhost:3000/users.json
returns

[{"id":1,"name":"Customer Test","address":"1234 Anywhere","province":"BC","country":"Canada","url":"http://localhost:3000/users/1.json"},{"id":2,"name":"Driver Test","address":"1234 Anywhere","province":"BC","country":"Canada","url":"http://localhost:3000/users/2.json"},{"id":3,"name":"Vendor Test","address":"1234 Anywhere","province":"BC","country":"Canada","url":"http://localhost:3000/users/3.json"},{"id":4,"name":"Grower Test","address":"1234 Anywhere","province":"BC","country":"Canada","url":"http://localhost:3000/users/4.json"},{"id":6,"name":"Other Dispensary Test","address":"1234 Anywhere","province":"BC","country":"Canada","url":"http://localhost:3000/users/6.json"},{"id":7,"name":"drew ogryzek","address":"1234 Anywhere","province":"BC","country":"Canada","url":"http://localhost:3000/users/7.json"},{"id":5,"name":"Admin Test","address":"1234 Anywhere","province":"BC","country":"Canada","url":"http://localhost:3000/users/5.json"}]

in the spec helper

# spec/spec_helper.rb

# ...

module Requests                
  module JsonHelpers           
    def json                   
      @json ||= JSON.parse(response.body)
    end
  end   
end

RSpec.configure do |config|    
   config.include Requests::JsonHelpers, type: :request

  # ... 

end

spec/requests/users_spec.rb

require 'rails_helper'

RSpec.describe "/users", type: :request do
  it "GET /users" do
    status, headers, body = *response
    puts "status: #{status}"
    puts "headers: #{headers}"
    puts "body: #{body.to_ary}"
  end
end

Returns

status: 200
headers: {"X-Frame-Options"=>"SAMEORIGIN", "X-XSS-Protection"=>"1; mode=block", "X-Content-Type-Options"=>"nosniff", "Content-Type"=>"application/json; charset=utf-8", "ETag"=>"W/\"d751713988987e9331980363e24189ce\"", "Cache-Control"=>"max-age=0, private, must-revalidate", "X-Request-Id"=>"f1cb49e5-9473-4a8f-af7e-e0117a43303d", "X-Runtime"=>"0.075757", "Vary"=>"Origin", "X-Rack-CORS"=>"preflight-hit; no-origin", "Content-Length"=>"2"}
body: