| | Calico | Flannel | Weave | Docker Overlay Network |
|---|---|---|---|---|
| Network Model | Pure Layer-3 Solution | VxLAN or UDP Channel | VxLAN or UDP Channel | VxLAN |
| Application Isolation | Profile Schema | CIDR Schema | CIDR Schema | CIDR Schema |
| Protocol Support | TCP, UDP, ICMP & ICMPv6 | ALL | ALL | ALL |
#!/bin/bash
# This script requires an up-to-date version of the aws cli tool
profile=$1
environment=$2
region=us-east-1
service_name=com.amazonaws.$region.s3

get_env_vpc_id () {
  # Look up the VPC for this environment. This body is a sketch: it assumes
  # the VPC is tagged with an "Environment" tag matching $environment.
  aws ec2 describe-vpcs --profile "$profile" --region "$region" \
    --filters "Name=tag:Environment,Values=$environment" \
    --query 'Vpcs[0].VpcId' --output text
}
From Why Client Side Rendering Won:
- No Full Page Reload Required
- Lazy Loading
- Rich Interactions
- Cheap Hosting
- Use a CDN
- Easy Deployments
- Enforced Separation of Concerns
- Learn Once, Write Everywhere
- Same UI Technology for Web, Native Mobile, and Desktop
How do I install from a local cache with pip?
PIP_DOWNLOAD_CACHE has some serious problems. Most importantly, it encodes the hostname of the download into the cache, so using mirrors becomes impossible.
The better way to manage a cache of pip downloads is to separate the "download the package" step from the "install the package" step. The downloaded files are commonly referred to as "sdist files" (source distributions) and I'm going to store them in a directory $SDIST_CACHE.
The two steps end up being:
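A minimal sketch of those two steps (assuming `$SDIST_CACHE` is an existing directory and `requirements.txt` is a hypothetical requirements file):

```shell
# Step 1: download sdists into the local cache without installing anything
pip download --dest "$SDIST_CACHE" -r requirements.txt

# Step 2: install from the cache only, without touching the network
pip install --no-index --find-links "$SDIST_CACHE" -r requirements.txt
```

Because step 2 passes `--no-index`, pip resolves every package from `$SDIST_CACHE`, so the cache works the same no matter which mirror originally served the files.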
{
  "method": "$context.httpMethod",
  "resourcePath": "$context.resourcePath",
  "querystring": {
    #foreach($key in $input.params().querystring.keySet())
    "$key": "$input.params().querystring.get($key)"#if($foreach.hasNext),#end
    #end
  },
  "path": {
    #foreach($key in $input.params().path.keySet())
# This is a "Managed Script" in Jenkins
COMMIT=$(aws lambda get-alias --region "$AWS_REGION" --function-name "$FUNCTION_NAME" --name "$PUBLISH_FROM_ALIAS" --query 'Description' --output text)
VERSION=$(aws lambda publish-version --region "$AWS_REGION" --function-name "$FUNCTION_NAME" --description "$COMMIT" --query 'Version' --output text)
aws lambda update-alias --region "$AWS_REGION" --function-name "$FUNCTION_NAME" --function-version "$VERSION" --name "$PUBLISH_TO_ALIAS" --description "$COMMIT"
From Some more on AWS IoT. It’s a little difficult to simplify what’s going on, but I think this is pretty good. At the highest level, think of it this way: AWS IoT receives messages and routes them, based on rules, to other AWS services.
Here’s the basic workflow of AWS IoT. It’s a simplification and leaves out a number of important services, but this is the core of it. To understand this is to understand what AWS IoT can do … and what it can do for you.
Here’s the workflow:
- your Things send messages
- the Device Gateway receives and authenticates the messages
- the Rules Engine authorizes the messages and then routes them to other AWS services
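As a concrete sketch of the last step, a topic rule pairs a SQL statement with a list of actions. The topic, threshold, and Lambda ARN below are all hypothetical:

```json
{
  "sql": "SELECT temperature, deviceId FROM 'sensors/+/telemetry' WHERE temperature > 30",
  "actions": [
    {
      "lambda": {
        "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:HandleHotReading"
      }
    }
  ]
}
```

Any message published to a matching topic that passes the WHERE clause is handed to the named Lambda function; swapping the action routes the same messages to S3, DynamoDB, Kinesis, and so on.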
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "staffjoy-deploy",
    "Key": "docker.cfg"
  },
  "Image": {
    "Name": "staffjoy/app:TAG",
    "Update": "true"
  },