I hereby claim:
- I am AndrewFarley on github.
- I am farleyfarley (https://keybase.io/farleyfarley) on keybase.
- I have a public key whose fingerprint is 00A1 B493 ECE1 1838 2348 51EC 4C9F 2F01 1DF0 D96A
To claim this, I am signing this object:
resource "template_file" "cf" {
  vars {
    cluster_name           = "${var.cluster_name}"
    csg_name               = "${aws_elasticache_subnet_group.default_redis_sg.name}"
    cluster_internal_sg_id = "${module.ecs-cluster.cluster_internal_sg_id}"
  }
  template = <<STACK
{
  "Resources" : {
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:Describe*",
        "cloudformation:List*",
        "cloudformation:Get*",
        "cloudformation:PreviewStackUpdate",
provider "aws" {
  region = "eu-west-1"
}

# These are inputs we need to define
variable "domain" {
  default = "mydomain.com"
}

# For every VPC in here we'll associate with our internal zone
variable "vpcs" {
#!/bin/bash
LOGFILE=/tmp/backup-gitlab-to-s3.log
GITLAB_BACKUP_FOLDER=/var/opt/gitlab/backups
S3_FILE_PREFIX=gitlab
S3_BUCKET_PATH=bucket-name-goes-here/folder-here
SLACK_USERNAME="Backup Gitlab Daily - $(hostname)"
SLACK_CHANNEL="#od-infra-monitoring"
SLACK_ICON="https://s3.amazonaws.com/kudelabs-archives/harddrive256.png"
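The variables above configure a backup flow: archive files land in the GitLab backup folder, get uploaded under a dated key in S3, and a Slack notification reports the result. A minimal Python sketch of the two pieces of string-building involved (the `<prefix>-<timestamp>.tar` key naming and the function names are illustrative assumptions, not the script's actual behavior):

```python
import json


def build_backup_destination(bucket_path, prefix, timestamp):
    """Compose an S3 destination for a dated backup archive.

    bucket_path and prefix mirror S3_BUCKET_PATH / S3_FILE_PREFIX above;
    the "<prefix>-<timestamp>.tar" naming is an assumption for illustration.
    """
    return "s3://%s/%s-%s.tar" % (bucket_path, prefix, timestamp)


def build_slack_payload(username, channel, icon_url, text):
    """JSON body in the shape a Slack incoming webhook accepts."""
    return json.dumps({
        "username": username,
        "channel": channel,
        "icon_url": icon_url,
        "text": text,
    })
```

The real script would hand the first string to `aws s3 cp` and POST the second to a webhook URL with `curl`.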
#!/usr/bin/env python
'''
This simple helper creates an S3 bucket for remote state usage, and
then creates a .tf file with the remote state configuration. This
is useful when a team or pair is developing on the same environment,
and it allows a stack to be used on multiple accounts, or many
times on the same account (depending on the uniqueness of the
region, stack name, or env name).

Written by Farley <farley@neonsurge.com> <farley@olindata.com>
'''
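The .tf file such a helper emits can be sketched as a small rendering function. This is a hedged illustration using today's `backend "s3"` block syntax; the original helper may well have used the older `terraform remote config` mechanism, and the bucket/key/region values here are placeholders:

```python
def render_remote_state_tf(bucket, key, region):
    """Render a Terraform S3 backend block pointing at the given remote state.

    Assumes the modern `backend "s3"` syntax; older Terraform versions
    configured remote state via the `terraform remote config` CLI instead.
    """
    return (
        'terraform {\n'
        '  backend "s3" {\n'
        '    bucket = "%s"\n'
        '    key    = "%s"\n'
        '    region = "%s"\n'
        '  }\n'
        '}\n'
    ) % (bucket, key, region)
```

Keying the state path on region, stack name, and env name is what lets the same stack be instantiated many times without collisions.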
| ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDdSnRT/tSeI6C1/B9oguf6YM9mk/OMJkPK3gT61RPhGneCrxIB5UOxZ3eo37SkeC+cLzimiUy6FZYFL+xr2Bj+ZGi4L5TGTwQaIsQobt6kY11ph3S+o7osg/5SlzgaFvLbHFt/3g2WCNR1baZ/jwJoErjQsb364tyeVrFn6+lnX62eAAol+ewJicvXrde9MUYz9kcCt8V9Ly1jgwHme46ikSUqYbV+f5H3ijm4MZTvk5lTDg2uWo6awM4SMHfwDqz0ktk8Y1rsLqihfWB8cmBBavCqNrHckiZMgx4fZUY3mB1PbYSIVl0qc/zgKXMC9trjV2jqckoehAF3XBVhwd8z farley@thedragon2 |
### Overview ###

In order to replicate from an AWS RDS database to an external server, you need three components, and there are two things to keep in mind:

**Components**

* RDS Master DB - `rds-master`
* RDS Read-Only Slave - `rds-slave`
* External DB Server - `external-db`

**Two Things to Keep in Mind**

* This process is fairly brittle and not fully supported by AWS except for temporary data extraction.
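The hand-off between these components ultimately comes down to pointing `external-db` at binlog coordinates captured from the replica after restoring its dump. A hedged sketch of just the statement construction (host, user, and coordinate values are placeholders; in practice the coordinates come from `SHOW SLAVE STATUS` on `rds-slave`):

```python
def change_master_sql(host, user, password, log_file, log_pos):
    """Build the MySQL CHANGE MASTER TO statement to run on external-db
    after restoring a dump taken from the read-only replica."""
    return (
        "CHANGE MASTER TO "
        "MASTER_HOST='%s', MASTER_USER='%s', MASTER_PASSWORD='%s', "
        "MASTER_LOG_FILE='%s', MASTER_LOG_POS=%d;"
        % (host, user, password, log_file, log_pos)
    )
```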
-- The model name
local modelName = "Unknown"

-- I'm using 8 NiMH Batteries, which is 1.1v low, and ~1.325v high
local lowVoltage = 6.6
local currentVoltage = 8.4
local highVoltage = 8.4

-- For our timer tracking
local timerLeft = 0
local maxTimerValue = 0
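The low/high thresholds above map the measured pack voltage onto a remaining-charge percentage. The usual linear interpolation, clamped to 0-100, can be sketched as follows (Python rather than Lua, purely for illustration; a real NiMH discharge curve is not linear, so this is an approximation):

```python
def battery_percent(voltage, low=6.6, high=8.4):
    """Linearly map a pack voltage onto 0-100%, clamped at both ends.

    Defaults mirror lowVoltage/highVoltage above; assumes high > low.
    """
    if high <= low:
        raise ValueError("high voltage must exceed low voltage")
    pct = (voltage - low) / (high - low) * 100.0
    return max(0.0, min(100.0, pct))
```

Clamping matters because a freshly charged pack can briefly read above `highVoltage`, and a sagging pack under load can dip below `lowVoltage`.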
{
  "AWSAccountActivityAccess": {
    "Arn": "arn:aws:iam::aws:policy/AWSAccountActivityAccess",
    "AttachmentCount": 0,
    "CreateDate": "2015-02-06T18:41:18+00:00",
    "DefaultVersionId": "v1",
    "Document": {
      "Statement": [
        {
          "Action": [