Use this script to delete all object versions in an S3 bucket. For buckets with very large numbers of object versions, it may be more efficient to use Object Lifecycle Management to expire versions instead.
python delete-object-versions.py (s3-bucket-name)
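The script body is not reproduced here. As a minimal sketch of the approach (not necessarily the gist's exact code), boto3's resource API can remove every object version and delete marker in a few lines:

import sys
import boto3

# Deleting all versions and delete markers empties a versioned bucket
bucket = boto3.resource('s3').Bucket(sys.argv[1])
bucket.object_versions.delete()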
import botocore.waiter
# Custom waiter for CloudFormation stack set operations; the acceptors are
# reconstructed from the DescribeStackSetOperation response (StackSetOperation.Status)
_model = botocore.waiter.WaiterModel({
    'version': 2,
    'waiters': {
        'StacksetOpComplete': {
            'delay': 30,
            'operation': 'DescribeStackSetOperation',
            'maxAttempts': 50,
            'acceptors': [
                {'matcher': 'path', 'argument': 'StackSetOperation.Status', 'expected': 'SUCCEEDED', 'state': 'success'},
                {'matcher': 'path', 'argument': 'StackSetOperation.Status', 'expected': 'FAILED', 'state': 'failure'},
                {'matcher': 'path', 'argument': 'StackSetOperation.Status', 'expected': 'STOPPED', 'state': 'failure'},
            ],
        }
    }
})
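A waiter defined this way can be bound to a CloudFormation client with botocore's create_waiter_with_client; a short usage sketch (the stack set name and operation ID are illustrative):

import boto3
import botocore.waiter

client = boto3.client('cloudformation')
waiter = botocore.waiter.create_waiter_with_client('StacksetOpComplete', _model, client)
waiter.wait(StackSetName='example-stack-set', OperationId='example-operation-id')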
// Browserified UMD wrapper omitted; the bundle exposes this module as a global named "poisson".
var Process = require('./lib/Process');
// Raw sampling function
exports.sample = require('./lib/sample');
// Semantic version, useful for inspection when the installed version matters
// (truncated in the source; the export below follows the module's require pattern)
exports.version = require('./lib/version');
#!/usr/bin/env python
import requests
from nltk import download, pos_tag, word_tokenize

# Fetch the tokenizer and tagger models on first run
download('punkt')
download('averaged_perceptron_tagger')

# Fetch a sample text, tokenize it, and tag each token's part of speech
response = requests.get('http://www.textfiles.com/food/food')
text = word_tokenize(response.text)
print(pos_tag(text))
I may have let Arch go months without updates before. It's not advisable to apply them on a blind schedule either.
Providing regular prompts to apply updates seems like the best solution.
This cron job downloads packages and caches them locally so the interactive upgrade itself is quick. Put it into root's crontab or run it via sudo.
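A minimal sketch of such a cron entry (the schedule is an assumption); pacman's -Syuw combination refreshes the package databases and downloads upgradable packages without installing them:

# Download (but do not install) available updates every day at 06:00
0 6 * * * /usr/bin/pacman -Syuw --noconfirm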
An apex (or root) record of a zone cannot be a CNAME, per the DNS specification. This has implications for domains that try to integrate with virtually any cloud platform (AWS, Azure, Heroku, etc.). A common pattern to mitigate this is an HTTP 301 redirect from the apex record to a subdomain (usually www.domain.com).
On the other hand, using a single cloud provider for both DNS and the underlying infrastructure may offer a proprietary way around the limitation. For example, AWS provides a Route 53 alias record, which can be used at the zone apex.
To gauge the prevalence of the apex redirect pattern, this script tests the top 1,000 domains by Alexa rank (as of 3/11/16), as sketched below.
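The core of such a test is requesting each apex over HTTP without following redirects and checking for a 301 to a www subdomain. A minimal sketch, assuming a local alexa-top-1000.txt file with one domain per line (the filename and input format are assumptions):

import requests

with open('alexa-top-1000.txt') as f:
    domains = [line.strip() for line in f if line.strip()]

for domain in domains:
    try:
        # Disable redirects so the apex's own response can be inspected
        response = requests.get('http://' + domain, allow_redirects=False, timeout=5)
    except requests.RequestException:
        continue
    location = response.headers.get('Location', '')
    if response.status_code == 301 and '://www.' in location:
        print('%s redirects to www' % domain)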
Per http://docs.ansible.com/ansible/playbooks_vault.html, you can set ANSIBLE_VAULT_PASSWORD_FILE to point at a password file (or executable) for vault access. We can use this to keep the password in an environment variable instead of a plain file on disk.
Copy vault-env from this project to ~/bin. Then add this to your ~/.bashrc:

export ANSIBLE_VAULT_PASSWORD_FILE=~/bin/vault-env
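The vault-env script itself is part of the project and not shown here; a minimal sketch of the idea is an executable that prints a password held in an environment variable (the variable name below is an assumption, not necessarily what the project uses):

#!/bin/sh
# Print the vault password from the environment; Ansible executes this
# file and reads its stdout because the file is marked executable.
echo "$ANSIBLE_VAULT_PASSWORD"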
#!/usr/bin/env python
#
# Edit ns.conf appropriately and pass to this script:
# python ns2r53.py <hosted-zone-id> <ns.conf>
from datetime import datetime
import sys
import boto3
import botocore
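Only the header of ns2r53.py appears above. The core of such a conversion is one change_resource_record_sets call per parsed record; a minimal sketch, assuming ns.conf lines of the form "name type value" (the real file format may differ):

import sys
import boto3

zone_id, conf_path = sys.argv[1], sys.argv[2]
client = boto3.client('route53')

with open(conf_path) as f:
    for line in f:
        if not line.strip() or line.startswith('#'):
            continue
        name, rtype, value = line.split()
        # UPSERT creates the record or replaces an existing one
        client.change_resource_record_sets(
            HostedZoneId=zone_id,
            ChangeBatch={'Changes': [{
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': name,
                    'Type': rtype,
                    'TTL': 300,
                    'ResourceRecords': [{'Value': value}],
                },
            }]},
        )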