Create a websites user.
sudo adduser --disabled-password --home /src/websites websites
Log in as that user and install Dropbox.
sudo -s su - websites
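A rough sketch of the headless install, assuming the standard 64-bit Linux tarball from Dropbox (the download URL is Dropbox's usual headless-client link; adjust for 32-bit):

# As the websites user, fetch and unpack the headless Dropbox client
cd ~
wget -O - "https://www.dropbox.com/download?plat=lnx.x86_64" | tar xzf -
# The first run prints a link URL to visit so the machine can be attached to an account
~/.dropbox-dist/dropboxd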
{
  "vars": {
    "@gray-base": "#000",
    "@gray-darker": "lighten(@gray-base, 13.5%)",
    "@gray-dark": "lighten(@gray-base, 20%)",
    "@gray": "lighten(@gray-base, 33.5%)",
    "@gray-light": "lighten(@gray-base, 46.7%)",
    "@gray-lighter": "lighten(@gray-base, 93.5%)",
    "@brand-primary": "darken(#428bca, 6.5%)",
    "@brand-success": "#5cb85c",
# Author: Aram Grigorian <[email protected]>
# https://github.com/aramg
# https://github.com/opendns
#
# By default, nginx will close upstream connections after every request.
# The upstream-keepalive module tries to remedy this by keeping a certain minimum number of
# persistent connections open at all times to upstreams. These connections are re-used for
# all requests, regardless of downstream connection source. There are options available
# for load balancing clients to the same upstreams more consistently.
# This is all designed around the reverse proxy case, which is nginx's main purpose.
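For reference, a minimal sketch of what upstream keepalive configuration typically looks like in stock nginx; the backend address, port, and connection count below are made up for illustration:

upstream backend {
    server 127.0.0.1:8080;
    # Keep up to 32 idle connections to the upstream cached per worker process
    keepalive 32;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # Upstream keepalive needs HTTP/1.1 and a cleared Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}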
# You don't need Fog in Ruby or some other library to upload to S3 -- shell works perfectly fine
# This is how I upload my new Sol Trader builds (http://soltrader.net)
# Based on a modified script from here: http://tmont.com/blargh/2014/1/uploading-to-s3-in-bash
S3KEY="my aws key"
S3SECRET="my aws secret" # pass these in
function putS3
{
  path=$1
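  # The snippet is truncated above; the rest of the function, reconstructed as a
  # hedged sketch from the tmont.com post it cites (AWS Signature Version 2).
  # The bucket name and content type below are illustrative guesses, not the original values.
  file=$2
  aws_path=$3
  bucket="my-bucket"
  content_type="application/octet-stream"
  date=$(date -R)
  resource="/${bucket}/${aws_path}${file}"
  # Sign the canonical request string with HMAC-SHA1 using the secret key
  string_to_sign="PUT\n\n${content_type}\n${date}\n${resource}"
  signature=$(echo -en "${string_to_sign}" | openssl sha1 -hmac "${S3SECRET}" -binary | base64)
  curl -X PUT -T "${path}/${file}" \
    -H "Host: ${bucket}.s3.amazonaws.com" \
    -H "Date: ${date}" \
    -H "Content-Type: ${content_type}" \
    -H "Authorization: AWS ${S3KEY}:${signature}" \
    "https://${bucket}.s3.amazonaws.com/${aws_path}${file}"
}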
# zonecfg -z <uuid>
# add attr
# set name=qemu-extra-opts
# set type=string
# set value="LXNtcCBjcHVzPTEsY29yZXM9NCx0aHJlYWRzPTI="
# end
# commit
# exit
Then reboot the machine. The value is the base64-encoded string that will be added to the qemu-kvm options. The value above decodes to "-smp cpus=1,cores=4,threads=2", which plays nicely with Windows, since for some stupid reason it only supports 2 CPUs.
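The encoded value can be generated (or verified) with base64; depending on the platform the decode flag is -d or -D:

# Encode the qemu options; -n keeps a trailing newline out of the encoding
echo -n '-smp cpus=1,cores=4,threads=2' | base64
# -> LXNtcCBjcHVzPTEsY29yZXM9NCx0aHJlYWRzPTI=
# Decode to double-check what the zone will pass to qemu-kvm
echo 'LXNtcCBjcHVzPTEsY29yZXM9NCx0aHJlYWRzPTI=' | base64 -d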
Host github.com
    ProxyCommand ssh -qxT <ssh server you have access to> nc %h %p
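With that stanza in ~/.ssh/config, git's ssh traffic to github.com is tunneled through the proxy host. A quick way to confirm it works:

# GitHub prints a greeting and closes the connection; no shell is expected
ssh -T [email protected]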
"Good hardware costs money, and your time has value. The ditches on either | |
side of the road through the land of hardware are steep and home to | |
manifold ferocious beasts. Stray from this golden path at your peril." | |
-- Keith Wesolowski | |
http://www.listbox.com/member/archive/184463/2013/02/sort/time_rev/page/1/entry/0:156/20130218134633:82C0ABBC-79FB-11E2-B214-A90A0365DAE4/ |
max_connections = 1500                  # (change requires restart)
shared_buffers = 12000MB                # min 128kB, based on 80GB RAM DB
temp_buffers = 8MB                      # min 800kB
work_mem = 64MB                         # min 64kB
maintenance_work_mem = 1500MB           # min 1MB
wal_level = hot_standby                 # minimal, archive, or hot_standby
checkpoint_segments = 256               # in logfile segments, min 1, 16MB each
checkpoint_completion_target = 0.9      # checkpoint target duration, 0.0 - 1.0
max_wal_senders = 6                     # max number of walsender processes
#!/bin/bash
# Simple Ad Hoc Carbon Cache Service
#
# put in /opt/custom/share/svc/carbon-cache.sh
set -o xtrace
. /lib/svc/share/smf_include.sh
cd /
PATH=/usr/sbin:/usr/bin:/opt/custom/bin:/opt/custom/sbin; export PATH |
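# The script is truncated above; a rough sketch of how such an SMF start/stop
# method usually continues. The carbon-cache.py location below is an assumption
# about this particular setup, not taken from the original.
case "$1" in
'start')
    # Launch carbon-cache in the background; SMF tracks it via its process contract
    /opt/local/bin/python /opt/graphite/bin/carbon-cache.py start &
    ;;
'stop')
    /opt/local/bin/python /opt/graphite/bin/carbon-cache.py stop
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit $SMF_EXIT_ERR_CONFIG
    ;;
esac

exit $SMF_EXIT_OK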
class Chef
  module Mixin
    module ConsumeJson
      def consume_json(url)
        Chef::Log.debug "consume_json: requesting url: #{url}"
        info = nil
        fetch(url) do |data|
          info = JSON.parse(data)
          Chef::Log.debug "consume_json: parsed: #{info.inspect}"