Most Unix/Linux systems come with Python pre-installed:
$ python -V
#!/usr/bin/env python
# Print the AWS account ID of the current credentials using the STS
# GetCallerIdentity call (boto 2.x).
import boto
import boto.jsonresponse

conn = boto.connect_sts()
e = boto.jsonresponse.Element()
boto.jsonresponse.XmlHandler(e, conn).parse(
    conn.make_request('GetCallerIdentity', {}, '/', 'POST').read())
print(e['GetCallerIdentityResponse']['GetCallerIdentityResult']['Account'])
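If you have the newer boto3 library installed instead of boto 2.x, the same lookup is a single call; this is a minimal sketch using boto3's STS client, which exposes GetCallerIdentity directly:

#!/usr/bin/env python
# GetCallerIdentity returns the Account, Arn and UserId of the caller.
import boto3

print(boto3.client('sts').get_caller_identity()['Account'])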
import argparse
import json
import time

import boto.ec2

# Set up argument parser
parser = argparse.ArgumentParser(
    description='Request AWS EC2 spot instance and tag instance and volumes.',
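The excerpt above is only the top of the script. As a rough sketch of what the spot-request portion of such a script typically looks like with boto 2.x (the region, price, AMI, instance type and tag values below are placeholders, not the original script's), the flow is: request, poll until the request is active, then tag the instance:

import time

import boto.ec2

# Placeholder values; in the real script these would come from argparse.
conn = boto.ec2.connect_to_region('us-east-1')
requests = conn.request_spot_instances(price='0.10',
                                       image_id='ami-12345678',
                                       count=1,
                                       instance_type='m3.medium')
request_id = requests[0].id

# Poll until the spot request is fulfilled and an instance exists.
while True:
    req = conn.get_all_spot_instance_requests(request_ids=[request_id])[0]
    if req.state == 'active':
        break
    time.sleep(10)

# Tag the instance; its volumes can be found and tagged the same way via
# conn.get_all_volumes(filters={'attachment.instance-id': req.instance_id}).
conn.create_tags([req.instance_id], {'Name': 'spot-worker'})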
server {
    listen 80;
    listen 443 default_server ssl;
    ssl_certificate /etc/ssl/certs/myssl.crt;
    ssl_certificate_key /etc/ssl/private/myssl.key;
    server_name *.example.com;
    root /var/www/vhosts/website;
#!/bin/bash
# Rsync based file backup script, with hard-linking enabled.
#
# Runtime options:
#   -n - dry-run; do nothing, just display what is going to happen on a real run
#   -v - verbose output; print out what is being backed up
#
# set to your rsync location
For this configuration you can use any web server you like; I decided on nginx because it is what I work with most. A properly configured nginx setup can handle roughly 400K to 500K requests per second when clustered; the most I have personally seen is 50K to 80K requests per second on a single (non-clustered) machine at about 30% CPU load. That was on a box with two Intel Xeon CPUs with Hyper-Threading enabled, but nginx works without problems on slower machines as well.
Keep in mind that this configuration is used in a testing environment, not in production, so you will need to work out how best to implement each of these features for your own servers.
I wanted to be able to explain, in a fair amount of detail, how the program :command:`ls` actually works, right from the moment you type the command name and hit ENTER. What goes on in user space and what goes on in kernel space? This is my attempt, and what I have learned so far, on Linux (Fedora 19, 3.x kernel).
How does the shell find the location of :command:`ls`?
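Before anything else, the shell checks its aliases, shell functions, built-ins and command hash table; only if none of those match does it walk the directories listed in $PATH, in order, looking for an executable file named ls. As a rough illustration of that last step, here is a small Python sketch (the helper name and the final print are mine, not part of any shell):

import os

def which(cmd):
    # Search the directories in $PATH, in order, for an executable file
    # named cmd; roughly what the shell does for a command with no slash in it.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

print(which("ls"))

On Fedora this typically prints /usr/bin/ls. Once the shell has the full path, it fork()s and the child calls execve() on that path, which is where the kernel-space part of the story begins.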
[unix_http_server]
file=/tmp/supervisor.sock ; path to your socket file

[supervisord]
logfile=/var/log/supervisord/supervisord.log ; supervisord log file
logfile_maxbytes=50MB ; maximum size of logfile before rotation
logfile_backups=10 ; number of backed-up logfiles
loglevel=error ; info, debug, warn, trace
pidfile=/var/run/supervisord.pid ; pidfile location
nodaemon=false ; run supervisord as a daemon
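The [unix_http_server] socket above is what supervisorctl (and any other XML-RPC client) uses to talk to supervisord. Assuming the full config also contains the standard [rpcinterface:supervisor] section, a minimal Python sketch that queries supervisord over that socket looks like this:

import xmlrpclib

from supervisor.xmlrpc import SupervisorTransport

# Connect over the unix socket defined in [unix_http_server] above.
transport = SupervisorTransport(None, None, 'unix:///tmp/supervisor.sock')
server = xmlrpclib.ServerProxy('http://127.0.0.1', transport=transport)

print(server.supervisor.getState())
for proc in server.supervisor.getAllProcessInfo():
    print('%s %s' % (proc['name'], proc['statename']))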
#!/bin/sh
TABLE_SCHEMA=$1
TABLE_NAME=$2
mytime=`date '+%y%m%d%H%M'`
hostname=`hostname | tr 'A-Z' 'a-z'`
file_prefix="trimax$TABLE_NAME$mytime$TABLE_SCHEMA"
bucket_name=$file_prefix
splitat="4000000000"
bulkfiles=200
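The excerpt stops before the actual work. Judging from the variables, the rest of such a script dumps the table, splits the dump at $splitat bytes (about 4 GB), and uploads the pieces to an S3 bucket named after $file_prefix. Here is a rough sketch of those steps in Python with boto (the library already used earlier); every name, flag and path in it is illustrative rather than the original script's:

import glob
import subprocess
import sys

import boto

table_schema, table_name = sys.argv[1], sys.argv[2]
prefix = 'trimax%s' % table_name
splitat = '4000000000'   # chunk size in bytes, roughly 4 GB

# Dump the table and split the stream into fixed-size chunk files.
dump = subprocess.Popen(
    ['mysqldump', '--single-transaction', table_schema, table_name],
    stdout=subprocess.PIPE)
subprocess.check_call(['split', '-b', splitat, '-', prefix + '.part.'],
                      stdin=dump.stdout)
dump.stdout.close()
dump.wait()

# Create a bucket (S3 bucket names must be lowercase) and upload every chunk.
s3 = boto.connect_s3()
bucket = s3.create_bucket(prefix.lower())
for path in glob.glob(prefix + '.part.*'):
    bucket.new_key(path).set_contents_from_filename(path)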
# Set the system clock from the Date: header that google.com returns
# (wget -S prints the response headers; sed pulls out the Date: value).
date -s "$(wget --no-cache -S -O /dev/null google.com 2>&1 | sed -n -e '/ *Date: */ {' -e s///p -e q -e '}')"