Venkata Pingali (pingali)
http://www.cloudiquity.com/2009/02/securing-distributed-applications-on-ec2/
* The default mode is to deny access; you have to explicitly open ports to allow inbound network traffic.
* If no security group is specified, a special default group is assigned to the instance. This group allows all network traffic from other members of the group and discards traffic from other IP addresses and groups. You can change the settings for this group.
* You can assign multiple security groups to an AMI instance.
* The security groups for an instance are set at launch time and cannot be changed. You can dynamically modify the rules in a security group, and the new rules are automatically enforced for all running and future instances; there may be a small delay depending on the number of instances.
* You can control access either by named security group or by source IP address range. You can specify the protocol (TCP, UDP, or ICMP) and the individual ports or port range to open (see the right_aws sketch below).
http://ec2dream.blogspot.com/search/label/Networking%20Multi-Tier%20Applications
#!/usr/bin/ruby
require 'rubygems'
require 'right_aws'
AMAZON_PUBLIC_KEY  = "<public key>"
AMAZON_PRIVATE_KEY = "<private key>"
#
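# What follows is a hedged sketch of driving the security-group rules from the
# notes above through right_aws; the group names, ports, CIDR range, AMI id,
# account id and keypair are illustrative placeholders, and the calls assume
# right_aws's RightAws::Ec2 interface.
ec2 = RightAws::Ec2.new(AMAZON_PUBLIC_KEY, AMAZON_PRIVATE_KEY)

# Groups default to deny-all, so each inbound port must be opened explicitly.
ec2.create_security_group("web-tier", "Front-end web servers")
ec2.authorize_security_group_IP_ingress("web-tier", 80, 80, "tcp", "0.0.0.0/0")
ec2.authorize_security_group_IP_ingress("web-tier", 443, 443, "tcp", "0.0.0.0/0")

# A named-group rule lets one tier reach another without fixed IP addresses.
ec2.create_security_group("db-tier", "Database servers")
ec2.authorize_security_group_named_ingress("db-tier", "<aws-account-id>", "web-tier")

# Groups are attached at launch time and cannot be changed afterwards,
# so they are passed to run_instances up front.
ec2.run_instances("<ami-id>", 1, 1, ["web-tier"], "<keypair-name>")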
(2:58:43 PM) mikewadhera_: this master instance -- is this an HAProxy LB into other instances?
(2:58:51 PM) auser: has_package :name => "memcached" == has_package "memcached"
(2:59:15 PM) mikewadhera_: ok good to know
(2:59:31 PM) auser: not any more... that's where the DNS round-robin comes into play
(2:59:54 PM) auser: we're on that this week too
(2:59:56 PM) auser: although
(3:00:00 PM) auser: you can add haproxy in
(3:00:02 PM) auser: and
(3:00:55 PM) auser: if you have has_variable :name => "node_ips", :value => %x[cloud-list].split("\n").map {|a| a[1] }, then you can access that in a chef template with
(3:01:01 PM) auser: @node[:poolparty][:node_ips]
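A rough sketch of how the has_variable idiom from the chat might appear in a clouds.rb, and how a Chef ERB template could consume it to build an haproxy backend list (the haproxy use, the parsing of cloud-list output, and the exact PoolParty DSL details are assumptions for illustration):

pool :app do
  cloud :web do
    instances 2..5
    # Collect the running node IPs and expose them to Chef
    # under @node[:poolparty][:node_ips], as described in the chat.
    has_variable :name  => "node_ips",
                 :value => %x[cloud-list].split("\n").map { |line| line.split(" ")[1] }
  end
end

# In an ERB template rendered by Chef the list is then available as, e.g.:
#   <% @node[:poolparty][:node_ips].each do |ip| %>
#   server app_<%= ip %> <%= ip %>:8000 check
#   <% end %>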
#########
# Cloud configuration
.
./test
./test/clouds.rb
./test/plugins
pool :test do
  cloud :app do
    instances 2..5   # keep between 2 and 5 instances running
  end
end
# Following Paul Dowman's instructions on setting up the
# mailserver
# http://pauldowman.com/2008/02/17/smtp-mail-from-ec2-web-server-setup/
myhostname = <hostname-name>
mydomain = <domain-name>
myorigin = $mydomain
smtpd_banner = $myhostname ESMTP $mail_name
# MySQL replication manager
# http://matt.simerson.net/computing/sql/mrm/mysql_replicate_manager.pl
#!/usr/bin/perl -w
use strict;
=head1 NAME
mysql_replicate_manager.pl - Mysql Replication Manager
Restoring MySQL replication with corrupted binlogs!
Posted on December 30th, 2005 by Basil
Major assumption: you have a safe but halted replica you want to restore from.
Situation: your replication server is down, or the MySQL slave has simply stopped, possibly because the hard drive filled up and your binlogs got corrupted beyond repair. Some might have the luxury of taking a fresh snapshot from the main DB server, let's call it the Master DB server. But what if you can't afford a read lock on that server? (Which is most of the time for production servers with decent traffic.) Well, since you planned ahead, you should have multiple replication servers running; they help reduce your read load with the round-robin method anyway.
Here are the things I did to bring our primary MySQL replication slave up to date using the planned, redundant replication server, the secondary slave.
So, on a regular day, the Master DB replicates its binlog to the primary slave, and it also replicates to the secondary slave.
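A generic sketch of one common way to rebuild a broken slave from a healthy sibling, written here in Ruby shelling out to the mysql and mysqldump command-line clients; the host names, credentials and exact sequence are illustrative assumptions, not necessarily the author's procedure.

#!/usr/bin/ruby
SECONDARY = "-h secondary-slave -u repl_admin -pSECRET"   # illustrative connection flags
PRIMARY   = "-h primary-slave -u repl_admin -pSECRET"

# Freeze the healthy secondary slave so its data stops at a known binlog position.
system(%Q{mysql #{SECONDARY} -e "STOP SLAVE SQL_THREAD"})

# Record how far the secondary has executed against the master's binlog.
status = `mysql #{SECONDARY} -e "SHOW SLAVE STATUS\\G"`
file = status[/Relay_Master_Log_File:\s*(\S+)/, 1]
pos  = status[/Exec_Master_Log_Pos:\s*(\d+)/, 1]

# Copy the data across, point the broken primary slave at the master using
# the coordinates captured above, and let both slaves run again.
system("mysqldump #{SECONDARY} --all-databases | mysql #{PRIMARY}")
system(%Q{mysql #{PRIMARY} -e "CHANGE MASTER TO MASTER_LOG_FILE='#{file}', MASTER_LOG_POS=#{pos}; START SLAVE"})
system(%Q{mysql #{SECONDARY} -e "START SLAVE SQL_THREAD"})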
# http://stackoverflow.com/questions/431025/mysql-replication-one-website-many-servers-different-continents
Don't use master-master replication, ever. There is no mechanism for resolving conflicts. If you try to write to both masters at the same time (or write to one master before it has caught up with changes you previously wrote to the other one), then you will end up with a broken replication scenario. The service won't stop; the two masters will just drift further and further apart, making reconciliation impossible.
Don't use MySQL replication without some well-designed monitoring to check that it's working correctly. Don't assume that because you've configured it correctly initially it'll either keep working or stay in sync.
DO have a well-documented, well-tested procedure for recovering slaves from being out of sync or stopped. Have a similarly documented procedure for installing a new slave from scratch.
Your application may need sufficient intelligence to know that a slave is out of sync or stopped, and that it should stop using that slave (falling back to the master or another replica) until it has been repaired.
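For the monitoring point above, a minimal check could poll SHOW SLAVE STATUS on each replica and complain when the replication threads have stopped or the lag grows; the sketch below uses the mysql2 gem, and the host list, credentials and lag threshold are illustrative assumptions.

#!/usr/bin/ruby
require 'rubygems'
require 'mysql2'   # assumed client library; any MySQL binding would do

SLAVES  = %w[slave1.example.com slave2.example.com]   # illustrative hosts
MAX_LAG = 60                                          # seconds, arbitrary threshold

SLAVES.each do |host|
  client = Mysql2::Client.new(:host => host, :username => "monitor", :password => "SECRET")
  status = client.query("SHOW SLAVE STATUS").first

  if status.nil?
    puts "#{host}: not configured as a slave"
  elsif status["Slave_IO_Running"] != "Yes" || status["Slave_SQL_Running"] != "Yes"
    puts "#{host}: replication stopped (#{status['Last_Error']})"
  elsif status["Seconds_Behind_Master"].nil? || status["Seconds_Behind_Master"] > MAX_LAG
    puts "#{host}: lag unknown or above #{MAX_LAG}s"
  else
    puts "#{host}: ok, #{status['Seconds_Behind_Master']}s behind"
  end
end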
#!/bin/sh
# Backup MySQL binary logs
# Developed in: bash. Contributed by: Partha Dutta
# http://forge.mysql.com/tools/search.php?page=7
# mysqlbinlogbackup - backup binary logs
slave=
slaves=
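One way a binary-log backup along these lines can work, sketched in Ruby rather than bash: flush to a new binlog, then copy every closed log into the backup area. The data directory, backup path, credentials and the skip-the-active-log rule are illustrative assumptions, not necessarily how the contributed tool behaves.

#!/usr/bin/ruby
require 'fileutils'

DATADIR    = "/var/lib/mysql"    # illustrative paths
BACKUP_DIR = "/backup/binlogs"

# Rotate to a fresh binlog so every log we copy is closed and safe to archive.
system("mysqladmin -u backup -pSECRET flush-logs") or abort "flush-logs failed"

logs = Dir.glob(File.join(DATADIR, "*-bin.[0-9]*")).sort
logs.pop   # leave the newly opened, still-active log alone

logs.each do |log|
  dest = File.join(BACKUP_DIR, File.basename(log))
  FileUtils.cp(log, dest) unless File.exist?(dest)
end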