#!/usr/bin/python
"""
To use this to mimic the EC2 metadata service entirely, run it like:
# where 'eth0' is *some* interface. if i used 'lo:0' i got 5 second or so delays on response.
sudo ifconfig eth0:0 169.254.169.254 netmask 255.255.255.255
sudo ./mdserv 169.254.169.254:80
Then:
wget -q http://169.254.169.254/latest/meta-data/instance-id -O -; echo
curl --silent http://169.254.169.254/latest/meta-data/instance-id ; echo
You can find the MAC address for LAN1/eth0 (not the BMC MAC) via the SuperMicro IPMI interface by running the following command:
$ ipmitool -U $IPMI_USER -P $IPMI_PASS -H $IPMI_HOST raw 0x30 0x21 | tail -c 18
The eth0 MAC address will be output in this format:
00 25 90 f0 be ef
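If you want that raw hex in the usual colon-separated MAC notation, it can be reformatted with tr. This is a sketch using the sample output above as a stand-in, since no live ipmitool call is assumed:

```shell
# Stand-in for the raw ipmitool output above -- no IPMI access assumed here.
raw_mac="00 25 90 f0 be ef"
mac="$(echo "$raw_mac" | tr ' ' ':')"
echo "$mac"    # prints 00:25:90:f0:be:ef
```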
---
#
# Important required settings
#
# set haproxy to handle ssl offloading
haproxy_ssl: true
# configure the SSL certificates for haproxy
# these file paths are on the deployment host
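As a sketch of what that certificate configuration might look like (the variable names and paths here are assumptions; check the haproxy role's defaults for the exact variable names your version expects):

```yaml
# hypothetical example -- adjust paths to wherever the cert/key live
# on the deployment host
haproxy_user_ssl_cert: /etc/openstack_deploy/ssl/example.com.crt
haproxy_user_ssl_key: /etc/openstack_deploy/ssl/example.com.key
```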
---
- hosts: all
  gather_facts: no
  vars:
    string: "string"
    list:
      - item1
      - item2
    dict:
      key1: value1
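To show how each variable type is referenced, a minimal task using the debug module could be appended to the play above; the index and dotted lookups are standard Jinja2:

```yaml
  tasks:
    - name: Reference each variable type
      debug:
        msg: "{{ string }} / {{ list[1] }} / {{ dict.key1 }}"
```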
#!/usr/bin/env bash
# This is a simple script to do bulk operations on all projects we support
# Operation:
# The script clones project config from OpenStack infra then parses the gerrit
# projects for all of our known projects. Known projects are determined by the
# name using "openstack/openstack-ansible". Once all projects are discovered a
# string is built with the "<NAME>|<URL>" and printed. The script then clones
# all projects into the workspace and runs the ``bulk_function``. When complete
# the script commits the changes using the message provided and submits
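The discovery step described above can be sketched as a small filter. This is illustrative only: it uses an inline sample list instead of cloning openstack-infra's project config, and the URL base is an assumption:

```shell
# filter_projects: reads "org/name" lines on stdin and prints "<NAME>|<URL>"
# for known openstack-ansible projects only. The git.openstack.org base URL
# is an assumption for illustration.
filter_projects() {
  while read -r project; do
    case "$project" in
      openstack/openstack-ansible*)
        echo "${project##*/}|https://git.openstack.org/${project}"
        ;;
    esac
  done
}

# sample input -- the real script parses the gerrit projects list instead
printf 'openstack/openstack-ansible-os_nova\nopenstack/nova\n' | filter_projects
```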
cloud-init is absolute cancer. Its code is horrible. It has no documentation at all. It took me 5 fucking hours to figure out how to properly configure networking on recent cloud-init (Ubuntu 16.04 cloud image) with a local datasource. It's not mentioned anywhere that you need to provide dsmode: local. (But only if you need network-config; besides that everything is fine. Someone below noted that the -m flag does the same thing, good to know.) Of course nobody needs documentation for the network-config format either. (cloudinit/net/__init__.py is a protip, enjoy the feces dive.)
Oh, and by the way - no, it's not possible to provide network-config to uvt-kvm without patching shit.
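For anyone hitting the same wall, here is roughly what a NoCloud seed with the dsmode workaround looks like. The two sections below are separate files in the seed; the addresses and interface name are placeholders:

```yaml
# --- meta-data (NoCloud seed) -- dsmode: local is the undocumented part ---
instance-id: iid-local01
dsmode: local

# --- network-config (version 1 format) ---
version: 1
config:
  - type: physical
    name: eth0
    subnets:
      - type: static
        address: 192.168.1.10/24
        gateway: 192.168.1.1
```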
Assumption: You have two clusters, access to both, and a pool that exists in both clusters, and you wish to replicate some or all images in that pool to the other cluster.
Mirroring in both directions is required for Cinder to properly implement failover and failback.
Make sure you have the rbd-mirror package installed.
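The per-image setup then follows the usual rbd mirroring commands. A sketch, where the pool, image, and peer cluster names are placeholders (run the pool-level commands on both clusters for bidirectional mirroring):

```shell
# enable per-image mirroring mode on the pool
rbd mirror pool enable mypool image
# register the peer cluster (placeholder client/cluster names)
rbd mirror pool peer add mypool client.rbd-mirror@site-b
# journaling is required on each image you want mirrored
rbd feature enable mypool/myimage exclusive-lock,journaling
rbd mirror image enable mypool/myimage
```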
An instructional document by Robin H Johnson [email protected]. I wrote much of the staticsites functionality of Ceph-RGW during late 2015 and early 2016, based on an early prototype by Yehuda Sadeh (yehudasa). It was written for usage at Dreamhost, but developed in the open for community improvement.
It is fully functional as of Jewel v10.2.3 plus PR11280 (ceph/ceph#11280). Prior to that, neither the non-CNAME nor CNAME-to-service modes will function correctly.
These configuration files represent how to quickly set up RGW+HAProxy for staticsite serving. I've tried to make them more readable, without leaving out too many details. You are strongly recommended to run a separate RGW instance for staticsites, on a DIFFERENT outward-facing IP than your normal instance (and in fact, certain functionality is not supported without it).
In place of using HAProxy, you could run the second rgw instance on port 80,
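If you do keep HAProxy in front, the pairing can be as small as a frontend on the dedicated staticsites IP pointing at the second RGW instance. This is only an illustrative sketch; the IP, names, and backend port are placeholders (7480 being the default civetweb port in Jewel):

```
frontend staticsites_http
    bind 203.0.113.10:80
    default_backend rgw_staticsites

backend rgw_staticsites
    server rgw-static 127.0.0.1:7480 check
```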
#!/bin/bash
# do this on localhost (deployment host)
# ensure that there's a local ssh private key
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# now make sure that the public key is in the second host's authorized_keys
# then do a test ssh connection to make sure it works, and to add the host
# to known hosts
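The distribution and test steps in the comments above could be done like this, where infra1 is a placeholder for the second host:

```shell
# copy the public key into the second host's authorized_keys
ssh-copy-id root@infra1
# test the connection; this also adds the host key to known_hosts
ssh root@infra1 uptime
```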