- Download `docker-compose.yml` to a dir named `sentry`
- Change `SENTRY_SECRET_KEY` to a random 32-char string
- Run `docker-compose up -d`
- Run `docker-compose exec sentry sentry upgrade` to set up the database and create the admin user
- (Optional) Run `docker-compose exec sentry pip install sentry-slack` if you want the Slack plugin; this can also be done later
- Run `docker-compose restart sentry`
- Sentry is now running on public port 9000
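One way to produce a random 32-character value for `SENTRY_SECRET_KEY` is a quick Python one-off. This is a minimal sketch, not an official Sentry tool; any sufficiently random 32-char string works:

```python
import secrets
import string

# Build a random 32-character alphanumeric string suitable for a secret key.
alphabet = string.ascii_letters + string.digits
secret_key = "".join(secrets.choice(alphabet) for _ in range(32))
print(secret_key)
```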
This guide is unmaintained and was created for a specific workshop in 2017. It remains as a legacy reference. Use at your own risk.
Workshop Instructor:
- Lilly Ryan @attacus_au
This workshop is distributed under a CC BY-SA 4.0 license.
```ini
[pytest]
addopts = --reuse-db
DJANGO_SETTINGS_MODULE = settings.dev
python_files = tests.py test_*.py *_tests.py
```
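With `python_files` configured as above, pytest also collects files matching `*_tests.py`, not just the default `test_*.py`. A minimal module it would pick up (the filename `billing_tests.py` is a hypothetical example):

```python
# billing_tests.py -- collected because of the *_tests.py pattern.
def test_addition():
    assert 1 + 1 == 2
```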
On the 14th of October, HAProxy 1.6 was released! Among all the features announced, I am particularly interested in the ability to log the body of a request.
It wasn't straightforward for me to understand how to do that, which is why I'm blogging about it.
The relevant part can be found in the "Captures" section of the announcement; this is how I changed it to suit my needs:
# you need this so that HAProxy can access the request body
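For context, a sketch of what the surrounding frontend section can look like, based on the "Captures" section of the HAProxy 1.6 documentation (the frontend/backend names and the 40000-byte capture length are illustrative assumptions, not values from the announcement):

```
frontend www
    bind :80
    # you need this so that HAProxy can access the request body
    option http-buffer-request
    # reserve capture slot 0, sized for the largest body you expect
    declare capture request len 40000
    # copy (up to) the first 40000 bytes of the body into slot 0
    http-request capture req.body id 0
    default_backend app
```

The captured body can then be referenced from a custom `log-format` string via the `capture.req.hdr(0)` sample fetch.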
```jinja
{% for item in host_keys_hostname['results'] %}
{% for key in item['stdout_lines'] %}
{{ key }}
{% endfor %}
{% endfor %}
{% for item in host_keys_ip['results'] %}
{% for key in item['stdout_lines'] %}
{{ key }}
{% endfor %}
{% endfor %}
```
```yaml
---
- hosts: all
  become: true
  tasks:
    - name: scan and register
      command: "ssh-keyscan {{ item }}"
      register: host_keys
      changed_when: false
      with_items: "{{ groups.all }}"
      delegate_to: localhost
```
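The templates above just flatten each registered result's `stdout_lines` into one key per line. A minimal Python sketch of that flattening, using hypothetical data shaped like Ansible's registered `results` list:

```python
# Hypothetical structure mirroring an Ansible register over with_items:
# each entry carries the stdout_lines of one ssh-keyscan run.
host_keys_hostname = {
    "results": [
        {"stdout_lines": ["web1 ssh-ed25519 AAAA...", "web1 ssh-rsa AAAA..."]},
        {"stdout_lines": ["db1 ssh-ed25519 AAAA..."]},
    ]
}

# Equivalent of the nested {% for %} loops in the template.
known_hosts_lines = [
    key
    for item in host_keys_hostname["results"]
    for key in item["stdout_lines"]
]
print("\n".join(known_hosts_lines))
```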
```sh
# not really needed anymore as consul supports the ability to filter our EC2 instances now
# create initial consul server config sans server IPs
cat <<EOF > /tmp/consul_config.json
{
  "datacenter": "${dc}",
  "retry_join": [
  ]
}
EOF
```
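The same config can be produced with a small script instead of a heredoc; a sketch, where the `dc` value stands in for whatever the surrounding provisioning passes via `${dc}`:

```python
import json

dc = "us-east-1"  # stand-in for the ${dc} shell variable above

# Same shape as the heredoc: retry_join starts empty and is filled in later.
config = {
    "datacenter": dc,
    "retry_join": [],
}
config_json = json.dumps(config, indent=2)
print(config_json)
```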
To be clear, we continue to run many Redis services in our production environment. It’s a great tool for prototyping and small workloads. For our use case, however, we believe the cost and complexity of our setup justify urgently finding alternate solutions.
- Each of our Redis servers is clearly numbered, with the current leader in one availability zone and a follower in another zone.
- Each server runs ~16 individual Redis processes. This helps us utilize CPUs (as Redis is single-threaded), and it also means we only need an extra 1/16th of memory to safely perform a BGSAVE (due to copy-on-write), though in practice it’s closer to 1/8 because the processes aren’t always evenly balanced.
- Our leaders never run BGSAVE unless we’re bringing up a new slave, which is done carefully and manually. Since issues with a slave should not affect the leader, and new slave connections might trigger an unsafe BGSAVE on the leader, slave Redis processes are set to not automatically restart.
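The 1/16th figure follows from running 16 evenly sized processes per box: a BGSAVE forks a single process, and copy-on-write can at worst duplicate that one process's memory. A worked sketch of the arithmetic (the 64 GB box size is illustrative):

```python
total_memory_gb = 64  # illustrative server size
processes = 16        # Redis processes per server
per_process_gb = total_memory_gb / processes

# Worst case, copy-on-write during a BGSAVE duplicates one process's pages.
headroom_fraction = per_process_gb / total_memory_gb
print(headroom_fraction)  # 0.0625, i.e. 1/16 when perfectly balanced

# If balancing is uneven and the largest process holds twice its share,
# the needed headroom approaches 1/8.
uneven_fraction = (2 * per_process_gb) / total_memory_gb
print(uneven_fraction)  # 0.125
```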
This document details a simple RPM build flow pattern used to build and host RPM artifacts for open source projects. A visual summary of this flow is shown below.
To achieve this, multiple tools and services are used. Each service and its purpose in the flow is listed below.
Service | Purpose
--- | ---
GitHub | As is the most common use for GitHub, it holds the build source code. In this case we hold only the spec files and related source files; all other sources, including project binaries/sources, are retrieved at build time.
```ini
[Unit]
Description=sshuttle service, a permanent tunnel
After=network.target

[Service]
ExecStart=/usr/bin/sshuttle -r h4s@localhost:39111 0.0.0.0/0 --dns -D --pidfile=/var/run/sshuttle.pid -e 'ssh -i /home/h4s/.ssh/whtunnel2'
Restart=always
Type=forking
PIDFile=/var/run/sshuttle.pid
```