// $ dagger do -p azure_cli_with_auth_cache.cue run --log-format=plain
// 11:10AM INF actions.run.script._write | computing
// 11:10AM INF actions.auth._dag."0"._dag."0"._op | computing
// 11:10AM INF actions.auth._dag."1".script._write | computing
// 11:10AM INF actions.run.script._write | completed duration=0s
// 11:10AM INF actions.auth._dag."1".script._write | completed duration=100ms
// 11:10AM INF actions.auth._dag."0"._dag."0"._op | completed duration=100ms
// 11:10AM INF actions.auth._dag."0"._dag."1"._exec | computing
// 11:10AM INF actions.auth._dag."0"._dag."1"._exec | completed duration=0s
// 11:10AM INF actions.auth._dag."1"._exec | computing
@gerhard
gerhard / gist:cf763de79c05bc01f495740372a63e90
Created October 21, 2019 11:35
Failing to get deps from repo.hex.pm...
gmake[3]: Entering directory '/Users/gerhard/github.com/rabbitmq/20191003/deps/rabbitmq_cli'
GEN escript/rabbitmqctl
Failed to check for new Hex version
Failed to fetch record for 'hexpm/observer_cli' from registry (using cache)
{:failed_connect, [{:to_address, {'repo.hex.pm', 443}}, {:inet, [:inet], {:option, :server_only, :honor_cipher_order}}]}
{:failed_connect, [{:to_address, {'repo.hex.pm', 443}}, {:inet, [:inet], {:option, :server_only, :honor_cipher_order}}]}
Failed to fetch record for 'hexpm/csv' from registry (using cache)
{:failed_connect, [{:to_address, {'repo.hex.pm', 443}}, {:inet, [:inet], {:option, :server_only, :honor_cipher_order}}]}
Failed to fetch record for 'hexpm/x509' from registry (using cache)
{:failed_connect, [{:to_address, {'repo.hex.pm', 443}}, {:inet, [:inet], {:option, :server_only, :honor_cipher_order}}]}
gerhard / prometheus-monitoring-alerts.md
Last active October 16, 2019 12:11
Gabriele-Prometheus-Monitoring-Alerts

Prometheus Monitoring & Alerts

As of RabbitMQ 3.8.0, Prometheus metrics can be enabled natively; there is no need to run an external exporter. To enable native Prometheus metrics, set rabbitmqPrometheusPlugin.enabled to true. This exposes all RabbitMQ node metrics at the <<rabbitmqhost>>:15692/metrics URL. Since all metrics are node-local, they put minimal pressure on RabbitMQ and remain available for as long as the node is running, regardless of inter-node pressure or other nodes in the cluster going away.
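Outside of a chart-managed deployment, the same native metrics can be enabled directly on a node. A minimal sketch, assuming a local RabbitMQ 3.8.0+ installation with rabbitmq-plugins and curl on the PATH:

```shell
# enable the built-in Prometheus plugin (ships with RabbitMQ 3.8.0+)
rabbitmq-plugins enable rabbitmq_prometheus

# the node now serves metrics on port 15692; sample the first few lines
curl -s localhost:15692/metrics | head
```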

To learn more about RabbitMQ's native support for Prometheus, please refer to the official Monitoring with Prometheus & Grafana guide.

Team RabbitMQ manages Grafana dashboards that are meant to be used with the native Prometheus support. They are publicly available at grafana.com/orgs/rabbitmq.

To enable metrics via the traditional rabbitmq_exporter instead, set prometheus.enabled to true. See values.yaml for details.
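Side by side, the two toggles mentioned above might look like this in values.yaml (the key names are as given in the text; the surrounding chart structure is an assumption):

```yaml
# native Prometheus plugin (RabbitMQ 3.8.0+), serves metrics on :15692
rabbitmqPrometheusPlugin:
  enabled: true

# traditional external rabbitmq_exporter
prometheus:
  enabled: false
```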

In this post, we will cover the new feature flags subsystem, which is part of the upcoming RabbitMQ 3.8.0. Feature flags will allow a rolling cluster upgrade to the next minor version, without requiring all nodes to be stopped before upgrading.

Upgrading from RabbitMQ 3.6.x to 3.7.x

If you had to upgrade a cluster from RabbitMQ 3.6.x to 3.7.x, you probably used one of the following approaches:

  • Deploy a new cluster alongside the existing one (a.k.a. a blue-green deployment), then migrate data & clients to the new cluster
  • Stop all nodes in the existing cluster, upgrade the node that was stopped last first, then continue upgrading all the other nodes, one by one

Both approaches were painful because the steps involved were complex and error-prone. The new feature flags subsystem is meant to reduce this pain to a minimum.
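As a rough sketch of what a feature-flags-based rolling upgrade looks like (node names are hypothetical, and the exact upgrade step depends on how RabbitMQ is installed):

```shell
# on each node in turn: stop the app, upgrade the package, start, wait for it to rejoin
rabbitmqctl -n rabbit@node1 stop_app
# ... install the new RabbitMQ version on node1 ...
rabbitmqctl -n rabbit@node1 start_app
rabbitmqctl -n rabbit@node1 await_startup

# once every node runs the new version, turn on the new feature flags
rabbitmqctl list_feature_flags
rabbitmqctl enable_feature_flag all
```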

#!/usr/bin/env bash
docker-compose up --remove-orphans --detach
# wait until the rmq1 node reports at least 1 cluster node online
while ! docker-compose exec rmq1 rabbitmqctl await_online_nodes 1
do
  sleep 1
done
echo "SYSTEM HOSTNAME"
gerhard / beam.smp
Last active June 19, 2018 06:00
RabbitMQ 3.7.7-beta.1 on Erlang/OTP 21.0-rc.2
pgrep -a beam
18848 /var/vcap/packages/erlang-21.0-rc2/lib/erlang/erts-10.0/bin/beam.smp -W w -A 64 -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -P 1048576 -t 5000000 -stbt db -zdbbl 128000 -K true -stbt db -zdbbl 128000 -P 1048576 -t 5000000 -- -root /var/vcap/packages/erlang-21.0-rc2/lib/erlang -progname erl -- -home /home/vcap -- -kernel shell_history enabled -pa /var/vcap/jobs/rabbitmq-server/packages/rabbitmq-server/ebin -noshell -noinput -s rabbit boot -sname rabbit@rmq0-low-latency -boot start_sasl -config /var/vcap/jobs/rabbitmq-server/rabbitmq -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit lager_log_root "/var/vcap/sys/log/rabbitmq-server" -rabbit lager_default_file "/var/vcap/sys/log/rabbitmq-server/[email protected]" -rabbit lager_upgrade_file "/var/vcap/sys/log/rabbitmq-server/rabbit@rmq0-low-latency_upgrade.log" -rabbit enabled_plugins_file "/var/vcap/jobs/rabbitmq-server/packages/rabbitmq-server/e
goroutine 8129128 [running]:
runtime/pprof.writeGoroutineStacks(0xe81660, 0xc4201da680, 0xadf5c0, 0x30)
/var/vcap/data/packages/golang/a0931a026af46cc631482a5af1b91ca75e6d9f0c/src/runtime/pprof/pprof.go:585 +0x79
runtime/pprof.writeGoroutine(0xe81660, 0xc4201da680, 0x2, 0x0, 0xb2c4c0)
/var/vcap/data/packages/golang/a0931a026af46cc631482a5af1b91ca75e6d9f0c/src/runtime/pprof/pprof.go:574 +0x44
runtime/pprof.(*Profile).WriteTo(0x10beaa0, 0xe81660, 0xc4201da680, 0x2, 0xc4201da680, 0xc420ec3504)
/var/vcap/data/packages/golang/a0931a026af46cc631482a5af1b91ca75e6d9f0c/src/runtime/pprof/pprof.go:298 +0x341
net/http/pprof.handler.ServeHTTP(0xc420ec3511, 0x9, 0xe88fa0, 0xc4201da680, 0xc420ac8c30)
/var/vcap/data/packages/golang/a0931a026af46cc631482a5af1b91ca75e6d9f0c/src/net/http/pprof/pprof.go:209 +0x1a6
net/http/pprof.Index(0xe88fa0, 0xc4201da680, 0xc420ac8c30)
gerhard / bosh-ssh-into-the-statsdb-node-and-copy-paste-the-following.sh
Last active January 20, 2017 09:59
This will only work if they are aggregating logs to a central syslog server. Even though not ideal, Splunk will do.
# become root
sudo -i
# install script dependencies
apt-get install -y dstat jq
# in the background, run a shell process that will send system metrics to syslog every 30s
(dstat -clmdn --nocolor --tcp 30 | logger -t dstat) &
# in the background, run a shell process that will send ets table info to syslog every 30s
gerhard / sum.bash
Last active September 24, 2015 20:48
#!/bin/bash -e
# Sum the numeric values after the ":" in each "key:value" line.
sum=0
while read -r
do
  n="${REPLY#*:}"   # strip everything up to and including the first ":"
  (( sum += n ))
done < <( echo "a:1
b:2
c:3
d:4" )
echo "$sum"         # prints 10
#!/usr/bin/env bash
# enable shell tracing when DEBUG is set
[ -z "$DEBUG" ] || set -x
main() {
  resolve_dependencies
  # when invoked with a "watch" argument, run the tests continuously
  if [[ "$@" =~ watch ]]
  then
    autotest