# Some discussions on logging from Docker: Using Logstash, Using Papertrail
A lot of this boils down to whether you want a single- or multi-process (systemd, supervisord, etc.) container...
#!/bin/sh
# Converts a mysqldump file into a SQLite 3 compatible file. It also extracts the MySQL `KEY xxxxx`
# statements from the CREATE block and creates them in separate commands _after_ all the INSERTs.
# Awk is chosen because it's fast and portable. You can use gawk, original awk or even the lightning-fast mawk.
# The mysqldump file is traversed only once.
# Usage: $ ./mysql2sqlite mysqldump-opts db-name | sqlite3 database.sqlite
# Example: $ ./mysql2sqlite --no-data -u root -pMySecretPassWord myDbase | sqlite3 database.sqlite
# vim: ft=sh:ts=4:sw=4:autoindent:expandtab:
# Author: Avishai Ish-Shalom <[email protected]>

# We need GNU sed on OS X, BSDs, etc., where the system sed lacks the GNU extensions this
# script relies on. (POSIX `[ ... ]` is used instead of bash's `[[ ... ]]` so the test
# actually works under the #!/bin/sh shebang above.)
if [ "$(uname -s)" = "Darwin" ]; then
    SED=gsed
else
    SED=sed
fi
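On OS X, gsed is provided by Homebrew's gnu-sed package. A guard like the following (an addition sketched here, not part of the original script) fails fast when the selected sed binary is missing:

# Abort early if the selected sed binary isn't installed.
command -v "$SED" >/dev/null 2>&1 || {
    echo "error: $SED not found (on OS X: brew install gnu-sed)" >&2
    exit 1
}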
// Decodes a password stored by HeidiSQL: the last hex digit is a per-string
// shift; every remaining hex byte is shifted back by that amount.
function heidiDecode(hex) {
    var str = '';
    var shift = parseInt(hex.substr(-1), 10);   // last character = shift amount
    hex = hex.substr(0, hex.length - 1);        // strip the shift character
    for (var i = 0; i < hex.length; i += 2)
        str += String.fromCharCode(parseInt(hex.substr(i, 2), 16) - shift);
    return str;
}
document.write(heidiDecode('755A5A585C3D8141786B3C385E3A393'));
// Just before switching jobs:
// Add one of these.
// Preferably into the same commit where you do a large merge.
//
// This started as a tweet with a joke of "C++ pro-tip: #define private public",
// and then it quickly escalated into more and more evil suggestions.
// I've tried to capture interesting suggestions here.
//
// Contributors: @r2d2rigo, @joeldevahl, @msinilo, @_Humus_,
// @YuriyODonnell, @rygorous, @cmuratori, @mike_acton, @grumpygiant,
You can use the consumer script to verify that data received by the HTTP listener (http_producer) on port 80 is reaching the Kafka broker node:
sudo /usr/local/share/kafka/bin/kafka-console-consumer.sh --zookeeper [zk-private-ip]:2181 --topic [topic] --from-beginning
If you curl records at the Kafka HTTP listener, you should see them come out at the Kafka broker node:
(On the Kafka listener node)
curl -XPOST localhost:80 -d '{"name":"Bill","position":"Sales"}'
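To push several records through in one go, a quick loop works as well (the payloads below are made up for illustration; the endpoint and port are the ones used above):

for i in 1 2 3 4 5; do
  # POST one JSON record per iteration to the HTTP listener
  curl -XPOST localhost:80 -d "{\"name\":\"user-$i\",\"position\":\"Sales\"}"
done

Each record should then show up in the console consumer on the broker node.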
curl --header 'Authorization: token INSERTACCESSTOKENHERE' \
  --header 'Accept: application/vnd.github.v3.raw' \
  --remote-name \
  --location https://api.github.com/repos/owner/repo/contents/path

# Example...
TOKEN="INSERTACCESSTOKENHERE"
OWNER="BBC-News"
REPO="responsive-news"
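The example block above stops at the variable definitions; wiring them into the same curl call looks like this (FILE_PATH is a hypothetical placeholder added here; substitute a real path from the repo):

FILE_PATH="path/to/file.md"   # hypothetical path, for illustration only

curl --header "Authorization: token ${TOKEN}" \
  --header 'Accept: application/vnd.github.v3.raw' \
  --remote-name \
  --location "https://api.github.com/repos/${OWNER}/${REPO}/contents/${FILE_PATH}"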
Producer
Setup
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test-rep-one --partitions 6 --replication-factor 1
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test --partitions 6 --replication-factor 3
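Before benchmarking, you can confirm both topics came up with the expected partition and replica counts (standard kafka-topics.sh flags):

bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --describe --topic test-rep-one
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --describe --topic test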
Single thread, no replication
bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance test7 50000000 100 -1 acks=1 bootstrap.servers=esv4-hcl198.grid.linkedin.com:9092 buffer.memory=67108864 batch.size=8196
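For the matching read-side numbers, the consumer perf tool shipped with the same distribution can replay the topic (a sketch; flags are those of the 0.8-era tooling used above):

bin/kafka-consumer-perf-test.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --topic test --messages 50000000 --threads 1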
git clone https://gist.github.com/dd6f95398c1bdc9f1038.git vault
cd vault
docker-compose up -d
export VAULT_ADDR=http://192.168.99.100:8200
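With VAULT_ADDR exported, a quick connectivity check (against a brand-new server this reports that the vault is not yet initialized):

vault status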
Initializing a vault:
vault init
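vault init prints a set of unseal keys and an initial root token. The server stays sealed until a quorum of those keys is supplied, so a typical first session continues like this (pre-0.10 CLI syntax, matching the commands above; substitute the keys and token from your own init output):

vault unseal <unseal-key-1>
vault unseal <unseal-key-2>
vault unseal <unseal-key-3>
vault auth <initial-root-token>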
docker run -d --name es elasticsearch
docker run -d --name logstash --link es:elasticsearch -v /tmp/logstash.conf:/config-dir/logstash.conf logstash logstash -f /config-dir/logstash.conf
docker run --link es:elasticsearch -d kibana
LOGSTASH_ADDRESS=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' logstash)
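Assuming the logstash.conf mounted above declares a tcp input on port 5000 (an assumption; the config itself isn't shown in this gist), you can smoke-test the pipeline by sending a line at that address and checking it arrives in Elasticsearch/Kibana:

echo 'hello from the docker host' | nc "$LOGSTASH_ADDRESS" 5000   # port 5000 is assumed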