Notes: Open source logging and monitoring
Logging: Steven Merrill
Open source logging and monitoring, Phase2
http://www.slideshare.net/Phase2Technology/open-source-logging-and-metric-toolsopen-source-logging-and-metrics-tools
Log aggregation
collectd for grabbing system metrics
worker instance
utility instance
Graphite, grafana, bucky (statsd)
siege for generating traffic
Kibana is a frontend for Elasticsearch
The ELK stack
parsed by Logstash, sent to Elasticsearch, visualized in Kibana
just use out-of-the-box tools, like rsyslog
logs = time + data
keeping track of events
logs can be very different
error logs, transaction logs, trace & debug logs
log formats -> add smarts to them
Drupal uses pipe-delimited log lines
but most daemons can produce extra information, like header info
time spent to process the request
Logstash can normalize them
mod_log_config for apache
%D = time taken to serve the request, in microseconds
%{Host}i = value of the Host HTTP header
%p = canonical port of the server
also available for nginx (note to self: pronounce as enginex)
look them up in the Logstash documentation
%{Varnish:hitmiss}x = the text "hit" or "miss" (varnishncsa)
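rough sketch of what such an Apache format could look like (not from the slides; the format name and log path are placeholders):

# combined format plus request duration (%D, microseconds), the Host header and the port
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %D %{Host}i %p" combined_timing
CustomLog /var/log/apache2/access.log combined_timing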
Shipping logs
mostly -> disk
how to collate them
syslog-ng
rsyslogd
logstash (can be used as a log forwarder)
logstash-forwarder (formerly Lumberjack): a standalone Go daemon with TLS and compression
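a minimal logstash-forwarder config is JSON along these lines (a sketch; the server name, cert path, log path and port are placeholders, assuming a matching lumberjack input on the Logstash side):

{
  "network": {
    "servers": [ "utility:5043" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    { "paths": [ "/var/log/apache2/access.log" ], "fields": { "type": "apache-access" } }
  ]
}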
Concerns: queuing
max spool disk usage
retries
security -> encrypted channel, encrypted at rest, access to sensitive data
*.* @@utility:514 (@@ means tcp)
$template to include hostname and date
$ModLoad imfile (to follow flat log files)
rsyslog is pretty awesome
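sender-side sketch of the rsyslog bits above (legacy syntax; file path and tag are placeholders):

# follow a flat log file with imfile and forward everything to the utility box over TCP
$ModLoad imfile
$InputFileName /var/log/drupal/watchdog.log
$InputFileTag drupal:
$InputFileStateFile stat-drupal
$InputRunFileMonitor
*.* @@utility:514

# on the receiving side, a $template can file messages per hostname and date, e.g.
$template PerHostDaily,"/var/log/remote/%HOSTNAME%/%$YEAR%-%$MONTH%-%$DAY%.log"
*.* ?PerHostDaily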
logstash.net/docs/1.4.2
the pipeline has inputs, filters, and outputs
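minimal pipeline sketch (ports and hosts are placeholders; option names as in Logstash 1.4):

input {
  # receive syslog messages over the network
  syslog { port => 5514 }
}
filter {
  # tag every event so it can be filtered on later
  mutate { add_tag => [ "from_syslog" ] }
}
output {
  # index the events into Elasticsearch
  elasticsearch { host => "localhost" }
}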
Elasticsearch -> better REST API
Kibana needs direct HTTP access to Elasticsearch
it's an Angular app ;)
Grok, a feature of Logstash: named regex patterns
grokconstructor.appspot.com
grokdebug.herokuapp.com/patterns
shows you the event that will be sent to Elasticsearch
git.io/e6TvAg
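e.g. a grok filter for an Apache combined access log (sketch, using the stock COMBINEDAPACHELOG pattern):

filter {
  grok {
    # pulls clientip, verb, request, response, bytes etc. out of the message
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}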
output can go elsewhere too, for example to S3
date {} => use the date mentioned in the log itself, not the time it was received by Logstash
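sketch (field name and format are what the Apache grok pattern above produces):

filter {
  date {
    # set @timestamp from the parsed log line instead of the arrival time
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}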
strip the syslog preamble off, then rewrite fields with mutate:
mutate {
  replace => [ "fieldname", "new value, can reference %{otherfield}" ]
}
Graphite -> next-gen alternative to Munin, Cacti, etc.
statsd
Logstash can send counters/metrics to Graphite via statsd:
statsd {
increment => "varnish.response.%{type}"
}
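in context that statsd snippet sits in the output section, roughly (host and metric name are placeholders):

output {
  statsd {
    # send increments to the statsd daemon on the utility box
    host => "utility"
    increment => "varnish.response.%{type}"
  }
}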
Grafana is to Graphite what Kibana is to Elasticsearch
avoid alert fatigue
make sure that alerts have value
umpire/statsd
elks!