@skyrocknroll
Forked from mat/INSTALL
Last active December 27, 2015 18:39
Graphite + statsd + Seyren installation notes
# This needs to be in your server's config somewhere, probably
# the main httpd.conf
NameVirtualHost *:80
# You may need to manually edit this file to fit your needs.
# This configuration assumes the default installation prefix
# of /opt/graphite/, if you installed graphite somewhere else
# you will need to change all the occurrences of /opt/graphite/
# in this file to your chosen install location.
<VirtualHost *:80>
ServerName graphite
Doc

https://github.com/gingerlime/graphite-fabric

A Fabric file to install graphite and statsd.

TODO:

  • edit /opt/statsd/local.js
    • correct the graphite host to localhost
    • if desired, put 'debug: true' in there
  • make the box accessible via the hostname 'graphite'
  • update conf/storage-schemas.conf; see the example for these retention rules:
    • 6 hours of 10-second data
    • 1 week of 1-minute data
    • 5 years of 10-minute data
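The retention rules in the last bullet translate to a storage-schemas.conf entry like this (a sketch; the section name and pattern are examples, using whisper's time-suffix syntax rather than raw point counts):

```ini
# Sketch of the retention ladder from the TODO above
# (section name and pattern are examples, adjust to your metrics)
[stats]
pattern = ^stats\..*
retentions = 10s:6h,1m:7d,10m:5y
```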

Issues faced:

rm /etc/apache2/sites-enabled/000-default

edit /etc/apache2/sites-enabled/graphite.conf

replace WSGISocketPrefix run/wsgi with the following line:

WSGISocketPrefix /var/run/apache2/wsgi
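The replacement can be scripted with sed. A sketch, demonstrated here on a throwaway copy; on the real box point it at /etc/apache2/sites-enabled/graphite.conf instead:

```shell
# Stand-in for /etc/apache2/sites-enabled/graphite.conf (demo only)
conf=$(mktemp)
printf 'WSGISocketPrefix run/wsgi\n' > "$conf"

# Rewrite the socket prefix to a directory apache can actually write to
sed -i 's|^WSGISocketPrefix run/wsgi|WSGISocketPrefix /var/run/apache2/wsgi|' "$conf"
cat "$conf"
```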

Use forever to run statsd (or maybe a better option, pm2?):

npm install forever -g

cd /opt/statsd
forever start stats.js local.js

Data became skewed after 6 hours.

Reason: storage-schemas.conf was configured with this value:

retentions = 10:2160,60:10080,600:262974

10:2160 means 10-second data points are kept for only 2160 points (6 hours). After that, aggregation kicked in and the data was averaged, producing heavily skewed results.
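The 6-hour mark follows directly from the legacy retention syntax, where the second number is a point count, not a duration:

```shell
# 2160 points at 10 seconds per point:
echo $(( 2160 * 10 ))          # total seconds of raw data
echo $(( 2160 * 10 / 3600 ))   # = 6 hours, exactly when the data degraded
```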

statsd/statsd#302

https://graphite.readthedocs.org/en/latest/config-carbon.html
http://graphite.readthedocs.org/en/0.9.x/whisper.html#archives-retention-and-precision

Read the example properly.

To support accurate aggregation from higher- to lower-resolution archives, the precision of a longer-retention archive must be divisible by the precision of the next lower-retention archive.
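A quick sanity check for a retention ladder, using the 10s/60s/600s precisions from the config above: each coarser precision must be an exact multiple of the previous one.

```shell
# Check each adjacent pair of precisions (coarse % fine must be 0)
for pair in "10 60" "60 600"; do
  set -- $pair
  if [ $(( $2 % $1 )) -eq 0 ]; then
    echo "$2 divisible by $1: ok"
  else
    echo "$2 NOT divisible by $1: bad ladder"
  fi
done
```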

https://github.com/etsy/statsd/blob/master/docs/graphite.md#storage-schemas

Also, don't forget to set up storage-aggregation.conf: for counts use sum; for gauges use average.

https://github.com/BrightcoveOS/Diamond

Diamond seems to be a wonderful Python stats-collector daemon. We can also write our own custom collectors.

Data were not stored properly :( counts were not reported correctly.

Problem: found out that priorities in storage schemas are ignored.

The order in which patterns are defined in storage-schemas.conf is what matters: the first pattern that matches determines the storage schema.

Since I put the default rule, which matches everything, at the top, it took priority and the other storage schemas were ignored :(
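Because the first matching pattern wins, the catch-all rule must come last. A sketch of the broken vs. fixed ordering (retentions are examples):

```ini
# WRONG: the catch-all is first, matches everything,
# and the stats-specific rule is never reached
[default]
pattern = .*
retentions = 60s:1d
[stats]
pattern = ^stats\..*
retentions = 10s:6h,1m:7d

# RIGHT: specific rules first, catch-all last
[stats]
pattern = ^stats\..*
retentions = 10s:6h,1m:7d
[default]
pattern = .*
retentions = 60s:1d
```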

#!/bin/bash
version=0.9.10
# install git and graphite dependencies
aptitude install git-core curl python-cairo python-pip python-django memcached python-memcache python-ldap python-twisted apache2 libapache2-mod-python libapache2-mod-wsgi
# download and install everything for graphite
mkdir -pv /opt/graphite/install
cd /opt/graphite/install
for a in graphite-web carbon whisper; do
  wget "http://launchpad.net/graphite/0.9/$version/+download/$a-$version.tar.gz"
  tar xfz $a-$version.tar.gz
  cd $a-$version
  python setup.py install
  cd ..
done
pip install django-tagging
# carbon: copy conf.example to conf
for a in carbon.conf graphTemplates.conf storage-schemas.conf graphite.wsgi; do
  cp -v /opt/graphite/conf/$a.example /opt/graphite/conf/$a
done
cp -v /opt/graphite/webapp/graphite/local_settings.py.example \
/opt/graphite/webapp/graphite/local_settings.py
# apache conf
chown -Rv www-data:www-data /opt/graphite/storage/
cp -v /opt/graphite/install/graphite-web-$version/examples/example-graphite-vhost.conf \
/etc/apache2/sites-available/graphite.conf
ln -sv /etc/apache2/sites-available/graphite.conf \
/etc/apache2/sites-enabled/graphite.conf
/etc/init.d/apache2 restart
# run syncdb to setup the db and prime the authentication model (if you're using the DB model)
cd /opt/graphite/webapp/graphite
python manage.py syncdb
# start the carbon cache
/opt/graphite/bin/carbon-cache.py start
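To verify the carbon cache is accepting data, one can feed it a single point over its plaintext listener (port 2003 by default). A sketch; the metric name is an example, and the actual send is left commented since it assumes netcat is installed on the graphite box:

```shell
# Carbon's plaintext wire format: "<metric path> <value> <unix timestamp>"
metric="test.carbon.check 42 $(date +%s)"
echo "$metric"
# Send it on the graphite box:
# echo "$metric" | nc -q1 localhost 2003
```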
#!/bin/bash
# install nodejs from separate apt repo
apt-get install python-software-properties
add-apt-repository ppa:chris-lea/node.js
apt-get update
apt-get install nodejs
npm install -g forever
# clone and setup statsd
git clone https://github.com/etsy/statsd.git /opt/statsd
cd /opt/statsd
cp -v /opt/statsd/exampleConfig.js /opt/statsd/local.js
# start statsd
forever start stats.js local.js
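To check that statsd is receiving metrics, send it a test counter over UDP (port 8125 by default). A sketch; the bucket name is an example, and the actual send is left commented since it assumes netcat is available:

```shell
# statsd wire format: "<bucket>:<value>|<type>", where "c" = counter
metric="deploy.test:1|c"
echo "$metric"
# Send it on the statsd box (-u = UDP, -w1 = 1s timeout):
# echo "$metric" | nc -u -w1 localhost 8125
```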

Alerting can be implemented using Seyren

First impression: awesome :)

https://github.com/scobal/seyren

Set the required environment variables:

https://github.com/scobal/seyren/blob/master/README.md

install MongoDB
mvn clean package
export GRAPHITE_URL=http://graphite.foohost.com:80
java -jar seyren-web/target/seyren-web-*-war-exec.jar
open http://localhost:8080
export GRAPHITE_URL=http://10.30.0.181:80
export MONGO_URL=mongodb://localhost:27017/seyren
export SMTP_HOST=10.30.0.89
export [email protected]
export SEYREN_LOG_PATH=/opt/seyren/logs/
export SEYREN_URL=http://10.30.0.163:8080/
export GRAPHITE_REFRESH=10000

Multiple alerting mechanisms are available.

Tested email and HTTP; both work great.

# Aggregation methods for whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds
#
# [name]
# pattern = <regex>
# xFilesFactor = <float between 0 and 1>
# aggregationMethod = <average|sum|last|max|min>
#
# name: Arbitrary unique name for the rule
# pattern: Regex pattern to match against the metric name
# xFilesFactor: Ratio of valid data points required for aggregation to the next retention to occur
# aggregationMethod: function to apply to data points for aggregation
#
[min]
pattern = \.min$
xFilesFactor = 0.1
aggregationMethod = min

[max]
pattern = \.max$
xFilesFactor = 0.1
aggregationMethod = max

[sum]
pattern = \.count$
xFilesFactor = 0
aggregationMethod = sum

[iconsole_sum]
pattern = ^stats_counts
xFilesFactor = 0
aggregationMethod = sum

[default_average]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average
# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
#
# [name]
# pattern = regex
# retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...
# (The stock example file also has a [carbon] entry for Carbon's internal
# metrics, matching the CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL
# settings; it is omitted here.)
[stats]
pattern = ^stats\..*
#retentions = 10:2160,60:10080,600:262974
retentions = 10s:30d,30s:90d,60s:180d
[default_1min_for_1day]
pattern = .*
#retentions = 10:2160,60:10080,600:262974
retentions = 10s:30d,30s:90d,60s:180d