New Relic + CloudWatch + Docker all-in-one monitor for Dashing
This Dashing plugin integrates New Relic and AWS CloudWatch on a single page.
It covers Docker monitoring (via CloudWatch custom metrics) as well as instance and application monitoring (via New Relic),
and works both for Docker on plain EC2 and for Docker under Elastic Beanstalk.

This plugin includes some third-party components:

aws cloudwatch custom metrics perl script
http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/mon-scripts-perl.html

dashing aws cloudwatch plugin
https://gist.github.com/s0enke/68b3288bd1cbec3336ad

dashing newrelic plugin
https://gist.github.com/assimovt/5181244

dashing rickshawgraph widget
https://gist.github.com/jwalton/6614023
file structure:

dashboards/
  newrelic_aws.rb
jobs/
  docker.rb
  ec2.rb
  newrelic_rpm.rb
lib/
  dashing_docker.rb
  dashing_ec2.rb
widgets/
  rickshawgraph/
    rickshawgraph.coffee
    rickshawgraph.html
    rickshawgraph.scss
Gemfile
mon-put-instance-data.pl
# lib/dashing_docker.rb
require 'aws-sdk'
require 'time'

class DashingDOCKER
  def initialize(options)
    @access_key_id = options[:access_key_id]
    @secret_access_key = options[:secret_access_key]
    @clientCache = {}
  end

  # Get statistics for an instance.
  #
  # * `instance_id` is the instance to get data about.
  # * `region` is the name of the region the instance is from (e.g. 'us-east-1'.) See
  #   [monitoring URIs](http://docs.aws.amazon.com/general/latest/gr/rande.html#cw_region).
  # * `metric_name` is the metric to get. See
  #   [the list of built-in metrics](http://docs.aws.amazon.com/AWSEC2/2011-07-15/UserGuide/index.html?using-cloudwatch.html).
  # * `type` is `:average` or `:maximum`.
  # * `options` are [:start_time, :end_time, :period, :dimensions] as per
  #   `get_metric_statistics()`, although all are optional. Also:
  #   * `:duration` - If supplied, and no start_time or end_time are supplied, then start_time
  #     and end_time will be computed based on this value in seconds. Defaults to 6 hours.
  def getInstanceStats(instance_id, region, metric_name, type=:average, options={})
    if type == :average
      statName = "Average"
    elsif type == :maximum
      statName = "Maximum"
    else
      raise ArgumentError, "type must be :average or :maximum, got #{type.inspect}"
    end
    statKey = type

    # Get an API client instance, creating and caching one per region.
    cw = @clientCache[region]
    if not cw
      cw = @clientCache[region] = AWS::CloudWatch::Client.new({
        server: "https://monitoring.#{region}.amazonaws.com",
        access_key_id: @access_key_id,
        secret_access_key: @secret_access_key,
        region: region
      })
    end

    # Build a default set of options to pass to get_metric_statistics
    duration = (options[:duration] or (60*60*6)) # Six hours
    start_time = (options[:start_time] or (Time.now - duration))
    end_time = (options[:end_time] or (Time.now))
    get_metric_statistics_options = {
      namespace: "dockerWatchTest-env",
      metric_name: metric_name,
      statistics: [statName],
      start_time: start_time.utc.iso8601,
      end_time: end_time.utc.iso8601,
      period: (options[:period] or (60 * 5)), # Default to 5 min stats
      dimensions: (options[:dimensions] or [{name: "InstanceId", value: instance_id}])
    }

    # Go get stats
    result = cw.get_metric_statistics(get_metric_statistics_options)
    if ((not result[:datapoints]) or (result[:datapoints].length == 0))
      # TODO: What kind of errors can I get back?
      puts "\e[33mWarning: Got back no data for instanceId: #{region}:#{instance_id} for metric #{metric_name}\e[0m"
      answer = nil
    else
      # Turn the result into a Rickshaw-style series
      data = []
      result[:datapoints].each do |datapoint|
        point = {
          x: (datapoint[:timestamp].to_i), # time in seconds since epoch
          y: datapoint[statKey]
        }
        data.push point
      end
      data.sort! { |a,b| a[:x] <=> b[:x] }
      answer = {
        name: "#{metric_name} for #{instance_id}",
        data: data
      }
    end

    return answer
  end
end
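For reference, here is a minimal usage sketch of the class above (not part of the original gist; the credentials, instance id, and region are placeholders):

# A minimal usage sketch (placeholder credentials and instance id):
docker_stats = DashingDOCKER.new({
  :access_key_id     => "YOUR_KEY_ID",
  :secret_access_key => "YOUR_SECRET",
})

# Fetch the last hour of DockerMemoryUsage in 1-minute buckets.
series = docker_stats.getInstanceStats("i-00000000", "ap-southeast-2",
                                       "DockerMemoryUsage", :average,
                                       duration: 60 * 60, period: 60)

# `series` is nil when no datapoints came back; otherwise it is a Rickshaw-style
# hash: { name: "DockerMemoryUsage for i-00000000", data: [{x: epoch_seconds, y: value}, ...] }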
# lib/dashing_ec2.rb
require 'aws-sdk'
require 'time'

class DashingEC2
  def initialize(options)
    @access_key_id = options[:access_key_id]
    @secret_access_key = options[:secret_access_key]
    @clientCache = {}
  end

  # Get statistics for an instance.
  #
  # * `instance_id` is the instance to get data about.
  # * `region` is the name of the region the instance is from (e.g. 'us-east-1'.) See
  #   [monitoring URIs](http://docs.aws.amazon.com/general/latest/gr/rande.html#cw_region).
  # * `metric_name` is the metric to get. See
  #   [the list of built-in metrics](http://docs.aws.amazon.com/AWSEC2/2011-07-15/UserGuide/index.html?using-cloudwatch.html).
  # * `type` is `:average` or `:maximum`.
  # * `options` are [:start_time, :end_time, :period, :dimensions] as per
  #   `get_metric_statistics()`, although all are optional. Also:
  #   * `:duration` - If supplied, and no start_time or end_time are supplied, then start_time
  #     and end_time will be computed based on this value in seconds. Defaults to 6 hours.
  def getInstanceStats(instance_id, region, metric_name, type=:average, options={})
    if type == :average
      statName = "Average"
    elsif type == :maximum
      statName = "Maximum"
    else
      raise ArgumentError, "type must be :average or :maximum, got #{type.inspect}"
    end
    statKey = type

    # Get an API client instance, creating and caching one per region.
    cw = @clientCache[region]
    if not cw
      cw = @clientCache[region] = AWS::CloudWatch::Client.new({
        server: "https://monitoring.#{region}.amazonaws.com",
        access_key_id: @access_key_id,
        secret_access_key: @secret_access_key,
        region: region
      })
    end

    # Build a default set of options to pass to get_metric_statistics
    duration = (options[:duration] or (60*60*6)) # Six hours
    start_time = (options[:start_time] or (Time.now - duration))
    end_time = (options[:end_time] or (Time.now))
    get_metric_statistics_options = {
      namespace: "AWS/EC2",
      metric_name: metric_name,
      statistics: [statName],
      start_time: start_time.utc.iso8601,
      end_time: end_time.utc.iso8601,
      period: (options[:period] or (60 * 5)), # Default to 5 min stats
      dimensions: (options[:dimensions] or [{name: "InstanceId", value: instance_id}])
    }

    # Go get stats
    result = cw.get_metric_statistics(get_metric_statistics_options)
    if ((not result[:datapoints]) or (result[:datapoints].length == 0))
      # TODO: What kind of errors can I get back?
      puts "\e[33mWarning: Got back no data for instanceId: #{region}:#{instance_id} for metric #{metric_name}\e[0m"
      answer = nil
    else
      # Turn the result into a Rickshaw-style series
      data = []
      result[:datapoints].each do |datapoint|
        point = {
          x: (datapoint[:timestamp].to_i), # time in seconds since epoch
          y: datapoint[statKey]
        }
        data.push point
      end
      data.sort! { |a,b| a[:x] <=> b[:x] }
      answer = {
        name: "#{metric_name} for #{instance_id}",
        data: data
      }
    end

    return answer
  end
end
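DashingDOCKER and DashingEC2 are identical except for the CloudWatch namespace they query. If you would rather not maintain two copies, one option (a sketch, not part of the original gist) is a single class parameterized on the namespace:

# lib/dashing_cloudwatch.rb -- hypothetical consolidation of the two classes above.
# DashingEC2.new(opts)    would become DashingCloudWatch.new("AWS/EC2", opts), and
# DashingDOCKER.new(opts) would become DashingCloudWatch.new("dockerWatchTest-env", opts).
class DashingCloudWatch
  def initialize(namespace, options)
    @namespace = namespace
    @access_key_id = options[:access_key_id]
    @secret_access_key = options[:secret_access_key]
    @clientCache = {}
  end

  # getInstanceStats is the same method body shown above, with
  # `namespace: @namespace` in get_metric_statistics_options.
end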
# jobs/docker.rb
require './lib/dashing_docker'

dashing_docker = DashingDOCKER.new({
  :access_key_id => "",
  :secret_access_key => "",
})

# See documentation for the CloudWatch API here: https://github.com/aws/aws-sdk-ruby/blob/af638994bb7d01a8fd0f8a6d6357567968638100/lib/aws/cloud_watch/client.rb
# See documentation on various metrics and dimensions here: http://docs.aws.amazon.com/AWSEC2/2011-07-15/UserGuide/index.html?using-cloudwatch.html
# Note that Amazon charges [$0.01 per 1000 requests](http://aws.amazon.com/pricing/cloudwatch/),
# so:
#
# | frequency | $/month/stat |
# |:---------:|:------------:|
# | 1m        | $0.432       |
# | 10m       | $0.043       |
# | 1h        | $0.007       |
#
# In the free tier, stats are only available at 5m intervals, so querying more often than
# once every 5 minutes is kind of pointless. You've been warned. :)
#
SCHEDULER.every '1m', :first_in => 0 do |job|
  usage = [
    {name: 'dockerWatchTest-env', instance_id: "i-bf085381", region: 'ap-southeast-2'}
  ]

  mem_series = []
  usage.each do |item|
    mem_data = dashing_docker.getInstanceStats(item[:instance_id], item[:region], "DockerMemoryUsage", :average)
    next unless mem_data # getInstanceStats returns nil when no datapoints came back
    mem_data[:name] = item[:name]
    mem_series.push mem_data
  end

  usercpu_series = []
  usage.each do |item|
    usercpu_data = dashing_docker.getInstanceStats(item[:instance_id], item[:region], "DockerCpuUser", :average)
    next unless usercpu_data
    usercpu_data[:name] = item[:name]
    usercpu_series.push usercpu_data
  end

  syscpu_series = []
  usage.each do |item|
    syscpu_data = dashing_docker.getInstanceStats(item[:instance_id], item[:region], "DockerCpuSystem", :average)
    next unless syscpu_data
    syscpu_data[:name] = item[:name]
    syscpu_series.push syscpu_data
  end

  # If you're using the Rickshaw Graph widget: https://gist.github.com/jwalton/6614023
  send_event "docker-mem", { series: mem_series }
  send_event "docker-usercpu", { series: usercpu_series }
  send_event "docker-syscpu", { series: syscpu_series }
  # If you're just using the regular Dashing graph widget:
  #send_event "docker-mem", { points: mem_series[0][:data] }
end # SCHEDULER
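For reference, the payload each send_event above ships to the Rickshawgraph widget looks like this (a sketch with illustrative values, assuming one monitored instance; the event name must match the widget's data-id in the dashboard):

# Shape of the "docker-mem" event payload consumed by the Rickshawgraph widget:
payload = {
  series: [
    {
      name: "dockerWatchTest-env",       # legend label (set from item[:name] above)
      data: [                            # one point per CloudWatch datapoint
        {x: 1438560000, y: 104857600},   # x: epoch seconds, y: metric value
        {x: 1438560300, y: 105906176}
      ]
    }
  ]
}
send_event "docker-mem", payload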
# jobs/ec2.rb
require './lib/dashing_ec2'

dashing_ec2 = DashingEC2.new({
  :access_key_id => "",
  :secret_access_key => "",
})

# See documentation for the CloudWatch API here: https://github.com/aws/aws-sdk-ruby/blob/af638994bb7d01a8fd0f8a6d6357567968638100/lib/aws/cloud_watch/client.rb
# See documentation on various metrics and dimensions here: http://docs.aws.amazon.com/AWSEC2/2011-07-15/UserGuide/index.html?using-cloudwatch.html
# Note that Amazon charges [$0.01 per 1000 requests](http://aws.amazon.com/pricing/cloudwatch/),
# so:
#
# | frequency | $/month/stat |
# |:---------:|:------------:|
# | 1m        | $0.432       |
# | 10m       | $0.043       |
# | 1h        | $0.007       |
#
# In the free tier, stats are only available at 5m intervals, so querying more often than
# once every 5 minutes is kind of pointless. You've been warned. :)
#
SCHEDULER.every '1m', :first_in => 0 do |job|
  usage = [
    {name: '172.21.71.140', instance_id: "i-f9f9ebc6", region: 'ap-southeast-2'},
    {name: '172.21.71.141', instance_id: "i-2dfae812", region: 'ap-southeast-2'},
    {name: '172.21.71.10',  instance_id: "i-f68909c9", region: 'ap-southeast-2'},
    {name: '172.21.71.12',  instance_id: "i-238c0c1c", region: 'ap-southeast-2'},
    {name: '172.21.71.11',  instance_id: "i-f58909ca", region: 'ap-southeast-2'},
    {name: '172.21.71.142', instance_id: "i-5afeec65", region: 'ap-southeast-2'}
  ]

  cpu_series = []
  usage.each do |item|
    cpu_data = dashing_ec2.getInstanceStats(item[:instance_id], item[:region], "CPUUtilization", :average)
    next unless cpu_data # getInstanceStats returns nil when no datapoints came back
    cpu_data[:name] = item[:name]
    cpu_series.push cpu_data
  end

  netin_series = []
  usage.each do |item|
    netin_data = dashing_ec2.getInstanceStats(item[:instance_id], item[:region], "NetworkIn", :average)
    next unless netin_data
    netin_data[:name] = item[:name]
    netin_series.push netin_data
  end

  netout_series = []
  usage.each do |item|
    netout_data = dashing_ec2.getInstanceStats(item[:instance_id], item[:region], "NetworkOut", :average)
    next unless netout_data
    netout_data[:name] = item[:name]
    netout_series.push netout_data
  end

  diskrio_series = []
  usage.each do |item|
    diskrio_data = dashing_ec2.getInstanceStats(item[:instance_id], item[:region], "DiskReadOps", :average)
    next unless diskrio_data
    diskrio_data[:name] = item[:name]
    diskrio_series.push diskrio_data
  end

  diskwio_series = []
  usage.each do |item|
    diskwio_data = dashing_ec2.getInstanceStats(item[:instance_id], item[:region], "DiskWriteOps", :average)
    next unless diskwio_data
    diskwio_data[:name] = item[:name]
    diskwio_series.push diskwio_data
  end

  diskrb_series = []
  usage.each do |item|
    diskrb_data = dashing_ec2.getInstanceStats(item[:instance_id], item[:region], "DiskReadBytes", :average)
    next unless diskrb_data
    diskrb_data[:name] = item[:name]
    diskrb_series.push diskrb_data
  end

  diskwb_series = []
  usage.each do |item|
    diskwb_data = dashing_ec2.getInstanceStats(item[:instance_id], item[:region], "DiskWriteBytes", :average)
    next unless diskwb_data
    diskwb_data[:name] = item[:name]
    diskwb_series.push diskwb_data
  end

  # If you're using the Rickshaw Graph widget: https://gist.github.com/jwalton/6614023
  send_event "ec2-cpu", { series: cpu_series }
  send_event "ec2-netin", { series: netin_series }
  send_event "ec2-netout", { series: netout_series }
  send_event "ec2-diskrio", { series: diskrio_series }
  send_event "ec2-diskwio", { series: diskwio_series }
  send_event "ec2-diskrb", { series: diskrb_series }
  send_event "ec2-diskwb", { series: diskwb_series }
  # If you're just using the regular Dashing graph widget:
  #send_event "ec2-cpu-server1", { points: cpu_series[0][:data] }
  #send_event "ec2-cpu-server2", { points: cpu_series[1][:data] }
  #send_event "ec2-cpu-server3", { points: cpu_series[2][:data] }
  #send_event "ec2-cpu-server4", { points: cpu_series[3][:data] }
  #send_event "ec2-cpu-server5", { points: cpu_series[4][:data] }
  #send_event "ec2-cpu-server6", { points: cpu_series[5][:data] }
end # SCHEDULER
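The seven near-identical loops above differ only in metric name and event name, so the job body could also be written as one table-driven loop (a sketch under the same assumptions, not part of the original gist):

# Hypothetical table-driven equivalent of the loops above, for use inside the
# same SCHEDULER block (after `usage` is defined):
metrics = {
  "ec2-cpu"     => "CPUUtilization",
  "ec2-netin"   => "NetworkIn",
  "ec2-netout"  => "NetworkOut",
  "ec2-diskrio" => "DiskReadOps",
  "ec2-diskwio" => "DiskWriteOps",
  "ec2-diskrb"  => "DiskReadBytes",
  "ec2-diskwb"  => "DiskWriteBytes",
}

metrics.each do |event_name, metric_name|
  series = usage.map do |item|
    data = dashing_ec2.getInstanceStats(item[:instance_id], item[:region], metric_name, :average)
    data && data.merge(name: item[:name])  # drop instances with no datapoints
  end.compact
  send_event event_name, { series: series }
end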
# Gemfile
source 'https://rubygems.org'

gem 'dashing'

## Remove this if you don't need a twitter widget.
gem 'twitter', '>= 5.9.0'

gem 'activeresource'
gem 'newrelic_api'
gem 'aws-sdk'
#!/usr/bin/perl -w

# mon-put-instance-data.pl
# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You may not
# use this file except in compliance with the License. A copy of the License
# is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "LICENSE" file accompanying this file. This file is distributed
# on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
# express or implied. See the License for the specific language governing
# permissions and limitations under the License.

our $usage = <<USAGE;

Usage: mon-put-instance-data.pl [options]

Collects memory, swap, and disk space utilization on an Amazon EC2
instance and sends this data as custom metrics to Amazon CloudWatch.

Description of available options:

  --cpu-docker        Reports CPU time used by the Docker container.
  --mem-docker        Reports memory used by the Docker container.
  --mem-util          Reports memory utilization in percentages.
  --mem-used          Reports memory used in megabytes.
  --mem-avail         Reports available memory in megabytes.
  --swap-util         Reports swap utilization in percentages.
  --swap-used         Reports allocated swap space in megabytes.
  --disk-path=PATH    Selects the disk by the path on which to report.
  --disk-space-util   Reports disk space utilization in percentages.
  --disk-space-used   Reports allocated disk space in gigabytes.
  --disk-space-avail  Reports available disk space in gigabytes.

  --aggregated[=only]    Adds aggregated metrics for instance type, AMI id, and overall.
  --auto-scaling[=only]  Adds aggregated metrics for Auto Scaling group.
                         If =only is specified, reports only aggregated metrics.

  --mem-used-incl-cache-buff  Count memory that is cached and in buffers as used.
  --memory-units=UNITS        Specifies units for memory metrics.
  --disk-space-units=UNITS    Specifies units for disk space metrics.
                              Supported UNITS are bytes, kilobytes, megabytes, and gigabytes.

  --aws-credential-file=PATH  Specifies the location of the file with AWS credentials.
  --aws-access-key-id=VALUE   Specifies the AWS access key ID to use to identify the caller.
  --aws-secret-key=VALUE      Specifies the AWS secret key to use to sign the request.
  --aws-iam-role=VALUE        Specifies the IAM role name to provide AWS credentials.

  --from-cron  Specifies that this script is running from cron.
  --verify     Checks configuration and prepares a remote call.
  --verbose    Displays details of what the script is doing.
  --version    Displays the version number.
  --help       Displays detailed usage information.

Examples

 To perform a simple test run without posting data to Amazon CloudWatch:

    ./mon-put-instance-data.pl --mem-util --verify --verbose

 To set a five-minute cron schedule to report memory and disk space utilization to CloudWatch:

    */5 * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --disk-space-util --disk-path=/ --from-cron

For more information on how to use this utility, see the Amazon CloudWatch Developer Guide at
http://docs.amazonwebservices.com/AmazonCloudWatch/latest/DeveloperGuide/mon-scripts-perl.html

USAGE
use strict;
use warnings;
use Switch;
use Getopt::Long;
use File::Basename;
use Sys::Hostname;
use Sys::Syslog qw(:DEFAULT setlogsock);
use Sys::Syslog qw(:standard :macros);

BEGIN
{
  my $script_dir = &File::Basename::dirname($0);
  push @INC, $script_dir;
}

use CloudWatchClient;

use constant
{
  KILO => 1024,
  MEGA => 1048576,
  GIGA => 1073741824,
};

my $version = '1.1.0';
my $client_name = 'CloudWatch-PutInstanceData';

my $mcount = 0;
my $report_cpu_docker;
my $report_mem_docker;
my $report_mem_util;
my $report_mem_used;
my $report_mem_avail;
my $report_swap_util;
my $report_swap_used;
my $report_disk_util;
my $report_disk_used;
my $report_disk_avail;
my $mem_used_incl_cache_buff;
my @mount_path;
my $mem_units;
my $disk_units;
my $mem_unit_div = 1;
my $disk_unit_div = 1;
my $aggregated;
my $auto_scaling;
my $from_cron;
my $verify;
my $verbose;
my $show_help;
my $show_version;
my $enable_compression;
my $aws_credential_file;
my $aws_access_key_id;
my $aws_secret_key;
my $aws_iam_role;
my $parse_result = 1;
my $parse_error = '';
my $argv_size = @ARGV;

{
  # Capture warnings from GetOptions
  local $SIG{__WARN__} = sub { $parse_error .= $_[0]; };

  $parse_result = GetOptions(
    'help|?' => \$show_help,
    'version' => \$show_version,
    'cpu-docker' => \$report_cpu_docker,
    'mem-docker' => \$report_mem_docker,
    'mem-util' => \$report_mem_util,
    'mem-used' => \$report_mem_used,
    'mem-avail' => \$report_mem_avail,
    'swap-util' => \$report_swap_util,
    'swap-used' => \$report_swap_used,
    'disk-path:s' => \@mount_path,
    'disk-space-util' => \$report_disk_util,
    'disk-space-used' => \$report_disk_used,
    'disk-space-avail' => \$report_disk_avail,
    'auto-scaling:s' => \$auto_scaling,
    'aggregated:s' => \$aggregated,
    'memory-units:s' => \$mem_units,
    'disk-space-units:s' => \$disk_units,
    'mem-used-incl-cache-buff' => \$mem_used_incl_cache_buff,
    'verify' => \$verify,
    'from-cron' => \$from_cron,
    'verbose' => \$verbose,
    'aws-credential-file:s' => \$aws_credential_file,
    'aws-access-key-id:s' => \$aws_access_key_id,
    'aws-secret-key:s' => \$aws_secret_key,
    'enable-compression' => \$enable_compression,
    'aws-iam-role:s' => \$aws_iam_role
    );
}

if (!$parse_result) {
  exit_with_error($parse_error);
}
if ($show_version) {
  print "\n$client_name version $version\n\n";
  exit 0;
}
if ($show_help || $argv_size < 1) {
  print $usage;
  exit 0;
}
if ($from_cron) {
  $verbose = 0;
}

# check for empty values in provided arguments
if (defined($aws_credential_file) && length($aws_credential_file) == 0) {
  exit_with_error("Path to AWS credential file is not provided.");
}
if (defined($aws_access_key_id) && length($aws_access_key_id) == 0) {
  exit_with_error("Value of AWS access key id is not specified.");
}
if (defined($aws_secret_key) && length($aws_secret_key) == 0) {
  exit_with_error("Value of AWS secret key is not specified.");
}
if (defined($mem_units) && length($mem_units) == 0) {
  exit_with_error("Value of memory units is not specified.");
}
if (defined($disk_units) && length($disk_units) == 0) {
  exit_with_error("Value of disk space units is not specified.");
}
if (defined($aws_iam_role) && length($aws_iam_role) == 0) {
  exit_with_error("Value of AWS IAM role is not specified.");
}

# check for inconsistency of provided arguments
if (defined($aws_credential_file) && defined($aws_access_key_id)) {
  exit_with_error("Do not provide AWS credential file and AWS access key id options together.");
}
elsif (defined($aws_credential_file) && defined($aws_secret_key)) {
  exit_with_error("Do not provide AWS credential file and AWS secret key options together.");
}
elsif (defined($aws_access_key_id) && !defined($aws_secret_key)) {
  exit_with_error("AWS secret key is not specified.");
}
elsif (!defined($aws_access_key_id) && defined($aws_secret_key)) {
  exit_with_error("AWS access key id is not specified.");
}
elsif (defined($aws_iam_role) && defined($aws_credential_file)) {
  exit_with_error("Do not provide AWS IAM role and AWS credential file options together.");
}
elsif (defined($aws_iam_role) && defined($aws_secret_key)) {
  exit_with_error("Do not provide AWS IAM role and AWS access key id/secret key options together.");
}

# decide on the reporting units for memory and swap usage
if (!defined($mem_units) || lc($mem_units) eq 'megabytes') {
  $mem_units = 'Megabytes';
  $mem_unit_div = MEGA;
}
elsif (lc($mem_units) eq 'bytes') {
  $mem_units = 'Bytes';
  $mem_unit_div = 1;
}
elsif (lc($mem_units) eq 'kilobytes') {
  $mem_units = 'Kilobytes';
  $mem_unit_div = KILO;
}
elsif (lc($mem_units) eq 'gigabytes') {
  $mem_units = 'Gigabytes';
  $mem_unit_div = GIGA;
}
else {
  exit_with_error("Unsupported memory units '$mem_units'. Use Bytes, Kilobytes, Megabytes, or Gigabytes.");
}

# decide on the reporting units for disk space usage
if (!defined($disk_units) || lc($disk_units) eq 'gigabytes') {
  $disk_units = 'Gigabytes';
  $disk_unit_div = GIGA;
}
elsif (lc($disk_units) eq 'bytes') {
  $disk_units = 'Bytes';
  $disk_unit_div = 1;
}
elsif (lc($disk_units) eq 'kilobytes') {
  $disk_units = 'Kilobytes';
  $disk_unit_div = KILO;
}
elsif (lc($disk_units) eq 'megabytes') {
  $disk_units = 'Megabytes';
  $disk_unit_div = MEGA;
}
else {
  exit_with_error("Unsupported disk space units '$disk_units'. Use Bytes, Kilobytes, Megabytes, or Gigabytes.");
}

my $df_path = '';
my $report_disk_space;
foreach my $path (@mount_path) {
  if (length($path) == 0) {
    exit_with_error("Value of disk path is not specified.");
  }
  elsif (-e $path) {
    $report_disk_space = 1;
    $df_path .= ' '.$path;
  }
  else {
    exit_with_error("Disk file path '$path' does not exist or cannot be accessed.");
  }
}

if ($report_disk_space && !$report_disk_util && !$report_disk_used && !$report_disk_avail) {
  exit_with_error("Disk path is provided but metrics to report disk space are not specified.");
}
if (!$report_disk_space && ($report_disk_util || $report_disk_used || $report_disk_avail)) {
  exit_with_error("Metrics to report disk space are provided but disk path is not specified.");
}

# check that there is a need to monitor at least something
if (!$report_mem_util && !$report_mem_used && !$report_mem_avail && !$report_mem_docker && !$report_cpu_docker
  && !$report_swap_util && !$report_swap_used && !$report_disk_space)
{
  exit_with_error("No metrics specified for collection and submission to CloudWatch.");
}
my $now = time();
my $timestamp = CloudWatchClient::get_timestamp($now);
my $instance_id = CloudWatchClient::get_instance_id();

if (!defined($instance_id) || length($instance_id) == 0) {
  exit_with_error("Cannot obtain instance id from EC2 meta-data.");
}

if ($aggregated && lc($aggregated) ne 'only') {
  exit_with_error("Unrecognized value '$aggregated' for --aggregated option.");
}
if ($aggregated && lc($aggregated) eq 'only') {
  $aggregated = 2;
}
elsif (defined($aggregated)) {
  $aggregated = 1;
}

my $image_id;
my $instance_type;
if ($aggregated) {
  $image_id = CloudWatchClient::get_image_id();
  $instance_type = CloudWatchClient::get_instance_type();
}

if ($auto_scaling && lc($auto_scaling) ne 'only') {
  exit_with_error("Unrecognized value '$auto_scaling' for --auto-scaling option.");
}
if ($auto_scaling && lc($auto_scaling) eq 'only') {
  $auto_scaling = 2;
}
elsif (defined($auto_scaling)) {
  $auto_scaling = 1;
}

my $as_group_name;
if ($auto_scaling)
{
  my %opts = ();
  $opts{'aws-credential-file'} = $aws_credential_file;
  $opts{'aws-access-key-id'} = $aws_access_key_id;
  $opts{'aws-secret-key'} = $aws_secret_key;
  $opts{'verbose'} = $verbose;
  $opts{'verify'} = $verify;
  $opts{'user-agent'} = "$client_name/$version";
  $opts{'aws-iam-role'} = $aws_iam_role;

  my ($code, $reply) = CloudWatchClient::get_auto_scaling_group(\%opts);

  if ($code == 200) {
    $as_group_name = $reply;
  }
  else {
    report_message(LOG_WARNING, "Failed to call EC2 to obtain Auto Scaling group name. ".
      "HTTP Status Code: $code. Error Message: $reply");
  }

  if (!$as_group_name)
  {
    if (!$verify)
    {
      report_message(LOG_WARNING, "The Auto Scaling metrics will not be reported this time.");
      if ($auto_scaling == 2) {
        print("\n") if (!$from_cron);
        exit 0;
      }
    }
    else {
      $as_group_name = 'VerificationOnly';
    }
  }
}

my %params = ();
$params{'Action'} = 'PutMetricData';
$params{'Namespace'} = 'dockerWatchTest-env';

# avoid a storm of calls at the beginning of a minute
if ($from_cron) {
  sleep(rand(20));
}
# collect docker cpu metrics
if ($report_cpu_docker)
{
  # On Elastic Beanstalk, the id of the currently deployed container is kept in this file.
  my $docker = `/bin/cat /etc/elasticbeanstalk/.aws_beanstalk.current-container-id`;
  chomp($docker);

  # cpuacct.stat holds cumulative user/system CPU time for the container's cgroup.
  my %docker_cpuinfo;
  my $prefix = "/cgroup/cpuacct/docker/";
  foreach my $cpu (split('\n', `/bin/cat $prefix$docker"/cpuacct.stat"`)) {
    if($cpu =~ /^(.*?)\s+(\d+)/) {
      $docker_cpuinfo{$1} = $2;
    }
  }
  my $user_cpu = $docker_cpuinfo{'user'};
  my $system_cpu = $docker_cpuinfo{'system'};
  add_metric('DockerCpuUser', 'Milliseconds', $user_cpu);
  add_metric('DockerCpuSystem', 'Milliseconds', $system_cpu);
}

# collect docker memory metrics
if ($report_mem_docker)
{
  my $docker = `/bin/cat /etc/elasticbeanstalk/.aws_beanstalk.current-container-id`;
  chomp($docker);

  # memory.usage_in_bytes holds the container cgroup's current memory usage.
  my $prefix = "/cgroup/memory/docker/";
  my $mem_usage = `/bin/cat $prefix$docker"/memory.usage_in_bytes"`;
  chomp($mem_usage);
  add_metric('DockerMemoryUsage', $mem_units, $mem_usage / $mem_unit_div);
}
# collect memory and swap metrics
if ($report_mem_util || $report_mem_used || $report_mem_avail || $report_swap_util || $report_swap_used)
{
  my %meminfo;
  foreach my $line (split('\n', `/bin/cat /proc/meminfo`)) {
    if($line =~ /^(.*?):\s+(\d+)/) {
      $meminfo{$1} = $2;
    }
  }

  # meminfo values are in kilobytes
  my $mem_total = $meminfo{'MemTotal'} * KILO;
  my $mem_free = $meminfo{'MemFree'} * KILO;
  my $mem_cached = $meminfo{'Cached'} * KILO;
  my $mem_buffers = $meminfo{'Buffers'} * KILO;
  my $mem_avail = $mem_free;
  if (!defined($mem_used_incl_cache_buff)) {
    $mem_avail += $mem_cached + $mem_buffers;
  }
  my $mem_used = $mem_total - $mem_avail;
  my $swap_total = $meminfo{'SwapTotal'} * KILO;
  my $swap_free = $meminfo{'SwapFree'} * KILO;
  my $swap_used = $swap_total - $swap_free;

  if ($report_mem_util) {
    my $mem_util = 0;
    $mem_util = 100 * $mem_used / $mem_total if ($mem_total > 0);
    add_metric('MemoryUtilization', 'Percent', $mem_util);
  }
  if ($report_mem_used) {
    add_metric('MemoryUsed', $mem_units, $mem_used / $mem_unit_div);
  }
  if ($report_mem_avail) {
    add_metric('MemoryAvailable', $mem_units, $mem_avail / $mem_unit_div);
  }

  if ($report_swap_util) {
    my $swap_util = 0;
    $swap_util = 100 * $swap_used / $swap_total if ($swap_total > 0);
    add_metric('SwapUtilization', 'Percent', $swap_util);
  }
  if ($report_swap_used) {
    add_metric('SwapUsed', $mem_units, $swap_used / $mem_unit_div);
  }
}

# collect disk space metrics
if ($report_disk_space)
{
  my @df = `/bin/df -k -l -P $df_path`;
  shift @df;

  foreach my $line (@df)
  {
    my @fields = split('\s+', $line);
    # Result of df is reported in 1k blocks
    my $disk_total = $fields[1] * KILO;
    my $disk_used = $fields[2] * KILO;
    my $disk_avail = $fields[3] * KILO;
    my $fsystem = $fields[0];
    my $mount = $fields[5];

    if ($report_disk_util) {
      my $disk_util = 0;
      $disk_util = 100 * $disk_used / $disk_total if ($disk_total > 0);
      add_metric('DiskSpaceUtilization', 'Percent', $disk_util, $fsystem, $mount);
    }
    if ($report_disk_used) {
      add_metric('DiskSpaceUsed', $disk_units, $disk_used / $disk_unit_div, $fsystem, $mount);
    }
    if ($report_disk_avail) {
      add_metric('DiskSpaceAvailable', $disk_units, $disk_avail / $disk_unit_div, $fsystem, $mount);
    }
  }
}

# send metrics over to CloudWatch if any
if ($mcount > 0)
{
  my %opts = ();
  $opts{'aws-credential-file'} = $aws_credential_file;
  $opts{'aws-access-key-id'} = $aws_access_key_id;
  $opts{'aws-secret-key'} = $aws_secret_key;
  $opts{'short-response'} = 1;
  $opts{'retries'} = 2;
  $opts{'verbose'} = $verbose;
  $opts{'verify'} = $verify;
  $opts{'user-agent'} = "$client_name/$version";
  $opts{'enable_compression'} = 1 if ($enable_compression);
  $opts{'aws-iam-role'} = $aws_iam_role;

  my ($code, $reply) = CloudWatchClient::call(\%params, \%opts);

  if ($code == 200 && !$from_cron) {
    if ($verify) {
      print "\nVerification completed successfully. No actual metrics sent to CloudWatch.\n\n";
    } else {
      print "\nSuccessfully reported metrics to CloudWatch. Reference Id: $reply\n\n";
    }
  }
  elsif ($code < 100) {
    exit_with_error("Failed to initialize: $reply");
  }
  elsif ($code != 200) {
    exit_with_error("Failed to call CloudWatch: HTTP $code. Message: $reply");
  }
}
else {
  exit_with_error("No metrics prepared for submission to CloudWatch.");
}

exit 0;
#
# Prints out or logs an error and then exits.
#
sub exit_with_error
{
  my $message = shift;
  report_message(LOG_ERR, $message);

  if (!$from_cron) {
    print STDERR "\nFor more information, run 'mon-put-instance-data.pl --help'\n\n";
  }

  exit 1;
}

#
# Prints out or logs a message.
#
sub report_message
{
  my $log_level = shift;
  my $message = shift;
  chomp $message;

  if ($from_cron)
  {
    setlogsock('unix');
    openlog($client_name, 'nofatal', LOG_USER);
    syslog($log_level, $message);
    closelog;
  }
  elsif ($log_level == LOG_ERR) {
    print STDERR "\nERROR: $message\n";
  }
  elsif ($log_level == LOG_WARNING) {
    print "\nWARNING: $message\n";
  }
  elsif ($log_level == LOG_INFO) {
    print "\nINFO: $message\n";
  }
}

#
# Adds one metric to the CloudWatch request.
#
sub add_single_metric
{
  my $name = shift;
  my $unit = shift;
  my $value = shift;
  my $mcount = shift;
  my $dims = shift;
  my $dcount = 0;

  $params{"MetricData.member.$mcount.MetricName"} = $name;
  $params{"MetricData.member.$mcount.Timestamp"} = $timestamp;
  $params{"MetricData.member.$mcount.Value"} = $value;
  $params{"MetricData.member.$mcount.Unit"} = $unit;

  foreach my $key (sort keys %$dims)
  {
    ++$dcount;
    $params{"MetricData.member.$mcount.Dimensions.member.$dcount.Name"} = $key;
    $params{"MetricData.member.$mcount.Dimensions.member.$dcount.Value"} = $dims->{$key};
  }
}

#
# Adds a metric and its aggregated clones to the CloudWatch request.
#
sub add_metric
{
  my $name = shift;
  my $unit = shift;
  my $value = shift;
  my $filesystem = shift;
  my $mount = shift;
  my $dcount = 0;

  my %dims = ();
  my %xdims = ();
  $xdims{'MountPath'} = $mount if $mount;
  $xdims{'Filesystem'} = $filesystem if $filesystem;

  my $auto_scaling_only = defined($auto_scaling) && $auto_scaling == 2;
  my $aggregated_only = defined($aggregated) && $aggregated == 2;

  if (!$auto_scaling_only && !$aggregated_only) {
    %dims = (('InstanceId' => $instance_id), %xdims);
    add_single_metric($name, $unit, $value, ++$mcount, \%dims);
  }
  if ($as_group_name) {
    %dims = (('AutoScalingGroupName' => $as_group_name), %xdims);
    add_single_metric($name, $unit, $value, ++$mcount, \%dims);
  }
  if ($instance_type) {
    %dims = (('InstanceType' => $instance_type), %xdims);
    add_single_metric($name, $unit, $value, ++$mcount, \%dims);
  }
  if ($image_id) {
    %dims = (('ImageId' => $image_id), %xdims);
    add_single_metric($name, $unit, $value, ++$mcount, \%dims);
  }
  if ($aggregated) {
    %dims = %xdims;
    add_single_metric($name, $unit, $value, ++$mcount, \%dims);
  }

  print "$name [$mount]: $value ($unit)\n" if ($verbose && $mount);
  print "$name: $value ($unit)\n" if ($verbose && !$mount);
}
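Once this script is running from cron, you can check that the custom metrics are actually arriving in the 'dockerWatchTest-env' namespace before wiring up the dashboard below. A minimal Ruby sketch using the same aws-sdk (v1) gem as the lib classes above; the credentials and region are placeholders:

require 'aws-sdk'

cw = AWS::CloudWatch::Client.new(
  access_key_id: "YOUR_KEY_ID",
  secret_access_key: "YOUR_SECRET",
  region: "ap-southeast-2")

# List whatever mon-put-instance-data.pl has published into the custom namespace.
resp = cw.list_metrics(namespace: "dockerWatchTest-env")
resp[:metrics].each do |m|
  puts "#{m[:metric_name]} #{m[:dimensions].inspect}"
end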
<div class="gridster">
  <ul>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      Cashbook - Api
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3256023_rpm_throughput" data-view="Meter" data-title="RPM" data-min="0" data-max="1000"></div>
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3256023_rpm_response_time" data-view="Meter" data-title="RSP" data-min="0" data-max="5000"></div>
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3256023_rpm_error_rate" data-view="Number" data-title="ERR" data-prefix="%"></div>
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      Cashbook - Banking
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3560345_rpm_throughput" data-view="Meter" data-title="RPM" data-min="0" data-max="1000"></div>
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3560345_rpm_response_time" data-view="Meter" data-title="RSP" data-min="0" data-max="5000"></div>
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3560345_rpm_error_rate" data-view="Number" data-title="ERR" data-prefix="%"></div>
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      Cashbook - GL
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3256084_rpm_throughput" data-view="Meter" data-title="RPM" data-min="0" data-max="1000"></div>
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3256084_rpm_response_time" data-view="Meter" data-title="RSP" data-min="0" data-max="5000"></div>
    </li>
    <li data-row="1" data-col="1" data-sizex="1" data-sizey="1">
      <div data-id="3256084_rpm_error_rate" data-view="Number" data-title="ERR" data-prefix="%"></div>
    </li>
  </ul>
</div>

<div class="gridster">
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="ec2-cpu" data-view="Rickshawgraph" data-title="CPU Usage"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="50"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="ec2-netin" data-view="Rickshawgraph" data-title="Network In"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="10000000"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="ec2-netout" data-view="Rickshawgraph" data-title="Network Out"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="10000000"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="ec2-diskrio" data-view="Rickshawgraph" data-title="Disk RIO"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="100"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="ec2-diskwio" data-view="Rickshawgraph" data-title="Disk WIO"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="100"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="ec2-diskrb" data-view="Rickshawgraph" data-title="Disk RB"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="100000"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="ec2-diskwb" data-view="Rickshawgraph" data-title="Disk WB"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="100000"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="docker-mem" data-view="Rickshawgraph" data-title="Docker Memory Usage"
           data-moreinfo=""
           data-renderer="bar"
           data-min="0"
           data-max="50"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="docker-usercpu" data-view="Rickshawgraph" data-title="Docker User Cpu"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="100"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
  <ul>
    <li data-row="11" data-col="1" data-sizex="3" data-sizey="2">
      <div data-id="docker-syscpu" data-view="Rickshawgraph" data-title="Docker System Cpu"
           data-moreinfo=""
           data-renderer="line"
           data-min="0"
           data-max="100"
           data-summary-method="none"
           data-legend="true"
           data-colors="rgba(192,132,255,1):rgba(96,170,255,1)"
           style="background-color:#333A52;"
           data-color-scheme="rainbow"
      ></div>
    </li>
  </ul>
</div>
# jobs/newrelic_rpm.rb
require 'newrelic_api'

# New Relic API key
key = ''

# IDs of the monitored applications
app_ids = ['', '', '']

# Emitted metrics:
#   - rpm_apdex
#   - rpm_error_rate
#   - rpm_throughput
#   - rpm_errors
#   - rpm_response_time
#   - rpm_db
#   - rpm_cpu
#   - rpm_memory

NewRelicApi.api_key = key

newrelic_account = NewRelicApi::Account.find(:first)
newrelic_apps = newrelic_account.applications.select do |app|
  app_ids.include? app.id.to_s if app_ids
end

SCHEDULER.every '10s', first_in: 0 do |job|
  newrelic_apps.each do |newrelicapp|
    newrelicapp.threshold_values.each do |v|
      underscored_name = v.name.downcase.gsub(' ', '_')
      event_name = sprintf('%s_rpm_%s', newrelicapp.id, underscored_name)
      send_event(event_name, {value: v.metric_value, current: v.metric_value})
    end
  end
end if app_ids
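Each event is named "<application id>_rpm_<threshold value name>", which is exactly what the data-id attributes on the Meter and Number widgets in the dashboard above bind to. An illustrative trace (the value 523 is made up; app id 3256023 comes from the dashboard above):

# A threshold value named "Throughput" on application 3256023 becomes:
#   send_event("3256023_rpm_throughput", {value: 523, current: 523})
# ...which drives the widget declared as:
#   <div data-id="3256023_rpm_throughput" data-view="Meter" ...></div>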
# widgets/rickshawgraph/rickshawgraph.coffee
# Rickshawgraph v0.1.0
class Dashing.Rickshawgraph extends Dashing.Widget

  # SI-style magnitude divisors (K = 1e3 ... Y = 1e24).
  DIVISORS = [
    {number: 1000000000000000000000000, label: 'Y'},
    {number: 1000000000000000000000, label: 'Z'},
    {number: 1000000000000000000, label: 'E'},
    {number: 1000000000000000, label: 'P'},
    {number: 1000000000000, label: 'T'},
    {number: 1000000000, label: 'G'},
    {number: 1000000, label: 'M'},
    {number: 1000, label: 'K'}
  ]

  # Take a long number like "2356352" and turn it into "2.4M"
  formatNumber = (number) ->
    for divisor in DIVISORS
      if number > divisor.number
        number = "#{Math.round(number / (divisor.number/10))/10}#{divisor.label}"
        break
    return number
  getRenderer: () -> return @get('renderer') or @get('graphtype') or 'area'

  # Retrieve the `current` value of the graph.
  @accessor 'current', ->
    answer = null

    # Return the value supplied if there is one.
    if @get('displayedValue') != null and @get('displayedValue') != undefined
      answer = @get('displayedValue')

    if answer == null
      # Compute a value to return based on the summaryMethod
      series = @_parseData {points: @get('points'), series: @get('series')}
      if !(series?.length > 0)
        # No data in series
        answer = ''
      else
        switch @get('summaryMethod')
          when "sum"
            answer = 0
            answer += (point?.y or 0) for point in s.data for s in series

          when "sumLast"
            answer = 0
            answer += s.data[s.data.length - 1].y or 0 for s in series

          when "highest"
            answer = 0
            if @get('unstack') or (@getRenderer() is "line")
              answer = Math.max(answer, (point?.y or 0)) for point in s.data for s in series
            else
              # Compute the sum of values at each point along the graph
              for index in [0...series[0].data.length]
                value = 0
                for s in series
                  value += s.data[index]?.y or 0
                answer = Math.max(answer, value)

          when "none"
            answer = ''

          else
            # Otherwise if there's only one series, pick the most recent value from the series.
            if series.length == 1 and series[0].data?.length > 0
              data = series[0].data
              answer = data[data.length - 1].y
            else
              # Otherwise just return nothing.
              answer = ''

    answer = formatNumber answer
    return answer
  ready: ->
    @assignedColors = @get('colors').split(':') if @get('colors')
    @strokeColors = @get('strokeColors').split(':') if @get('strokeColors')
    @graph = @_createGraph()
    @graph.render()

  clear: ->
    # Remove the old graph/legend if there is one.
    $node = $(@node)
    $node.find('.rickshaw_graph').remove()
    if @$legendDiv
      @$legendDiv.remove()
      @$legendDiv = null

  # Handle new data from Dashing.
  onData: (data) ->
    series = @_parseData data

    if @graph
      # Remove the existing graph if the number of series has changed or any names have changed.
      needClear = false
      needClear |= (series.length != @graph.series.length)
      if @get("legend") then for subseries, index in series
        needClear |= @graph.series[index]?.name != series[index]?.name

      if needClear then @graph = @_createGraph()

      # Copy over the new graph data
      for subseries, index in series
        @graph.series[index] = subseries

      @graph.render()
  # Create a new Rickshaw graph.
  _createGraph: ->
    $node = $(@node)
    $container = $node.parent()

    @clear()

    # Gross hacks. Let's fix this.
    width = (Dashing.widget_base_dimensions[0] * $container.data("sizex")) + Dashing.widget_margins[0] * 2 * ($container.data("sizex") - 1)
    height = (Dashing.widget_base_dimensions[1] * $container.data("sizey"))

    if @get("legend")
      # Shave 20px off the bottom of the graph for the legend
      height -= 20

    $graph = $("<div style='height: #{height}px;'></div>")
    $node.append $graph

    series = @_parseData {points: @get('points'), series: @get('series')}

    graphOptions = {
      element: $graph.get(0),
      renderer: @getRenderer(),
      width: width,
      height: height,
      series: series
    }

    if !!@get('stroke') then graphOptions.stroke = true
    if @get('min') != null then graphOptions.min = @get('min')
    if @get('max') != null then graphOptions.max = @get('max')

    try
      graph = new Rickshaw.Graph graphOptions
    catch err
      if err.toString() is "x and y properties of points should be numbers instead of number and object"
        # This will happen with older versions of Rickshaw that don't support nulls in the data set.
        nullsFound = false
        for s in series
          for point in s.data
            if point.y is null
              nullsFound = true
              point.y = 0

        if nullsFound
          # Try to create the graph again now that we've patched up the data.
          graph = new Rickshaw.Graph graphOptions
          if !@rickshawVersionWarning
            console.log "#{@get 'id'} - Nulls were found in your data, but Rickshaw didn't like" +
              " them. Consider upgrading your rickshaw to 1.4.3 or higher."
            @rickshawVersionWarning = true
        else
          # No nulls were found - this is some other problem, so just re-throw the exception.
          throw err

    graph.renderer.unstack = !!@get('unstack')

    xAxisOptions = {
      graph: graph
    }
    if Rickshaw.Fixtures.Time.Local
      xAxisOptions.timeFixture = new Rickshaw.Fixtures.Time.Local()
    x_axis = new Rickshaw.Graph.Axis.Time xAxisOptions
    y_axis = new Rickshaw.Graph.Axis.Y(graph: graph, tickFormat: Rickshaw.Fixtures.Number.formatKMBT)

    if @get("legend")
      # Add a legend
      @$legendDiv = $("<div style='width: #{width}px;'></div>")
      $node.append(@$legendDiv)
      legend = new Rickshaw.Graph.Legend {
        graph: graph
        element: @$legendDiv.get(0)
      }

    return graph
  # Parse a {series, points} object with new data from Dashing.
  #
  _parseData: (data) ->
    series = []

    # Figure out what kind of data we've been passed
    if data.series
      dataSeries = if isString(data.series) then JSON.parse data.series else data.series
      for subseries, index in dataSeries
        try
          series.push @_parseSeries subseries
        catch err
          console.log "Error while parsing series: #{err}"

    else if data.points
      points = data.points
      if isString(points) then points = JSON.parse points
      if points[0]? and !points[0].x?
        # Not already in Rickshaw format; assume graphite data
        points = graphiteDataToRickshaw(points)
      series.push {data: points}

    if series.length is 0
      # No data - create a dummy series to keep Rickshaw happy
      series.push {data: [{x:0, y:0}]}

    @_updateColors(series)

    # Fix any missing data in the series.
    if Rickshaw.Series.fill then Rickshaw.Series.fill(series, null)

    return series

  # Parse a series of data from an array passed to `_parseData()`.
  # This accepts both Graphite and Rickshaw style data sets.
  _parseSeries: (series) ->
    if series?.datapoints?
      # This is a Graphite series
      answer = {
        name: series.target
        data: graphiteDataToRickshaw series.datapoints
        color: series.color
        stroke: series.stroke
      }
    else if series?.data?
      # Rickshaw data. Need to clone, otherwise we could end up with multiple graphs sharing
      # the same data, and Rickshaw really doesn't like that.
      answer = {
        name: series.name
        data: series.data
        color: series.color
        stroke: series.stroke
      }
    else if !series
      throw new Error("No data received for #{@get 'id'}")
    else
      throw new Error("Unknown data for #{@get 'id'}. series: #{series}")

    answer.data.sort (a,b) -> a.x - b.x

    return answer

  # Update the color assignments for a series. This will assign colors to any data that
  # doesn't have a color already.
  _updateColors: (series) ->
    # If no colors were provided, or if there aren't enough colors, then generate a set of
    # colors to use.
    if !@defaultColors or @defaultColors?.length != series.length
      @defaultColors = computeDefaultColors @, @node, series

    for subseries, index in series
      # Preferentially pick supplied colors instead of defaults, but don't overwrite a color
      # if one was supplied with the data.
      subseries.color ?= @assignedColors?[index] or @defaultColors[index]
      subseries.stroke ?= @strokeColors?[index] or "#000"
# Convert a collection of Graphite data points into data that Rickshaw will understand.
graphiteDataToRickshaw = (datapoints) ->
  answer = []
  for datapoint in datapoints
    # Need to convert potential nulls from Graphite into a real number for Rickshaw.
    answer.push {x: datapoint[1], y: (datapoint[0] or 0)}
  answer

# Compute a pleasing set of default colors. This works by starting with the background color,
# and picking colors of intermediate luminance between the background and white (or the
# background and black, for light colored backgrounds.) We use the brightest color for the
# first series, because then multiple series will appear to blend in to the background.
computeDefaultColors = (self, node, series) ->
  defaultColors = []

  # Use a neutral color if we can't get the background-color for some reason.
  backgroundColor = parseColor($(node).css('background-color')) or [50, 50, 50, 1.0]
  hsl = rgbToHsl backgroundColor
  alpha = if self.get('defaultAlpha')? then self.get('defaultAlpha') else 1

  if self.get('colorScheme') in ['rainbow', 'near-rainbow']
    saturation = (interpolate hsl[1], 1.0, 3)[1]
    luminance = if (hsl[2] < 0.6) then 0.7 else 0.3
    hueOffset = 0

    if self.get('colorScheme') is 'rainbow'
      # Note the first and last values in `hues` will both have the same hue as the background,
      # hence the + 2.
      hues = interpolate hsl[0], hsl[0] + 1, (series.length + 2)
      hueOffset = 1
    else
      hues = interpolate hsl[0] - 0.25, hsl[0] + 0.25, series.length

    for hue, index in hues
      if hue > 1 then hues[index] -= 1
      if hue < 0 then hues[index] += 1

    for index in [0...series.length]
      defaultColors[index] = rgbToColor hslToRgb([hues[index + hueOffset], saturation, luminance, alpha])
  else
    hue = if self.get('colorScheme') is "compliment" then hsl[0] + 0.5 else hsl[0]
    if hsl[0] > 1 then hsl[0] -= 1

    saturation = hsl[1]
    saturationSource = if (saturation < 0.6) then 0.7 else 0.3
    saturations = interpolate saturationSource, saturation, (series.length + 1)

    luminance = hsl[2]
    luminanceSource = if (luminance < 0.6) then 0.9 else 0.1
    luminances = interpolate luminanceSource, luminance, (series.length + 1)

    for index in [0...series.length]
      defaultColors[index] = rgbToColor hslToRgb([hue, saturations[index], luminances[index], alpha])

  return defaultColors
# Helper functions
# ================

isString = (obj) ->
  return toString.call(obj) is "[object String]"

# Parse a `rgb(x,y,z)` or `rgba(x,y,z,a)` string.
parseRgbaColor = (colorString) ->
  match = /^rgb\(\s*([\d]+)\s*,\s*([\d]+)\s*,\s*([\d]+)\s*\)/.exec(colorString)
  if match
    return [parseInt(match[1]), parseInt(match[2]), parseInt(match[3]), 1.0]

  # The alpha component may be fractional (e.g. "0.5"), so allow a decimal point
  # and parse it as a float.
  match = /^rgba\(\s*([\d]+)\s*,\s*([\d]+)\s*,\s*([\d]+)\s*,\s*([\d.]+)\s*\)/.exec(colorString)
  if match
    return [parseInt(match[1]), parseInt(match[2]), parseInt(match[3]), parseFloat(match[4])]

  return null

# Parse a color string as RGBA
parseColor = (colorString) ->
  answer = null

  # Try to use the browser to parse the color for us.
  div = document.createElement('div')
  div.style.color = colorString
  if div.style.color
    answer = parseRgbaColor div.style.color

  if !answer
    match = /^#([\da-fA-F]{2})([\da-fA-F]{2})([\da-fA-F]{2})/.exec(colorString)
    if match then answer = [parseInt(match[1], 16), parseInt(match[2], 16), parseInt(match[3], 16), 1.0]

  if !answer
    match = /^#([\da-fA-F])([\da-fA-F])([\da-fA-F])/.exec(colorString)
    if match then answer = [parseInt(match[1], 16) * 0x11, parseInt(match[2], 16) * 0x11, parseInt(match[3], 16) * 0x11, 1.0]

  if !answer then answer = parseRgbaColor colorString

  return answer

# Convert an RGB or RGBA color to a CSS color.
rgbToColor = (rgb) ->
  if !(3 of rgb) or (rgb[3] == 1.0)
    return "rgb(#{rgb[0]},#{rgb[1]},#{rgb[2]})"
  else
    return "rgba(#{rgb[0]},#{rgb[1]},#{rgb[2]},#{rgb[3]})"

# Returns an array of size `steps`, where the first value is `source`, the last value is `dest`,
# and the intervening values are interpolated. If steps < 2, then returns `[dest]`.
#
interpolate = (source, dest, steps) ->
  if steps < 2
    answer = [dest]
  else
    stepSize = (dest - source) / (steps - 1)
    answer = (num for num in [source..dest] by stepSize)
    # Rounding errors can cause us to drop the last value
    if answer.length < steps then answer.push dest
  return answer
# Adapted from http://axonflux.com/handy-rgb-to-hsl-and-rgb-to-hsv-color-model-c
#
# Converts an RGBA color value to HSLA. Conversion formula
# adapted from http://en.wikipedia.org/wiki/HSL_color_space.
# Assumes r, g, and b are contained in the set [0, 255] and
# a in [0, 1]. Returns h, s, l, a in the set [0, 1].
#
# Returns the HSLA representation as an array.
rgbToHsl = (rgba) ->
  [r,g,b,a] = rgba
  r /= 255
  g /= 255
  b /= 255
  max = Math.max(r, g, b)
  min = Math.min(r, g, b)
  l = (max + min) / 2

  if max == min
    h = s = 0 # achromatic
  else
    d = max - min
    s = if l > 0.5 then d / (2 - max - min) else d / (max + min)
    switch max
      when r then h = (g - b) / d + (if g < b then 6 else 0)
      when g then h = (b - r) / d + 2
      when b then h = (r - g) / d + 4
    h /= 6

  return [h, s, l, a]

# Adapted from http://axonflux.com/handy-rgb-to-hsl-and-rgb-to-hsv-color-model-c
#
# Converts an HSLA color value to RGBA. Conversion formula
# adapted from http://en.wikipedia.org/wiki/HSL_color_space.
# Assumes h, s, l, and a are contained in the set [0, 1] and
# returns r, g, and b in the set [0, 255] and a in [0, 1].
#
# Returns the RGBA representation as an array.
hslToRgb = (hsla) ->
  [h,s,l,a] = hsla

  if s is 0
    r = g = b = l # achromatic
  else
    hue2rgb = (p, q, t) ->
      if (t < 0) then t += 1
      if (t > 1) then t -= 1
      if (t < 1/6) then return p + (q - p) * 6 * t
      if (t < 1/2) then return q
      if (t < 2/3) then return p + (q - p) * (2/3 - t) * 6
      return p

    q = if l < 0.5 then l * (1 + s) else l + s - l * s
    p = 2 * l - q
    r = hue2rgb(p, q, h + 1/3)
    g = hue2rgb(p, q, h)
    b = hue2rgb(p, q, h - 1/3)

  return [Math.round(r * 255), Math.round(g * 255), Math.round(b * 255), a]
<h1 class="title" data-bind="title"></h1>
<h2 class="value" data-bind="current | prepend prefix"></h2>
<p class="more-info" data-bind="moreinfo"></p>
// ----------------------------------------------------------------------------
// Sass declarations
// ----------------------------------------------------------------------------
$background-color: #dc5945;

$title-color: rgba(255, 255, 255, 0.7);
$moreinfo-color: rgba(255, 255, 255, 0.5);
$tick-color: rgba(255, 255, 255, 0.4);

// ----------------------------------------------------------------------------
// Widget-graph styles
// ----------------------------------------------------------------------------
.widget-rickshawgraph {
  background-color: $background-color;
  position: relative;

  .rickshaw_graph {
    position: absolute;
    right: 0px;
    top: 0px;
  }

  svg {
    position: absolute;
    left: 0px;
    top: 0px;
  }

  .title, .value {
    position: relative;
    z-index: 99;
  }

  .title {
    color: $title-color;
  }

  .more-info {
    color: $moreinfo-color;
    font-weight: 600;
    font-size: 20px;
    margin-top: 0;
  }

  .x_tick {
    position: absolute;
    bottom: 0;

    .title {
      font-size: 20px;
      color: $tick-color;
      opacity: 0.8;
      padding-bottom: 30px;
    }
  }

  .y_ticks {
    font-size: 20px;
    fill: $tick-color;

    text {
      opacity: 0.8;
    }
  }

  .domain {
    display: none;
  }

  .rickshaw_legend {
    position: absolute;
    left: 120px;
    bottom: 0px;
    white-space: nowrap;
    overflow-x: hidden;
    font-size: 15px;
    height: 20px;

    ul {
      margin: 0;
      padding: 0;
      list-style-type: none;
      text-align: center;
    }

    ul li {
      display: inline;
    }

    .swatch {
      display: inline-block;
      width: 14px;
      height: 14px;
      margin-left: 5px;
    }

    .label {
      display: inline-block;
      margin-left: 5px;
    }
  }
}