@c0psrul3 — created September 28, 2016
Logstash with Grok for Syslog forwarded messages

Forwarding logs to Logstash 1.4 with rsyslog (working nginx logs too)

Posted on April 13, 2014: http://capnjosh.com/blog/forwarding-logs-to-logstash-1-4-with-rsyslog-working-nginx-logs-too/

See also: http://grokdebug.herokuapp.com/

rsyslog comes with everything; it's fine. So how do you configure rsyslog to forward logs to a Logstash server in a way that is easy, repeatable, obvious, and extensible?

Here’s how:

(Note: I assume CentOS 6.5, that you installed Logstash from the RPM with "rpm -i …", and that nginx is installed on the machine sending logs.)

For each log file you want to show up in Logstash, create a file in /etc/rsyslog.d/, named with a ".conf" extension. Separate .conf files for each file (or service, such as nginx or httpd) make it easy to see at a glance what is being watched; they also set you up for easier management with something like Puppet, Salt Stack, or Ansible. Make each file look like this (the stuff after the "#" in each line is a note; remove it if you want):

This section can show up in all your rsyslog config files, so just leave it here:

```
$ModLoad imfile   # Load the imfile input module
$ModLoad imklog   # for reading kernel log messages
$ModLoad imuxsock # for reading local syslog messages
```

Watch /var/log/nginx/access.log:

```
$InputFileName /var/log/nginx/access.log  # can NOT use wildcards - this is where logstash-forwarder would be nice
$InputFileTag nginx-access:               # Logstash throws grok errors if the ":" is anywhere besides at the end; shows up as "Program" in Logstash
$InputFileStateFile state-nginx-access    # can be anything; unique id used by rsyslog
$InputRunFileMonitor
```

Here's a clean block, this time for the nginx error log.

Watch /var/log/nginx/error.log:

```
$InputFileName /var/log/nginx/error.log   # again, no wildcards
$InputFileTag nginx-error:
$InputFileStateFile state-nginx-error
$InputRunFileMonitor
```
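Blocks like the two above are easy to generate from a template, which is handy if you manage them with a config-management tool. A minimal sketch, writing to a temp dir for illustration (the file name 10-nginx-access.conf is my own convention; a real host would target /etc/rsyslog.d/):

```shell
# Generate a per-service rsyslog snippet like the ones above.
# Written to a temp dir here for illustration only.
tmpdir=$(mktemp -d)
cat > "$tmpdir/10-nginx-access.conf" <<'EOF'
$ModLoad imfile
$InputFileName /var/log/nginx/access.log
$InputFileTag nginx-access:
$InputFileStateFile state-nginx-access
$InputRunFileMonitor
EOF
# Quick sanity check: count the $Input... directives (expect 4)
grep -c '^\$Input' "$tmpdir/10-nginx-access.conf"
```

After dropping a new .conf file in place, restart rsyslog so it picks up the change.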

Then, to forward all your syslogs to your Logstash server, put the following in either /etc/rsyslog.conf or a separate file in /etc/rsyslog.d/:

```
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.

#$WorkDirectory /var/lib/rsyslog  # where to place spool files; keep this commented out for Ubuntu compatibility
$ActionQueueFileName fwdRule1     # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g      # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on    # save messages to disk on shutdown
$ActionQueueType LinkedList      # run asynchronously
$ActionResumeRetryCount -1       # infinite retries if host is down

# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.* @@<your-logstash-host>:5544  # two "@" signs tell rsyslog to use TCP; one "@" sign means UDP
```
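Before pointing rsyslog at Logstash, it can help to hand-craft the kind of RFC3164-style line rsyslog forwards and push it over TCP yourself. A minimal sketch; the host logstash.example.com is a placeholder, and the PRI value 134 (facility local0, severity info) is just an example:

```shell
# Build a syslog-style test line: <PRI>TIMESTAMP HOST TAG: message
msg="<134>$(LC_ALL=C date '+%b %e %H:%M:%S') $(hostname) nginx-access: test message"
echo "$msg"
# To actually send it to a listening Logstash syslog input over TCP:
#   echo "$msg" | nc logstash.example.com 5544
```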

Then, on your Logstash server, make sure you have the following in a file called something like "input_rsyslog.conf" in /etc/logstash/conf.d/. This makes Logstash cleanly parse and prep syslog-formatted messages; it also sets the value you put in $InputFileTag as the "program" field in the Logstash output. That way you can later add a filter that acts on, say, "program" == "nginx-access" logs, running the "message" field through, say, the geoip filter and the Apache combined-format grok pattern:

```
input {
  syslog {
    type => "syslog"
    port => 5544
  }
}
```

Add another file called something like "filter_rsyslog.conf" in /etc/logstash/conf.d/ and put this in (it cleans up syslog messages):

```
filter {
  if [type] == "syslog" {
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
```
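The date filter needs two patterns because syslog space-pads single-digit days of the month. GNU date (present on the assumed CentOS box) can show the two forms side by side:

```shell
# Single-digit day: space-padded, matches "MMM  d HH:mm:ss"
LC_ALL=C date -d '2014-04-03 10:00:00' '+%b %e %H:%M:%S'   # Apr  3 10:00:00
# Double-digit day: matches "MMM dd HH:mm:ss"
LC_ALL=C date -d '2014-04-13 10:00:00' '+%b %e %H:%M:%S'   # Apr 13 10:00:00
```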

Add another file called something like "filter_nginx-access.conf" to /etc/logstash/conf.d/ and put this in (it cleans up access logs and adds in geoip data from the built-in GeoLite database):

```
filter {
  if [program] == "nginx-access" {
    grok {
      match => [ "message", "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]
    }
    geoip {
      source => "remote_addr"
      database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
    }
  }
}
```
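For reference, here is the shape of combined-format line that grok pattern expects (the values are made up), with the first field pulled out by sed as a cheap stand-in for what grok captures as remote_addr:

```shell
# A sample combined-format access log line (fabricated values)
line='93.184.216.34 - frank [13/Apr/2014:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512 "-" "curl/7.29.0"'
# Strip everything after the first space: that leading field is remote_addr
echo "$line" | sed 's/ .*//'   # prints 93.184.216.34
```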

Add another file called something like "filter_nginx-error.conf" to /etc/logstash/conf.d/ and put this in (it cleans up nginx error logs; note the named capture groups, whose angle brackets tend to get eaten when this pattern is pasted into web pages):

```
filter {
  if [program] == "nginx-error" {
    grok {
      match => [ "message", "%{DATA} %{WORD:webserver} %{HOST:myhost}-%{WORD:class}: (?<timestamp>%{YEAR}[./-]%{MONTHNUM}[./-]%{MONTHDAY}[- ]%{TIME}) \[%{LOGLEVEL:severity}\] %{POSINT:pid}#%{NUMBER}: %{GREEDYDATA:errormessage}(?:, client: (?<client>%{IP}|%{HOSTNAME}))(?:, server: %{IPORHOST:server})(?:, request: %{QS:request})?(?:, host: %{QS:host})?(?:, referrer: \"%{URI:referrer})?" ]
    }
  }
}
```

Add another file called something like "filter_sshd_authentication_failure.conf" and put this in (it looks for authentication failures and adds in GeoIP info):

```
filter {
  if [program] == "sshd" and [message] =~ "Failed password" {
    grok {
      match => [ "message", "Failed password for root from %{IP:remote_addr}" ]
    }
    geoip {
      source => "remote_addr"
      database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
    }
  }
}
```
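Before restarting Logstash after adding filter files, a cheap sanity check is that the braces in each conf file balance. (Logstash itself can validate config with a --configtest flag; whether your 1.4 install has it is an assumption worth checking.) A minimal sketch against a throwaway file:

```shell
# Write a small filter snippet to a temp file and count braces
conf=$(mktemp)
printf 'filter {\n  if [program] == "sshd" {\n  }\n}\n' > "$conf"
opens=$(grep -o '{' "$conf" | wc -l)
closes=$(grep -o '}' "$conf" | wc -l)
echo "$opens $closes"   # equal counts mean the braces balance
```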

Notice the pattern here? Just add another filter file, input file, output file, etc. For custom grok stuff, check out this: http://grokdebug.herokuapp.com/
