I hereby claim:
- I am jayswan on github.
- I am jayswan (https://keybase.io/jayswan) on keybase.
- I have a public key ASALSzDdZ0ZJ1qox8-iZ3GEAkO0YiMifm7ET6hpsMpsEkAo
To claim this, I am signing this object:
The White Rim Trail is a long 4x4 / moto / bike route in Canyonlands National Park near Moab, UT. Depending on where you start and end, it's anywhere from 90 to 105 miles. It's a classic mountain bike ride, usually done over 3 to 4 days with camping and vehicle support, but it's also done as a single-day marathon adventure ride. Camping permits are very difficult to get (typically reserved a year in advance) and guided tours are very expensive, so the single-day option is good if you're fit enough. The route is quite remote with no water available, but you'll typically see some motorcycles, bike tour groups, and sometimes a park ranger.
Splunk vs. ELK is a complicated comparison that depends on what you want to optimize for. Probably the biggest issue is the ecosystem around post-search data manipulation.
ES is amazing at searching for tokens and returning documents. The aggregations are also superb -- actually much faster than Splunk under most conditions. Plugins can extend that functionality. Stuff like fuzzy search, regex queries, indexed terms lookups, significant terms aggregations, and nested aggregations can be extremely powerful if you know how to use them well.
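To make that concrete, here's a sketch of a significant terms aggregation nested inside a terms aggregation; the index and field names (logs, host, uri) are made up for illustration:

# Hypothetical index and field names; adjust to your own mapping.
# Buckets documents by host, then surfaces terms in the uri field that are
# unusually frequent within each host relative to the index as a whole.
curl -s -XPOST 'localhost:9200/logs/_search?size=0' -d '{
  "aggs": {
    "by_host": {
      "terms": { "field": "host" },
      "aggs": {
        "unusual_uris": { "significant_terms": { "field": "uri" } }
      }
    }
  }
}'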
ES has a reputation for stability problems. These are mostly solvable by running an appropriately sized cluster with new versions and proper circuit breaker settings. Much of the FUD I've seen about this is incorrect, but the biggest problem remains that you can't kill a misbehaving query or constrain its resource use after it has started; if your circuit breakers aren't working correctly then you're out of luck.
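For what it's worth, the fielddata and request breakers can be tuned dynamically via the cluster settings API. A minimal sketch; the percentages are illustrative, not recommendations:

# Tighten the fielddata and request circuit breakers cluster-wide.
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "indices.breaker.fielddata.limit": "60%",
    "indices.breaker.request.limit": "40%"
  }
}'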
from __future__ import print_function
import os
import sys
from netmiko import ConnectHandler

# Credentials and the MAC to find come from environment variables.
target_mac = os.environ['TARGET_MAC']
router_ip = os.environ['ROUTER_IP']
router_user = os.environ['ROUTER_USER']
password = os.environ['ROUTER_PW']

# Assumed completion (the original snippet is truncated here): connect to a
# Cisco IOS router and grep its MAC address table for the target MAC.
conn = ConnectHandler(device_type='cisco_ios', ip=router_ip,
                      username=router_user, password=password)
print(conn.send_command('show mac address-table | include ' + target_mac))
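A possible invocation, with placeholder values (the script filename is hypothetical):

TARGET_MAC=0011.2233.4455 ROUTER_IP=192.0.2.1 ROUTER_USER=admin \
    ROUTER_PW=secret python find_mac.py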
#!/bin/sh
# Usage: some_command_that_outputs_usernames | uexists.sh
# subject to anonymous API rate limits
# Prints one HTTP status per input username: 200 means the GitHub user
# exists, 404 means it does not.
xargs -I {} curl -w "%{http_code}\n" -sI -o /dev/null https://api.github.com/users/{}
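For example, checking two names (the second is a placeholder that shouldn't exist), which should print something like 200 followed by 404:

printf 'jayswan\nno-such-user-xyz\n' | sh uexists.sh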
# Fastly
curl -s https://api.fastly.com/public-ip-list | jq -r '.addresses | .[]'

# Google
dig @8.8.8.8 +short txt _netblocks.google.com | awk '{gsub("ip4:","");for (col=2; col<NF;++col) print $col}'

# AWS
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | \
    jq --raw-output '.prefixes | map(.ip_prefix) | .[]'
This script allows you to do SQL GROUP BY-like aggregations on multiple fields in an Elasticsearch index.
Performance will likely be poor on large data sets.
Save the Groovy script as <elasticsearch_dir>/config/scripts/join-param-list.groovy:

// Join the values of the fields named in the "fields" script parameter,
// separated by the "delimiter" parameter.
return fields.collect { doc[it].value }.join(delimiter);
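You can then reference the file script from a terms aggregation. A sketch using the ES 2.x-era syntax and made-up index/field names (file scripts were removed in later ES versions, so adapt to your release):

curl -s -XPOST 'localhost:9200/myindex/_search?size=0' -d '{
  "aggs": {
    "multi_field_groups": {
      "terms": {
        "script": {
          "file": "join-param-list",
          "params": { "fields": ["src_ip", "dst_ip"], "delimiter": "|" }
        }
      }
    }
  }
}'

Each bucket key then comes back as the joined field values, e.g. 10.0.0.1|192.0.2.5.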
# Add additional JSON logging
module Log;

export {
    ## Enables JSON-logfiles for all active streams
    const enable_all_json = T &redef;

    ## Streams not to generate JSON-logfiles for
    const exclude_json: set[Log::ID] = { } &redef;

    ## Streams to generate JSON-logfiles for
    # Assumed completion: the original snippet is truncated after the doc comment above.
    const include_json: set[Log::ID] = { } &redef;
}
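To use it, load the script when invoking Bro; the script filename here is assumed:

bro -r trace.pcap json-logs.bro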