Both command-line utilities make it easy to produce JSON output from a shell.
#!/bin/bash
# prereq `brew install coreutils jo`
defcount=1
display_usage() {
  echo -e "Generate NUMBER of random json objects."
  printf "  Usage: %s [NUMBER] (DEFAULT: %s)\n\n" "$(basename "$0")" "$defcount"
}
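The body of the generator is not shown above, but it can be sketched roughly as follows. The function name `gen_objects` and the `{"id":…,"value":…}` object shape are illustrative assumptions, not from the original script; plain printf is used here so the sketch runs even without jo installed.

```shell
# Hypothetical generator body: emit NUMBER random JSON objects, one per line.
# With jo installed, the printf line could instead be: jo id="$i" value="$RANDOM"
gen_objects() {
  local count="${1:-1}" i
  for ((i = 1; i <= count; i++)); do
    printf '{"id":%d,"value":%d}\n' "$i" "$((RANDOM % 100))"
  done
}

gen_objects 3
```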
RFC 5424 format:
<PRI>VER TIMESTAMP HOSTNAME APP-NAME PROCID MSGID [SOURCETYPE@NM_IANA key1="val1" key2="val2" etc.]
Example:
<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su ID47 d25c5bf1 - BOM'su root' failed for lonvick on /dev/pts/8
Legacy (RFC 3164-style) format:
TIMESTAMP HOSTNAME APP-NAME[PROCID]: sourcetype="SOURCETYPE" key1="val1" key2="val2" etc.
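The `<PRI>` field packs two values into one number: PRI = facility * 8 + severity. A quick shell check for the `<34>` in the example above:

```shell
# Decode a syslog PRI value into facility and severity.
# PRI = facility * 8 + severity, so 34 -> facility 4 (auth), severity 2 (critical).
decode_pri() {
  echo "facility=$(( $1 / 8 )) severity=$(( $1 % 8 ))"
}

decode_pri 34
```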
KERL_DEFAULT_INSTALL_DIR=/opt/erlang
KERL_CONFIGURE_OPTIONS="--enable-debug --without-javac --enable-shared-zlib --enable-dynamic-ssl-lib --disable-hipe --enable-smp-support --enable-threads --enable-kernel-poll --with-wx --with-ssl=/usr/local/opt/openssl"
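With those variables set, a typical kerl cycle looks roughly like this. The release number 26.2 is only an example; kerl applies KERL_CONFIGURE_OPTIONS at build time and KERL_DEFAULT_INSTALL_DIR at install time.

```shell
# Sketch of a kerl build/install cycle using the settings above (release is an example).
kerl update releases          # refresh the list of available releases
kerl build 26.2 26.2          # build with KERL_CONFIGURE_OPTIONS applied
kerl install 26.2             # lands under KERL_DEFAULT_INSTALL_DIR
. /opt/erlang/26.2/activate   # put this Erlang first in PATH
```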
Error: The operation couldn’t be completed. (com.apple.commerce.client error 500.)
Fix: defaults write com.apple.appstore.commerce Storefront -string "$(defaults read com.apple.appstore.commerce Storefront | sed s/,8/,13/)"
Ref: https://discussions.apple.com/thread/250154403?answerId=250306539022#250306539022
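The fix works by rewriting the trailing ",8" in the Storefront value to ",13". Shown here on a stand-in value; the `143441-1,8` string is illustrative, not read from a real machine:

```shell
# Same sed substitution as the fix above, applied to a sample Storefront string.
echo "143441-1,8" | sed 's/,8/,13/'
```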
FROM node:8.11-alpine
RUN npm install --global gitbook-cli && \
    gitbook fetch 3.2.2 && \
    gitbook install && \
    npm cache clear --force && \
    rm -rf /tmp/*
# Fixes https://github.com/GitbookIO/gitbook/issues/1309
RUN sed -i.bak 's/confirm: true/confirm: false/g' \
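The sed patch simply flips the `confirm` option from true to false; demonstrated on a sample line of input (the target file path is elided above):

```shell
# What the sed in the Dockerfile does, shown on a sample line of input.
echo "confirm: true," | sed 's/confirm: true/confirm: false/g'
```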
.DEFAULT_GOAL := check
.EXPORT_ALL_VARIABLES:
CURL_HOME = $(CURDIR)
JAR = $(CURDIR)/jar
DB = http://localhost:15984
ADM_CRD = '{"username":"admin", "password": "admin"}'
USR_CRD = '{"name":"eiri", "password": "eiri"}'
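The DB variable points at a local CouchDB node; a smoke test using it might look like this. This check is an assumption, not part of the original Makefile, and needs a running node to answer:

```shell
# Hypothetical smoke test against the DB value above:
# CouchDB answers /_up with {"status":"ok"} when the node is ready.
curl -s http://localhost:15984/_up
```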
Command line utility cheat
https://github.com/cheat/cheat is a kind of `man` alternative.
brew install cheat
mkdir -p ~/.config/cheat && cheat --init > ~/.config/cheat/conf.yml
Edit it to the following (a list of available styles is at https://github.com/alecthomas/chroma/tree/master/styles)
Grab a data file. "American movies scraped from Wikipedia" is a nice condensed set, about 3.4M in size.
wget https://raw.githubusercontent.com/prust/wikipedia-movie-data/master/movies.json -O movies.json
Slice out the movies made from 1920 through 1930 and output them line by line
jq -cr '.[] | select(.year >= 1920 and .year <= 1930)' movies.json
Pass the output to a reducer to group by year, calculate the total number of movies per year, and accumulate the movies into a "movies" array in each block
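One way to write such a reducer (a sketch, not necessarily the exact command this note settles on): slurp the line-by-line output with -s and fold it into per-year blocks with reduce. The {year, total, movies} output shape and the two sample input lines are illustrative assumptions.

```shell
# Fold line-by-line movie objects into per-year blocks (sample input shown).
printf '%s\n' '{"title":"A","year":1920}' '{"title":"B","year":1920}' |
jq -cs 'reduce .[] as $m ({};
  .[$m.year | tostring] |= {
    year: $m.year,
    total: ((.total // 0) + 1),
    movies: ((.movies // []) + [$m.title])
  })'
```

Piped after the select filter above instead of the sample printf, this yields one object keyed by year, each block carrying the running total and the accumulated titles.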