Source: https://til.simonwillison.net/sqlite/one-line-csv-operations
sqlite3 \
:memory: \
-cmd '.mode csv' \
-cmd '.import taxi.csv taxi' \
'SELECT passenger_count, COUNT(*), AVG(total_amount) FROM taxi GROUP BY passenger_count'

gitea-1 | 2024/11/15 07:58:19 ...eb/routing/logger.go:102:func1() [I] router: completed GET / for 192.168.65.1:60979, 200 OK in 7.5ms @ web/home.go:32(web.Home)
gitea-1 | 2024/11/15 07:58:20 ...eb/routing/logger.go:102:func1() [I] router: completed GET /user/login?redirect_to=%2f for 192.168.65.1:60979, 200 OK in 8.5ms @ auth/auth.go:164(auth.SignIn)
gitea-1 | 2024/11/15 07:58:28 ...eb/routing/logger.go:102:func1() [I] router: completed POST /user/login for 192.168.65.1:60979, 303 See Other in 52.2ms @ auth/auth.go:196(auth.SignInPost)
gitea-1 | 2024/11/15 07:58:28 ...activities/action.go:207:loadRepo() [E] repo_model.GetRepositoryByID(4): repository does not exist [id: 4, uid: 0, owner_name: , name: ]
gitea-1 | 2024/11/15 07:58:28 .../context_response.go:88:HTML() [E] Render failed: failed to render template: user/dashboard/dashboard, error: template error: builtin(bindata):user/dashboard/feeds:20:45 : executing "user/dashboard/feeds" at <.GetR
#!/usr/bin/env bash
# Print all Airflow URLs you have access to with their environment name
for env in $(aws mwaa list-environments | jq -r '.Environments[]'); do
  echo "Env: $env - url: https://$(aws mwaa get-environment --name $env | jq -r '.Environment.WebserverUrl')"
done
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
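A quick back-of-envelope sketch using the table above (the 1 GB figure is extrapolated from the 1 MB number, not part of the table itself):

```shell
# Extrapolating: reading 1 GB sequentially from memory
# at 250,000 ns per MB.
ns_per_mb=250000
mb_per_gb=1024
total_ns=$((ns_per_mb * mb_per_gb))
echo "1 GB sequential memory read ~ ${total_ns} ns = $((total_ns / 1000000)) ms"
```

That works out to roughly a quarter of a second per gigabyte, which is why these numbers are worth keeping in your head when sizing buffers and batch jobs.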
# Pass the env-vars to MYCOMMAND
eval $(egrep -v '^#' .env | xargs) MYCOMMAND
# ... or ...
# Export the vars in .env into your shell:
export $(egrep -v '^#' .env | xargs)
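A quick sanity check of the export form, using a throwaway .env (the file path and variable names here are invented for the demo):

```shell
# Demo .env: comment lines are dropped by `egrep -v '^#'`
cat > /tmp/demo.env <<'EOF'
# this comment line is skipped
APP_NAME=demo
APP_PORT=8080
EOF
export $(egrep -v '^#' /tmp/demo.env | xargs)
echo "$APP_NAME listens on $APP_PORT"
```

Note this simple pattern breaks on values containing spaces or quotes; for those, a proper dotenv loader is safer.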
Each of these commands will run an ad hoc http static server in your current (or specified) directory, available at http://localhost:8000. Use this power wisely.
$ python -m SimpleHTTPServer 8000
$ python3 -m http.server 8000

#!/bin/bash
###
### my-script — does one thing well
###
### Usage:
###   my-script <input> <output>
###
### Options:
###   <input>   Input file to read.
###   <output>  Output file to write. Use '-' for stdout.
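One payoff of the `###` comment convention is that a script can print its own help block with a one-line `sed`. The `-h` handling below is a sketch of that common companion trick, not part of the template above:

```shell
# Write a minimal script using the '###' help convention, then run it.
cat > /tmp/my-script <<'EOF'
#!/bin/bash
###
### my-script — does one thing well
###
### Usage:
###   my-script <input> <output>
###
# Print every '###' line, stripping the marker, when asked for help:
case "$1" in
  -h|--help) sed -n 's/^### \?//p' "$0"; exit 0;;
esac
EOF
chmod +x /tmp/my-script
/tmp/my-script -h
```

Because the help text lives in comments, it can never drift out of the script file itself.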
# https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html
# .ebextensions/01-memorymon.config
packages:
  yum:
    perl-Switch: []
    perl-DateTime: []
    perl-Sys-Syslog: []
    perl-LWP-Protocol-https: []
    perl-Digest-SHA.x86_64: []
# Show stats
sudo du -d1 -h /var/lib/docker | sort -h
# Cleanup dangling images
sudo docker rmi $(sudo docker images --filter "dangling=true" -q --no-trunc)
# Remove all containers (running ones are skipped with an error)
sudo docker ps -a | awk 'NR>1 {print $1}' | xargs sudo docker rm
# Cleanup dangling volumes
sudo docker volume rm $(sudo docker volume ls -qf dangling=true)