$ brew install clamav
$ cd /usr/local/etc/clamav
$ cp freshclam.conf.sample freshclam.conf
Open freshclam.conf and comment out the "Example" line (in newer versions it may be "FooClam"):
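That edit can also be done non-interactively with sed. A sketch, demonstrated on a scratch file (point sed at your real freshclam.conf instead; `-i.bak` keeps a backup and works with both BSD and GNU sed):

```shell
# Demonstration on a scratch copy of the config
conf=$(mktemp)
printf 'Example\nLogVerbose yes\n' > "$conf"

# Comment out the "Example" marker line so freshclam will accept the config
sed -i.bak 's/^Example$/#Example/' "$conf"

grep '^#Example' "$conf"   # prints: #Example
```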
import re
from urllib.request import urlopen


def get_ip():
    """Return this machine's public IP address, or None if it can't be parsed."""
    d = str(urlopen("http://checkip.dyndns.com/").read())
    re_srch = re.compile(r"Address: (\d+\.\d+\.\d+\.\d+)").search(d)
    if re_srch:
        return re_srch.group(1)
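The parsing step can be checked without touching the network; the sample body below mimics the format checkip.dyndns.com returns (the address itself is a documentation value, not a real result):

```python
import re

# A response body in the shape the service returns
sample = "<html><body>Current IP Address: 203.0.113.7</body></html>"
match = re.search(r"Address: (\d+\.\d+\.\d+\.\d+)", sample)
print(match.group(1))  # → 203.0.113.7
```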
from typing import TypedDict, Union


# AWS Lambda Event Types
class ApiGatewayEvent(TypedDict):
    requestContext: dict[str, str]
    queryStringParameters: Union[dict[str, str], None]
    body: str
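A minimal handler using that event type; the handler, the sample event, and the name lookup are illustrative additions, not part of the original gist:

```python
from typing import TypedDict, Union


class ApiGatewayEvent(TypedDict):
    requestContext: dict[str, str]
    queryStringParameters: Union[dict[str, str], None]
    body: str


def handler(event: ApiGatewayEvent, context: object) -> dict:
    # queryStringParameters is None when the request carries no query string
    params = event["queryStringParameters"] or {}
    return {"statusCode": 200, "body": params.get("name", "world")}


event: ApiGatewayEvent = {
    "requestContext": {"requestId": "abc123"},
    "queryStringParameters": {"name": "chris"},
    "body": "",
}
print(handler(event, None))  # → {'statusCode': 200, 'body': 'chris'}
```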
#!/usr/bin/env bash
# Collect the names of every stream in a CloudWatch log group
LOG_GROUP_NAME=${1:?log group name is not set}
echo "Getting stream names..."
LOG_STREAMS=$(
  aws logs describe-log-streams \
    --log-group-name "${LOG_GROUP_NAME}" \
    --query 'logStreams[*].logStreamName' \
    --output table
)
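The gist stops after capturing the stream names. A plausible continuation (my sketch, not the original script) fetches each stream's events; note it uses `--output text`, which is easier to iterate over than the table format:

```shell
# Hypothetical helper: print the events of every stream in a log group.
dump_streams() {
  local group=$1
  local streams
  streams=$(aws logs describe-log-streams \
    --log-group-name "$group" \
    --query 'logStreams[*].logStreamName' \
    --output text)
  for stream in $streams; do
    aws logs get-log-events \
      --log-group-name "$group" \
      --log-stream-name "$stream" \
      --query 'events[*].message' \
      --output text
  done
}
```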
""" | |
Deletes old EC2 Snapshots created from the ConsistentSnapshot AWS RunCommand. | |
""" | |
import re | |
from datetime import datetime | |
from collections import defaultdict | |
from operator import itemgetter | |
import boto3 |
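Only the header of that script survives here. A minimal sketch of the deletion pass it implies; the 7-day retention, the description filter, and the client-as-parameter shape are my assumptions, not the original code:

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 7  # assumed retention window


def delete_old_snapshots(ec2, dry_run=True):
    """Delete snapshots the ConsistentSnapshot RunCommand created.

    `ec2` is a boto3 EC2 client, e.g. boto3.client("ec2").
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        # Only touch snapshots the RunCommand created, and only stale ones
        if ("ConsistentSnapshot" in snap.get("Description", "")
                and snap["StartTime"] < cutoff):
            ec2.delete_snapshot(SnapshotId=snap["SnapshotId"], DryRun=dry_run)
```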
sts-decode() {
  aws sts decode-authorization-message --encoded-message "$1" | jq '.DecodedMessage | fromjson'
}
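The jq step is needed because decode-authorization-message returns JSON whose DecodedMessage field is itself a JSON-encoded string; a quick demonstration with a canned payload (the payload is made up):

```shell
# DecodedMessage arrives as a string of JSON; fromjson unwraps it
echo '{"DecodedMessage": "{\"allowed\": false}"}' | jq '.DecodedMessage | fromjson'
```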
# Count lines of code for a given file type.
# git ls-files restricts the count to tracked files, so ignored and
# untracked files of the same type aren't included.
count() {
  git ls-files "*.$1" "**/*.$1" | xargs grep -H -c '[^[:space:]]' | sort -nr -t":" -k2 | less
}
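Because the pipeline ends in less, count is interactive; a variant without the pager (the name count_all is mine, not from the gist) is handier in scripts:

```shell
# Count non-blank lines per tracked file of a given extension, largest first
count_all() {
  git ls-files "*.$1" "**/*.$1" | xargs grep -H -c '[^[:space:]]' | sort -nr -t":" -k2
}
```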
On Tue Oct 27, 2015, history.state.gov began buckling under load, intermittently issuing 500 errors. Nginx's error log was sprinkled with the following errors:
2015/10/27 21:48:36 [crit] 2475#0: accept4() failed (24: Too many open files)
2015/10/27 21:48:36 [alert] 2475#0: *7163915 socket() failed (24: Too many open files) while connecting to upstream...
An article at http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/ provided directions that mostly worked. Below are the steps we followed. The steps that diverged from the article's directions are marked with an *.
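The per-process check described in the steps below can be scripted; a small sketch assuming the standard Linux /proc layout (the `[n]ginx` trick keeps grep from matching itself):

```shell
# Show each nginx process's open-file limit via /proc
for pid in $(ps aux | grep '[n]ginx' | awk '{print $2}'); do
  echo "== pid $pid =="
  grep 'Max open files' "/proc/$pid/limits"
done
```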
- To check the limits on the nginx account (instead of using su to run ulimit), use ps aux | grep nginx to locate nginx's process IDs, then query each process's file handle limits using cat /proc/pid/limits, where pid is a process ID from ps. (Note: sudo may be necessary for the cat command on your system.)
- Add fs.file-max = 70000 to /etc/sysctl.conf.

This only really works if you don't mind losing any other keys (than your own).
gpg -a --export [email protected] > chrisroos-public-gpg.key
gpg -a --export-secret-keys [email protected] > chrisroos-secret-gpg.key
gpg --export-ownertrust > chrisroos-ownertrust-gpg.txt
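Restoring on another machine is the mirror image; wrapped in a function here so the snippet stays self-contained (the flags are standard gpg options, the filenames come from the export above):

```shell
# Re-import the exported key material and trust database
restore_gpg_keys() {
  gpg --import chrisroos-public-gpg.key
  gpg --import chrisroos-secret-gpg.key
  gpg --import-ownertrust chrisroos-ownertrust-gpg.txt
}
```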