This focuses on generating self-signed certificates for local virtual hosts hosted on your computer, for development only.
Do not use self-signed certificates in production! For online certificates, use Let's Encrypt instead (tutorial).
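A minimal sketch of generating such a development certificate with OpenSSL; the hostname myapp.local and the output file names are placeholders for your own virtual host, not part of the original:

# Generate a self-signed certificate and key valid for one year (development only).
# "myapp.local" and the file names below are examples; adjust them to your virtual host.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout myapp.local.key -out myapp.local.crt \
  -days 365 -subj "/CN=myapp.local"

Point your virtual host's SSL configuration at the generated .crt and .key files, and trust the certificate in your browser or OS keystore to silence warnings.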
A curated list of AWS resources to prepare for the AWS Certifications
A curated list of awesome AWS resources you need to prepare for all 5 AWS Certifications. This gist includes open source repos, blogs & blog posts, ebooks, PDFs, whitepapers, video courses, free lectures, slides, sample tests and many other resources.
#!/bin/bash
#================================================================
# Let's Encrypt renewal script for Apache on Ubuntu/Debian
# @author Erika Heidi <[email protected]>
# Usage: ./le-renew.sh [base-domain-name]
# More info: http://do.co/1mbVihI
#================================================================
domain=$1
le_path='/opt/letsencrypt'
le_conf='/etc/letsencrypt'
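The snippet stops at the path definitions. A hedged sketch of how the renewal step might continue, assuming the letsencrypt-auto client under /opt/letsencrypt and Apache's standard reload command; the exp_limit threshold and variable names below are illustrative, not from the original script:

exp_limit=30   # example threshold: renew when fewer than this many days remain
cert_file="$le_conf/live/$domain/fullchain.pem"

# Read the certificate's expiration date and convert it to days remaining.
exp_date=$(openssl x509 -in "$cert_file" -noout -enddate | cut -d= -f2)
days_left=$(( ($(date -d "$exp_date" +%s) - $(date +%s)) / 86400 ))

if [ "$days_left" -lt "$exp_limit" ]; then
    # Renew the certificate non-interactively, then reload Apache to pick it up.
    "$le_path/letsencrypt-auto" certonly --apache --renew-by-default -d "$domain"
    service apache2 reload
fi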
user web;
# One worker process per CPU core.
worker_processes 8;
# Also set
# /etc/security/limits.conf
#   web soft nofile 65535
#   web hard nofile 65535
# /etc/default/nginx
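A hedged sketch of the companion directives that usually accompany the raised file-descriptor limits mentioned above; the exact values are illustrative and should match your limits.conf settings:

# Let nginx raise its own worker file-descriptor limit to match limits.conf.
worker_rlimit_nofile 65535;

events {
    # Connections per worker; total capacity is roughly worker_processes * worker_connections.
    worker_connections 16384;
    # Accept as many new connections as possible at once.
    multi_accept on;
}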
For this configuration you can use any web server you like; I decided to use nginx because I work with it most.
Generally, a properly configured nginx can handle up to 400K-500K requests per second (clustered); the most I have seen myself is 50K-80K requests per second (non-clustered) at around 30% CPU load. Granted, that was on 2 x Intel Xeon with HyperThreading enabled, but it can work without problems on slower machines.
Keep in mind that this config is used in a testing environment, not in production, so you will need to find the best way to implement most of these features for your own servers.
#!/bin/bash
# This script is used by Nagios to post alerts into a Slack channel
# using the Incoming WebHooks integration. Create the channel, botname
# and integration first and then add this notification script in your
# Nagios configuration.
#
# All variables that start with NAGIOS_ are provided by Nagios as
# environment variables when a notification is generated.
# A list of the env variables is available here:
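The header stops before the script body. A hedged sketch of how the webhook post itself might look, assuming Slack's Incoming WebHooks payload format; the webhook URL, channel, username and the specific NAGIOS_ variables used here are examples, not taken from the original:

# Example placeholders; replace with your own integration values.
slack_url="https://hooks.slack.com/services/XXX/YYY/ZZZ"
channel="#alerts"
username="nagios"

# Compose a one-line alert message from the Nagios-provided environment variables.
text="${NAGIOS_NOTIFICATIONTYPE}: ${NAGIOS_HOSTNAME}/${NAGIOS_SERVICEDESC} is ${NAGIOS_SERVICESTATE} - ${NAGIOS_SERVICEOUTPUT}"

# Post the alert to Slack as a JSON payload.
curl -s -X POST --data-urlencode \
  "payload={\"channel\": \"${channel}\", \"username\": \"${username}\", \"text\": \"${text}\"}" \
  "$slack_url"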
#!/bin/bash
# Herein we back up our indexes! This script should run at like 6pm or something, after logstash
# rotates to a new ES index and there's no new data coming in to the old one. We grab the metadata,
# compress the data files, create a restore script, and push it all up to S3.
TODAY=`date +"%Y.%m.%d"`
INDEXNAME="logstash-$TODAY" # this had better match the index name in ES
INDEXDIR="/usr/local/elasticsearch/data/logstash/nodes/0/indices/"
BACKUPCMD="/usr/local/backupTools/s3cmd --config=/usr/local/backupTools/s3cfg put"
BACKUPDIR="/mnt/es-backups/"
YEARMONTH=`date +"%Y-%m"`
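The snippet ends at the variable block. A hedged sketch of how the compress-and-upload steps described in the header might continue, using the variables already defined (the tarball layout and S3 bucket name are illustrative, and the restore-script step is omitted):

# Stage the backup and snapshot the index's data files into a tarball.
mkdir -p "$BACKUPDIR"
tar czf "$BACKUPDIR/$INDEXNAME.tar.gz" -C "$INDEXDIR" "$INDEXNAME"

# Push the compressed index up to S3, grouped by year-month (bucket name is an example).
$BACKUPCMD "$BACKUPDIR/$INDEXNAME.tar.gz" "s3://my-es-backups/$YEARMONTH/$INDEXNAME.tar.gz"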
echo 'export PATH=$HOME/local/bin:$PATH' >> ~/.bashrc
. ~/.bashrc
mkdir ~/local
mkdir ~/node-latest-install
cd ~/node-latest-install
curl http://nodejs.org/dist/node-latest.tar.gz | tar xz --strip-components=1
./configure --prefix=~/local
make install # ok, fine, this step probably takes more than 30 seconds...
curl https://www.npmjs.org/install.sh | sh
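After the install finishes, a quick sanity check (assuming the PATH line above has been sourced) is to confirm both binaries resolve from the ~/local prefix:

# Both should resolve to ~/local/bin and print version numbers from the fresh build.
which node npm
node --version
npm --version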