I hereby claim:
- I am m1ke on github.
- I am m1ke (https://keybase.io/m1ke) on keybase.
- I have a public key ASDL9H7hQl69kVoA3M4_q95BAd9MkPqBY6p5Hz7N-redLwo
To claim this, I am signing this object:
// Play the GNOME "glass" alert sound from Node.js (Linux, requires canberra-gtk-play)
function beep(){
	var exec = require('child_process').exec;
	exec('canberra-gtk-play --file=/usr/share/sounds/gnome/default/alerts/glass.ogg');
}
#!/bin/bash
# We only run the deployment if we're already on master; we could do this
# directly in gulp, but a shell script allows other commands to be added
if [ "$(git rev-parse --abbrev-ref HEAD)" = "master" ]; then
	gulp deploy
else
	echo "Must be on master"
	exit 1
fi
# Taken from https://github.com/NotBobTheBuilder/dotfiles/blob/9dfb79548eab93c5b048b7ac535c53cee8db47f0/.bash_aliases
# Relies on .bash_colors also from the same file
IRed="\[\033[0;91m\]"    # Red
Green="\[\033[0;32m\]"   # Green
Color_Off="\[\033[0m\]"  # Text Reset
export PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w$(git branch &>/dev/null;\
if [ $? -eq 0 ]; then \
echo "$(echo `git status` | grep "Changes" > /dev/null 2>&1; \
Using EC2 instances within an autoscaling group is the best way to guarantee redundancy, scalability and fault tolerance across your infrastructure.
This comes at the cost of familiar conveniences, such as being able to SSH into a known address to manage configuration or read logs, and it means any configuration change must be applied to multiple machines.
DevOps provisioning tools such as Puppet can be used to manage configurations, but they
In order to get the most from the workshop you'll benefit from being able to follow along on your own machine. The setup for this is not complicated but it will require you to have carried out a few tasks beforehand to save time during the workshop - we only have 2 hours after all.
All the steps below should take you no more than 30 minutes, and more likely around 15. If you need help, tweet me @m1ke.
You can pre-register for the session here; you'll need a code that was emailed to you with the subject "PHP UK Conference - Info for Attendees".
The key aspect is this: you must have CLI access to a working AWS account with billing enabled. You will incur some costs during the workshop but if you stick to the advice given these costs are likely to be in the region of cents to a dollar at most. Be aware that once you have an AWS account with billing enabled you should do your best to keep the credentials used to access that account secure. If you are conce
interface AsNumber {
    public function asNumber(): int;
}

interface CanBeAdded {
    public function add(AsNumber $b): int;
}

interface CanBeMultiplied {
    public function multiply(AsNumber $b): int;
}
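A minimal sketch of how these interfaces might be used together. The `Number` class is hypothetical, not part of the original gist; the interface declarations are repeated so the example stands alone:

```php
<?php

interface AsNumber {
    public function asNumber(): int;
}

interface CanBeAdded {
    public function add(AsNumber $b): int;
}

interface CanBeMultiplied {
    public function multiply(AsNumber $b): int;
}

// Hypothetical value object implementing all three interfaces
// (uses PHP 8 constructor property promotion)
class Number implements AsNumber, CanBeAdded, CanBeMultiplied
{
    public function __construct(private int $value)
    {
    }

    public function asNumber(): int
    {
        return $this->value;
    }

    public function add(AsNumber $b): int
    {
        return $this->value + $b->asNumber();
    }

    public function multiply(AsNumber $b): int
    {
        return $this->value * $b->asNumber();
    }
}
```

With this, `(new Number(3))->add(new Number(4))` yields 7, and either operand can be any object satisfying `AsNumber`.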
Using AWS RDS we can export default MySQL logs to CloudWatch. This can include the slow query log.
As standard, CloudWatch Logs will just show the raw log message, which allows reading the log but not useful analysis, such as finding the slowest queries over a time range, or the slow query that repeats most often.
Using CloudWatch Logs Insights we can write custom log search queries to extract more information. It starts with a parse step, where the "glob" parser can be used to take a single multi-line slow query log message and pull out individual data points:
PARSE @message '# Time: * *\n# User@Host: *[*] @ * Id: *\n# Query_time: * Lock_time: * Rows_sent: * Rows_examined: *\n*' as date,time,user,host,ip,id,duration,lock,rows,examined,query
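Once parsed, those fields can feed standard Insights commands. As a sketch (field names come from the parse step above; `sort`, `limit` and `stats` are standard Insights commands, though exact results depend on how your slow query log is formatted):

```
# Slowest queries in the selected time range
PARSE @message '# Time: * *\n# User@Host: *[*] @ * Id: *\n# Query_time: * Lock_time: * Rows_sent: * Rows_examined: *\n*' as date,time,user,host,ip,id,duration,lock,rows,examined,query
| sort duration desc
| limit 20

# Most frequently repeated slow queries
PARSE @message '# Time: * *\n# User@Host: *[*] @ * Id: *\n# Query_time: * Lock_time: * Rows_sent: * Rows_examined: *\n*' as date,time,user,host,ip,id,duration,lock,rows,examined,query
| stats count(*) as occurrences, avg(duration) as avg_duration by query
| sort occurrences desc
```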
<?php
/*
 * Declare the following consts:
 *
 * REPO_DEFAULT - Set this to avoid having to type each time
 * TEAM - Find this in URLs
 * USER, PASS - Bitbucket credentials
 *
 * Install Guzzle and require './vendor/autoload.php'
 */