
jonathanvx / convertall.sh
Created May 21, 2012 14:24
Burst-convert all MyISAM tables to InnoDB
#!/bin/bash
# Burst-convert the largest MyISAM tables in parallel, one mysql client per
# table, then convert the remainder sequentially over a single connection.
# sql_log_bin=0 keeps the ALTERs out of the binary log.
MAX1=$(mysql -e "select @@global.max_connections;" -N -B)
MAX=$((MAX1 - 5))  # leave a few connections free for other clients
echo "select table_name from information_schema.tables where engine='myisam' and table_schema='dbname' order by DATA_LENGTH desc limit $MAX;" | mysql --database=dbname -B -N | awk '{print "mysql dbname -e\"set sql_log_bin=0; ALTER TABLE", $1, "engine=innodb;\" &"}' > /tmp/massconvert.sh
bash /tmp/massconvert.sh
rm -f /tmp/massconvert.sh
# Any tables beyond the first $MAX are converted one at a time in the background.
echo "select table_name from information_schema.tables where engine='myisam' and table_schema='dbname' order by DATA_LENGTH desc limit $MAX,99999;" | mysql --database=dbname -B -N | awk '{print "set sql_log_bin=0; ALTER TABLE", $1, "engine=innodb;"}' | mysql --database=dbname -f &
jonathanvx / gist:7656383
Last active December 29, 2015 10:19
My Windows batch script (.bat file) that automatically converts all the *.mov files in a directory (usually movies made with DSLRs) to very high quality (and very slow to convert) mp4 movies using Handbrake. Bitrate is set to 3500kb/s (take into account that the compression is very good - try it and see). Audio is 128/44.1k. Output directo…
for /r %%i in (*.mov) do "C:\Program Files\Handbrake\HandBrakeCLI.exe" -i "%%i" -t 1 -o "D:\%%~ni.mp4" -f mp4 --strict-anamorphic -e x264 -b 3500 -2 -T --vfr -a 1 -E faac -B 128 -6 dpl2 -R 44.1 -D 0 --gain=0 --audio-copy-mask none --audio-fallback ffac3 -x ref=6:weightp=1:rc-lookahead=10:b-adapt=2:bframes=6:me=umh:subq=9:merange=32
jonathanvx / gist:64cb8320ab83d3264893
Last active August 29, 2015 14:16
Update By Date - Stored Procedure
delimiter $$
drop procedure if exists update_by_date$$
CREATE PROCEDURE update_by_date(IN startdate DATE, IN enddate DATE)
PROC:BEGIN
DECLARE idate DATE;
DECLARE cr BIGINT;
DECLARE done INT DEFAULT FALSE;
-- requires the date_dimension table - can be found in my github repo
DECLARE curs1 CURSOR FOR SELECT date FROM date_dimension WHERE date BETWEEN startdate AND enddate;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
OPEN curs1;
date_loop: LOOP
    FETCH curs1 INTO idate;
    IF done THEN LEAVE date_loop; END IF;
    -- the per-date work goes here; the gist preview truncates before the body
END LOOP;
CLOSE curs1;
END$$
delimiter ;
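A hypothetical invocation, with illustrative dates:

call update_by_date('2015-01-01', '2015-01-31');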
jonathanvx / hosts.
Created December 2, 2016 16:09
NonSense News Websites
127.0.0.1 100PercentFedUp.com
127.0.0.1 21stcenturywire.com
127.0.0.1 369news.net
127.0.0.1 70news.wordpress.com
127.0.0.1 asheepnomore.net
127.0.0.1 abcnews.com.co
127.0.0.1 www.aim.org
127.0.0.1 latitudes.org
127.0.0.1 www.activistpost.com
127.0.0.1 addictinginfo.org
jonathanvx / json.sql
Last active November 26, 2020 17:12
Using JSON with MySQL Stored Procedures
delimiter $$
drop procedure if exists county_summary$$
create procedure county_summary(query JSON)
BEGIN
DECLARE i_county varchar(255) default null;
DECLARE i_year int default null;
DECLARE i_month int default null;
set i_county = JSON_UNQUOTE(JSON_EXTRACT(query,'$.county'));
set i_year = JSON_EXTRACT(query,'$.year');
set i_month = JSON_EXTRACT(query,'$.month'); -- completes the i_month declared above; not shown in the truncated preview
/*
The 'year' and 'month' values are stored in separate integer columns.
This creates an issue when we want reports between two different dates.
How can we fix this? One solution: a generated column with an index on it.
*/
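A minimal sketch of that fix, assuming the summary table shown below (the new column and index names are illustrative):

-- STORED generated column combining year and month into a real DATE, plus an index.
ALTER TABLE summary
    ADD COLUMN report_date DATE
        GENERATED ALWAYS AS (STR_TO_DATE(CONCAT(year, '-', month, '-01'), '%Y-%m-%d')) STORED,
    ADD INDEX idx_report_date (report_date);
-- Date-range reports can then use the index directly:
-- SELECT * FROM summary WHERE report_date BETWEEN '2015-01-01' AND '2015-12-31';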
MBP@landregistry> select * from summary order by total_price desc limit 3;
+----------+------+-------+--------+----------------+--------------+-------------+
| postcode | year | month | county | district       | total_price  | houses_sold |
+----------+------+-------+--------+----------------+--------------+-------------+
| W1T      | 2015 |     7 | CAMDEN | GREATER LONDON | 367075000.00 |           7 |
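A hedged example of calling the procedure, with values taken from the sample output above; JSON_OBJECT builds the argument server-side, and a JSON literal string works as well:

call county_summary(JSON_OBJECT('county', 'CAMDEN', 'year', 2015, 'month', 7));
call county_summary('{"county": "CAMDEN", "year": 2015}');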
jonathanvx / moodle.cnf
Created January 27, 2017 15:30
Optimised Moodle Config file
[mysqld]
## Performance Specific ##
##########################
innodb_buffer_pool_size = 12G ### How much innodb data to store in memory. Higher = faster performance
innodb_file_per_table = 1 ### Each innodb table is its own file on disk.
innodb_flush_log_at_trx_commit = 2 ### Flushes to disk in batches instead of per change
innodb_io_capacity = 800 ### Approximate IOPS available to InnoDB background tasks such as flushing.
read_buffer_size = 2M ### Per-thread buffer; helps with sequential scans, bulk inserts and nested queries
jonathanvx / createmycnf.sh
Created February 7, 2017 13:02
Self-correcting my.cnf for MySQL
#!/bin/bash
(cat <<EOF
[mysqld]
## Performance Specific ##
##########################
innodb_buffer_pool_size = 12G ### How much innodb data to store in memory. Higher = faster performance
innodb_file_per_table = 1 ### Each innodb table is its own file on disk.
innodb_flush_log_at_trx_commit = 2 ### Flushes to disk in batches instead of per change
EOF
)
# The remaining settings and the output destination are cut off in the gist preview.
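The "self-correcting" part is cut off by the preview; a minimal sketch of one way such a script could size the buffer pool from physical RAM (the 70% ratio and variable names are assumptions, not the gist's actual logic):

# Hypothetical helper: size the InnoDB buffer pool at ~70% of physical RAM.
TOTAL_MB=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
POOL_MB=$((TOTAL_MB * 70 / 100))
echo "innodb_buffer_pool_size = ${POOL_MB}M"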
jonathanvx / large_redshift_tables.sql
Created October 11, 2017 07:34 — forked from subelsky/large_redshift_tables.sql
Quick SQL command to find large tables in Redshift
-- based on http://stackoverflow.com/questions/21767780/how-to-find-size-of-database-schema-table-in-redshift
-- Each row in stv_blocklist is one 1 MB block, so COUNT(*) / 1024 gives gigabytes.
SELECT name AS table_name, ROUND((COUNT(*) / 1024.0), 2) AS "Size in Gigabytes"
FROM stv_blocklist
INNER JOIN
(SELECT DISTINCT id, name FROM stv_tbl_perm) names
ON names.id = stv_blocklist.tbl
GROUP BY name
ORDER BY "Size in Gigabytes" DESC
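A possible alternative on newer Redshift clusters, assuming access to the svv_table_info system view (its size column is measured in 1 MB blocks, and it distinguishes tables by schema):

SELECT "schema", "table", ROUND(size / 1024.0, 2) AS "Size in Gigabytes"
FROM svv_table_info
ORDER BY size DESC;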