Install the following requirements:
brew install zeromq
npm install zmq
npm install socket.io
gem install ffi-rzmq
Within the app directory, run the following commands in different panes.
ruby worker.rb
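For reference, a typical two-pane layout for this stack: the Ruby worker publishes over ZeroMQ via ffi-rzmq, and a Node process bridges it to browsers with the zmq and socket.io modules. Only worker.rb is named above, so the Node entry point below is a hypothetical file name, not part of the original instructions.
# Pane 1: the Ruby worker (publishes messages over ZeroMQ)
ruby worker.rb
# Pane 2: a Node process that subscribes via zmq and relays to browsers over socket.io
node server.js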
See my DASH-IF presentation from October 2014:
https://s3.amazonaws.com/misc.meltymedia/dash-if-reveal/index.html#/
1. Encode multiple bitrates with keyframe alignment:
ffmpeg -i ~/Movies/5D2_Portrait.MOV -s 1280x720 -c:v libx264 -b:v 1450k -bf 2 \
  -g 90 -sc_threshold 0 -c:a aac -strict experimental -b:a 96k -ar 32000 out.mp4
My input was 30 fps, so a 90-frame GOP comes out to a base segment size of exactly 3000 ms. If it were
29.97 fps, a 90-frame GOP would yield a base segment size of 3003 milliseconds, and you would make the
segment size some multiple of that, e.g. 6006, 9009, or 12012.
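To get the other renditions with aligned keyframes, repeat the same command with a different resolution and video bitrate while keeping -g 90 and -sc_threshold 0 unchanged; the 640x360 / 700k values below are only an illustrative choice, not from the original notes.
# Lower-bitrate rendition; GOP settings stay identical so keyframes line up across bitrates
ffmpeg -i ~/Movies/5D2_Portrait.MOV -s 640x360 -c:v libx264 -b:v 700k -bf 2 \
  -g 90 -sc_threshold 0 -c:a aac -strict experimental -b:a 96k -ar 32000 out_360p.mp4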
{
  "env": {
    "node": true,
    "es6": true
  },
  "parser": "babel-eslint",
  "rules": {
    "no-throw-literal": 1,
    "strict": [2, "never"],
    "semi": [2, "never"],
{
  "env": {
    "browser": true,
    "node": true,
    "es6": true
  },
  "plugins": ["react"],
  "ecmaFeatures": {
#!/bin/bash
# Delete Magento session files older than 14 days
DIR=/var/www/magento/var/session/
# -mindepth 1 keeps find from matching (and removing) the session directory itself
/usr/bin/find "$DIR" -mindepth 1 -depth -mtime +14 -exec rm -rf '{}' \;
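A script like this is typically wired into cron; the entry below is a sketch, with the path and schedule chosen as examples rather than taken from the original.
# /etc/cron.d/magento-session-cleanup
# Run the cleanup nightly at 02:30 as root
30 2 * * * root /usr/local/bin/clean_magento_sessions.sh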
#!/bin/sh
# Usage:
#   ./clone.sh TESTENV_TO_CLONE_DIR NEW_TEST_ENV_DIR
#   ./clone.sh TESTENV08-HOST TESTENV03-HOST
# The above will clone 08 to 03 and register the vm
#
# * Do not end the argument directories with a slash!
# Date: Mar 10, 2015
DIR_DS=$(pwd)
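The body of the script is cut off above. Purely as an illustration of the kind of steps such a clone tends to involve on an ESXi host (the vmkfstools/vim-cmd usage, file layout, and variable names are assumptions, not the author's actual code):
SRC=$1   # e.g. TESTENV08-HOST
DST=$2   # e.g. TESTENV03-HOST
mkdir -p "$DIR_DS/$DST"
# Clone the disk with vmkfstools so the copy is consistent
vmkfstools -i "$DIR_DS/$SRC/$SRC.vmdk" "$DIR_DS/$DST/$DST.vmdk"
# Copy the VM definition; a real script would also rewrite the display name
# and disk path inside the new .vmx before registering it
cp "$DIR_DS/$SRC/$SRC.vmx" "$DIR_DS/$DST/$DST.vmx"
vim-cmd solo/registervm "$DIR_DS/$DST/$DST.vmx"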
#!/bin/bash
#
# MongoDB Backup Script
# VER. 0.1
# Note, this is a lobotomized port of AutoMySQLBackup
# (http://sourceforge.net/projects/automysqlbackup/) for use with
# MongoDB.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
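The rest of the script is cut off above. At its core, a backup script like this wraps mongodump; a minimal sketch, with the host, port, and backup directory as placeholders:
BACKUPDIR=/var/backups/mongodb
DBHOST=localhost
DBPORT=27017
# Dump all databases into a date-stamped directory
mongodump --host "$DBHOST" --port "$DBPORT" --out "$BACKUPDIR/$(date +%F)"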
# This number should be, at maximum, the number of CPU cores on your system,
# since nginx doesn't benefit from more than one worker per CPU.
worker_processes 8;
# Determines how many clients will be served by each worker process.
# (Max clients = worker_connections * worker_processes)
# "Max clients" is also limited by the number of socket connections available on the system (~64k).
# Run ss -s and you'll see the timewait count.
# The reason for TIME_WAIT is to handle packets that arrive after the socket is closed.
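Two quick checks for sizing these directives on a Linux host (output format varies by distro):
# Number of CPU cores, the suggested upper bound for worker_processes
nproc
# Socket summary; the TCP line reports how many connections are sitting in timewait
ss -s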
upstream myapp {
    server 127.0.0.1:8081;
}
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
server {
    listen 443 ssl spdy;
    server_name _;
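Note that limit_req_zone only defines the zone; it takes effect once a location applies it with limit_req, which is not shown in the excerpt above. Assuming a /login location does so, the 1 r/s limit can be observed from the shell (hostname is a placeholder; -k skips verification of a self-signed cert):
# Fire 5 quick requests; after the first, the rest should return 503, nginx's default limit_req status
for i in 1 2 3 4 5; do
  curl -k -s -o /dev/null -w "%{http_code}\n" https://localhost/login
done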
# Reference: http://www.blackdogfoundry.com/blog/moving-repository-from-bitbucket-to-github/
# See also: http://www.paulund.co.uk/change-url-of-git-repository
$ cd $HOME/Code/repo-directory
$ git remote rename origin bitbucket
$ git remote add origin https://github.com/mandiwise/awesome-new-repo.git
$ git push origin master
$ git remote rm bitbucket
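A quick way to confirm the switch, both before and after removing the old remote:
$ git remote -v
# Should now list only origin, pointing at the GitHub URL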