Examples here use the default settings; see the VidStab readme on GitHub for more advanced instructions.
Here's an example video I made.

```shell
brew install ffmpeg --with-libvidstab
```
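The default settings mentioned above correspond to ffmpeg's two-pass vid.stab workflow: `vidstabdetect` analyzes camera motion and writes `transforms.trf`, then `vidstabtransform` reads that file and renders the stabilized output. A minimal sketch driving it from Python (the filenames are placeholders, not from the original):

```python
import subprocess

def vidstab_commands(src, dst):
    """Build the two ffmpeg invocations for vid.stab's default two-pass flow.

    Pass 1 (vidstabdetect) analyzes camera motion and writes transforms.trf;
    pass 2 (vidstabtransform) reads that file and renders the stabilized video.
    """
    detect = ["ffmpeg", "-i", src, "-vf", "vidstabdetect", "-f", "null", "-"]
    transform = ["ffmpeg", "-i", src, "-vf", "vidstabtransform", dst]
    return detect, transform

def stabilize(src, dst):
    # run both passes in order; ffmpeg must be built with libvidstab
    for cmd in vidstab_commands(src, dst):
        subprocess.check_call(cmd)
```

Both filters accept tuning options (smoothing, shakiness, and so on), but left bare like this they use the defaults the examples rely on.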
| """ FLIRjpg2HDF5 | |
| reads raw thermal images from a FLIR-camera JPG image series | |
| and stores them in a HDF5 file - using exiftool """ | |
| import glob | |
| import os | |
| import subprocess | |
| import PIL.Image | |
| import numpy as np |
```yaml
sudo: required   # required to use the docker service in Travis
language: php    # can be any language, just php for example
services:
  - docker       # required, but Travis uses an older version of docker :(
install:
  - echo "install nothing!"  # put your normal pre-testing installs here
```
| " Share clipboards between vim and tmux without xsel or xclip (which require X and | |
| " X forwarding with SSH) and without changing tmux shortcuts. Requires only tail. | |
| " | |
| " Great for an ssh session to you linode or droplet. | |
| " | |
| " Uses z buffer in vim and writes output to ~/.clipboard and then to tmux's paste | |
| " buffer, and reads it back in cleanly for putting (puddin'). | |
| " | |
| " NOTE: tmux has an undocumented command limit! https://github.com/tmux/tmux/issues/254 | |
| " this means if you mean to copy larger bits of code (entire functions) tmux will |
When you call addSourceBuffer on a MediaSource, you need to pass in a string that is the MIME type for the codec. If this isn't correct, the video won't play. (You can also pass this to the MediaSource's isTypeSupported function, though there seems to be a gap between what it thinks it can play and what it will play.)
The string looks like this:

```
video/mp4; codecs="avc1.42E01E, mp4a.40.2"
```
The values required in that string can be obtained by running the `mp4file` tool from mp4v2 on a video file, like so:

```shell
mp4file --dump movie.mp4
```
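The `avc1` part of that string encodes the H.264 profile and level as three hex bytes after the dot: profile_idc, constraint flags, and level_idc (the level times ten). A small sketch of decoding it, so you can sanity-check what a given string claims (the profile table here is abbreviated):

```python
def parse_avc1(codec):
    """Decode an RFC 6381 avc1 codec string such as "avc1.42E01E".

    The six hex digits after the dot are three bytes:
    profile_idc, constraint flags, and level_idc (level * 10).
    """
    hexpart = codec.split(".", 1)[1]
    profile_idc = int(hexpart[0:2], 16)   # e.g. 0x42 = 66 = Baseline
    level_idc = int(hexpart[4:6], 16)     # e.g. 0x1E = 30 = level 3.0
    profiles = {66: "Baseline", 77: "Main", 88: "Extended", 100: "High"}
    return profiles.get(profile_idc, "Unknown"), level_idc / 10
```

For the example string above, `parse_avc1("avc1.42E01E")` yields Baseline profile at level 3.0.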
```bash
#!/bin/bash
# Create an I-frame index from HLS segmented streams
# $1: Filename to be created
# $2: Location of segmented ts files

# Check how many arguments
if [ $# -ne 2 ]; then
    echo "Usage: $0 [Input filename] [Location of segmented streams]"
    exit 1
fi
```
## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.
It has been obtained by directly converting the Caffe model provided by the authors.
Details about the network architecture can be found in the following arXiv paper:
Very Deep Convolutional Networks for Large-Scale Image Recognition
K. Simonyan, A. Zisserman
| """ | |
| Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
| BSD License | |
| """ | |
| import numpy as np | |
| # data I/O | |
| data = open('input.txt', 'r').read() # should be simple plain text file | |
| chars = list(set(data)) | |
| data_size, vocab_size = len(data), len(chars) |
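The excerpt stops right after building the vocabulary. The usual next step in this kind of character-level model is mapping characters to integer indices and encoding each character as a one-hot column vector for the RNN input; a sketch of that step (the tiny `data` string here is a stand-in for the `input.txt` contents):

```python
import numpy as np

data = "hello world"       # stand-in for the input.txt contents
chars = sorted(set(data))  # sorting makes the index assignment deterministic
vocab_size = len(chars)

# map each character to an integer index, and back again for sampling
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for ch, i in char_to_ix.items()}

def one_hot(ix, vocab_size):
    """Encode index ix as a (vocab_size, 1) column vector for the RNN input."""
    x = np.zeros((vocab_size, 1))
    x[ix] = 1.0
    return x
```

With these mappings, training pairs are just shifted index sequences: the input at step t is `one_hot(char_to_ix[data[t]], vocab_size)` and the target is the index of `data[t+1]`.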
```python
import os
from subprocess import check_call

def post_save(model, os_path, contents_manager):
    """post-save hook for converting notebooks to .py and .html files."""
    if model['type'] != 'notebook':
        return  # only do this for notebooks
    d, fname = os.path.split(os_path)
    check_call(['jupyter', 'nbconvert', '--to', 'script', fname], cwd=d)
    check_call(['jupyter', 'nbconvert', '--to', 'html', fname], cwd=d)
```
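On its own this function does nothing; the notebook server has to be told to call it after each save. A sketch of the registration, assuming the classic Jupyter Notebook server and its default config location (the function above would live in, or be imported into, the same config file):

```python
# in ~/.jupyter/jupyter_notebook_config.py
c.FileContentsManager.post_save_hook = post_save
```

The `c` object is the config handle Jupyter injects into that file, so this line belongs in the config itself rather than in a regular script.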