(C-x means ctrl+x, M-x means alt+x)
The default prefix is C-b. If you (or your muscle memory) prefer C-a, you need to add this to ~/.tmux.conf:
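A minimal remap (the standard recipe for this) looks like:

    # remap the prefix from C-b to C-a
    unbind C-b
    set -g prefix C-a
    bind-key C-a send-prefix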
Wipe a repository's history and republish only its current contents:

    # Remove the existing history
    rm -rf .git

    # Recreate the repo from the current content only
    git init
    git add .
    git commit -m "Initial commit"

    # Push to the GitHub remote, overwriting its history.
    # WARNING: this discards the old history for everyone; collaborators must re-clone.
    git remote add origin git@github.com:<YOUR ACCOUNT>/<YOUR REPOS>.git
    git push -u --force origin master   # or 'main', depending on your default branch
A small launcher script for the TopCoder Arena applet (the plugin path is macOS-specific):

    #!/bin/bash
    # Work from the directory the script lives in
    cd "$(dirname "$0")"

    # Fetch the latest applet descriptor, keeping the server-supplied filename
    curl \
      --remote-name \
      --remote-header-name \
      http://www.topcoder.com/contest/arena/ContestAppletProd.jnlp

    # Launch it with Java Web Start
    JAVAWS="/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin/javaws"
    "$JAVAWS" ContestAppletProd.jnlp
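Save it under any name you like (arena.sh here is just a placeholder), make it executable, and rerun it whenever you want a fresh copy of the applet:

    chmod +x arena.sh
    ./arena.sh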
| """Information Retrieval metrics | |
| Useful Resources: | |
| http://www.cs.utexas.edu/~mooney/ir-course/slides/Evaluation.ppt | |
| http://www.nii.ac.jp/TechReports/05-014E.pdf | |
| http://www.stanford.edu/class/cs276/handouts/EvaluationNew-handout-6-per.pdf | |
| http://hal.archives-ouvertes.fr/docs/00/72/67/60/PDF/07-busa-fekete.pdf | |
| Learning to Rank for Information Retrieval (Tie-Yan Liu) | |
| """ | |
| import numpy as np |
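The header above only sets the stage. As a minimal sketch of the kind of metric such a module defines (the function names and the binary-relevance convention here are my own assumptions, not necessarily the original's), average precision can be computed like this:

    def precision_at_k(r, k):
        """Precision@k for a binary relevance vector r (1 = relevant)."""
        return np.mean(np.asarray(r)[:k])

    def average_precision(r):
        """Mean of precision@k over the ranks k that hold a relevant item."""
        r = np.asarray(r)
        precisions = [precision_at_k(r, k + 1) for k in range(r.size) if r[k]]
        return np.mean(precisions) if precisions else 0.0

    # e.g. a ranking whose 1st and 3rd results are relevant:
    print(average_precision([1, 0, 1, 0]))  # (1/1 + 2/3) / 2 ~ 0.833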
    Latency Comparison Numbers (~2012)
    ----------------------------------
    L1 cache reference                       0.5 ns
    Branch mispredict                          5 ns
    L2 cache reference                         7 ns            14x L1 cache
    Mutex lock/unlock                         25 ns
    Main memory reference                    100 ns            20x L2 cache, 200x L1 cache
    Compress 1K bytes with Zippy           3,000 ns     3 us
    Send 1K bytes over 1 Gbps network     10,000 ns    10 us
    Read 4K randomly from SSD*           150,000 ns   150 us   ~1GB/sec SSD
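To build intuition for these magnitudes, here is a small illustrative script (my own addition; the numbers are the ones from the table) that rescales everything so an L1 cache hit takes one second:

    # rescale each latency so an L1 cache reference (0.5 ns) becomes 1 second
    latencies_ns = {
        "L1 cache reference": 0.5,
        "Branch mispredict": 5,
        "L2 cache reference": 7,
        "Mutex lock/unlock": 25,
        "Main memory reference": 100,
        "Compress 1K bytes with Zippy": 3_000,
        "Send 1K bytes over 1 Gbps network": 10_000,
        "Read 4K randomly from SSD": 150_000,
    }
    for name, ns in latencies_ns.items():
        print(f"{name:<35} {ns / 0.5:>10,.0f} s")

On that scale a main-memory reference takes over three minutes, and the random SSD read about three and a half days.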
These commands assume tmux as configured in my dotfiles.

Start a new session:

    tmux

Start a new session with a session name:

    tmux new -s myname
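Listing and reattaching (standard tmux commands, added here for completeness) follow the same pattern:

    tmux ls                # list running sessions
    tmux attach -t myname  # reattach to a named session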
    # Next time you need to install something with python setup.py -- which
    # should be never, but things happen -- record the installed files:
    python setup.py install --record files.txt

    # This writes the path of every installed file into files.txt.
    # To uninstall, delete exactly those files; be careful with the 'sudo':
    cat files.txt | xargs sudo rm -rf
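One caveat: xargs splits on whitespace by default, so the pipeline above breaks on paths containing spaces. With GNU xargs you can split on newlines instead (a sketch, assuming files.txt holds one path per line):

    # safer variant: treat each line as a single path (GNU xargs)
    sudo xargs -d '\n' rm -rf -- < files.txt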
Simple named-entity extraction with NLTK:

    import nltk

    # Needs the NLTK data packages 'punkt', 'averaged_perceptron_tagger',
    # 'maxent_ne_chunker', and 'words' (fetch them with nltk.download).
    with open('sample.txt', 'r') as f:
        sample = f.read()

    sentences = nltk.sent_tokenize(sample)
    tokenized_sentences = [nltk.word_tokenize(sentence) for sentence in sentences]
    tagged_sentences = [nltk.pos_tag(sentence) for sentence in tokenized_sentences]
    # batch_ne_chunk was renamed to ne_chunk_sents in NLTK 3
    chunked_sentences = nltk.ne_chunk_sents(tagged_sentences, binary=True)
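The chunked sentences come back as nltk.Tree objects. To pull out the actual entity strings (my own follow-up, not part of the original snippet), walk the 'NE' subtrees that binary=True produces:

    # collect the surface form of every named-entity subtree
    entity_names = set()
    for tree in chunked_sentences:
        for subtree in tree.subtrees(filter=lambda t: t.label() == 'NE'):
            entity_names.add(' '.join(word for word, tag in subtree.leaves()))
    print(sorted(entity_names))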