This document describes how to install NVIDIA drivers and CUDA in one go on a fresh Debian install.
Work in progress
- Start with a fresh Debian install.
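From there, a minimal sketch of the remaining steps, assuming the driver and toolkit come from Debian's own contrib/non-free packages (`nvidia-driver`, `nvidia-cuda-toolkit`) rather than NVIDIA's `.run` installer:

```sh
# Assumption: stock Debian packages, not NVIDIA's runfile installer.
# Enable the contrib and non-free sections (adjust to your sources.list layout).
sudo sed -i 's/^deb \(.*\) main$/deb \1 main contrib non-free/' /etc/apt/sources.list
sudo apt-get update

# Kernel headers so DKMS can build the nvidia module for the running kernel.
sudo apt-get install -y linux-headers-"$(uname -r)"

# Driver metapackage and the CUDA toolkit from the Debian archive.
sudo apt-get install -y nvidia-driver nvidia-cuda-toolkit

# Reboot so the nvidia kernel module replaces nouveau.
sudo reboot
```

Afterwards `nvidia-smi` and `nvcc --version` should both report the installed versions.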
| """ | |
| A keras attention layer that wraps RNN layers. | |
| Based on tensorflows [attention_decoder](https://github.com/tensorflow/tensorflow/blob/c8a45a8e236776bed1d14fd71f3b6755bd63cc58/tensorflow/python/ops/seq2seq.py#L506) | |
| and [Grammar as a Foreign Language](https://arxiv.org/abs/1412.7449). | |
| date: 20161101 | |
| author: wassname | |
| url: https://gist.github.com/wassname/5292f95000e409e239b9dc973295327a | |
| """ |
| class AttentionLSTM(LSTM): | |
| """LSTM with attention mechanism | |
| This is an LSTM incorporating an attention mechanism into its hidden states. | |
| Currently, the context vector calculated from the attended vector is fed | |
| into the model's internal states, closely following the model by Xu et al. | |
| (2016, Sec. 3.1.2), using a soft attention model following | |
| Bahdanau et al. (2014). | |
| The layer expects two inputs instead of the usual one: |
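For reference, the soft attention model the docstring cites (Bahdanau et al., 2014) computes, at each decoder step $t$, alignment scores between the previous recurrent state $s_{t-1}$ and each attended hidden state $h_j$, normalizes them with a softmax, and feeds the resulting context vector $c_t$ into the state update ($W_a$, $U_a$, $v_a$ are the learned attention weights):

$$
e_{tj} = v_a^{\top} \tanh(W_a s_{t-1} + U_a h_j), \qquad
\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_k \exp(e_{tk})}, \qquad
c_t = \sum_j \alpha_{tj} h_j
$$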
/projects/private/hadoop-config-differ [git:master]$ sh target/bin/run-differ /projects/opensource/hbase/hbase-r1130916/src/main/resources/hbase-default.xml r1130916 /projects/opensource/hbase/hbase-0.92-rw/src/main/resources/hbase-default.xml 0.92 /projects/opensource/hbase/hbase-0.94-rw/src/main/resources/hbase-default.xml 0.94 /projects/opensource/hbase/hbase-0.96-rw/hbase-common/src/main/resources/hbase-default.xml 0.96 /projects/opensource/hbase/hbase-0.98-rw/hbase-common/src/main/resources/hbase-default.xml 0.98 /projects/opensource/hbase/hbase-trunk-rw-git/hbase-common/src/main/resources/hbase-default.xml 0.99
=========================================================
Start
=========================================================
Checking differences across versions...
Added or renamed keys in 0.92:
added: Property{key='dfs.support.append', value='true', description='Does HDFS allow appends to files? This is an hdfs config. set in here so the hdfs client will do append support. You must ensure tha
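The output above lists configuration keys that appear in a later `hbase-default.xml` but not in an earlier one. A rough shell approximation of that key-level comparison (placeholder paths; GNU grep's `-P` is assumed), not the differ itself:

```sh
# Pull the <name> of every <property> out of two hbase-default.xml files,
# then list the keys that only the newer file contains (added or renamed).
old=/path/to/older/hbase-default.xml   # placeholder
new=/path/to/newer/hbase-default.xml   # placeholder

grep -oP '(?<=<name>)[^<]+' "$old" | sort > /tmp/old-keys
grep -oP '(?<=<name>)[^<]+' "$new" | sort > /tmp/new-keys

# Lines unique to the newer key list.
comm -13 /tmp/old-keys /tmp/new-keys
```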
#!/bin/bash
# Setup
#
# - Create a new Jenkins Job
# - Mark "None" for Source Control Management
# - Select the "Build Periodically" build trigger
#   - configure to run as frequently as you like
# - Add a new "Execute Shell" build step
# - Paste the contents of this file as the command
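The "Build Periodically" trigger takes Jenkins cron syntax; a schedule such as the following (the interval here is just an example) runs the job roughly every 30 minutes, with `H` spreading the start time to avoid load spikes:

```
H/30 * * * *
```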
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                           0.5 ns
Branch mispredict                            5   ns
L2 cache reference                           7   ns                      14x L1 cache
Mutex lock/unlock                           25   ns
Main memory reference                      100   ns                      20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000   ns        3 us
Send 1K bytes over 1 Gbps network       10,000   ns       10 us
Read 4K randomly from SSD*             150,000   ns      150 us          ~1GB/sec SSD
  1   b   java.lang.String::charAt (33 bytes)
  2   b   java.lang.Math::max (11 bytes)
  3   b   java.util.jar.Manifest$FastInputStream::readLine (167 bytes)
  4   b   sun.nio.cs.UTF_8$Decoder::decodeArrayLoop (553 bytes)
  5   b   java.util.Properties$LineReader::readLine (383 bytes)
  6   b   java.lang.String::hashCode (60 bytes)
  7   b   java.lang.String::indexOf (151 bytes)
  8   b   sun.nio.cs.ext.DoubleByteDecoder::decodeSingle (10 bytes)
  9   b   java.lang.String::lastIndexOf (156 bytes)
 10   b   java.lang.String::replace (142 bytes)
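Output in this format comes from HotSpot's `-XX:+PrintCompilation` flag; the columns are the compile id, a flag column (`b` marks a blocking compile in older JDKs), the compiled method, and its bytecode size. A minimal way to reproduce similar output, since even `-version` triggers a few startup compilations:

```sh
# Log JIT compilation events; substitute any Java program for -version.
java -XX:+PrintCompilation -version
```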