- By Edmond Lau
- Highly Recommended 👍
- http://www.theeffectiveengineer.com/
- Effective engineers are the people who get things done; they produce results.
#!/usr/bin/env python3.5
"""
Fix the Bluetooth stereo headphone/headset problem on Ubuntu 16.04 (and Debian Jessie) with BlueZ 5.
Workaround for bug: https://bugs.launchpad.net/ubuntu/+source/indicator-sound/+bug/1577197
Run it with Python 3.5 or higher after pairing/connecting the Bluetooth stereo headphone.
This only fixes the BlueZ 5 problem mentioned above.
"""
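The script body is not included in this fragment. The usual workaround for the bug linked above is to switch the Bluetooth card's PulseAudio profile back to A2DP with `pactl`; a minimal sketch of that idea follows. The card name `bluez_card.XX` and the helper names are illustrative, not from the original script.

```python
import subprocess

def build_profile_command(card_name, profile="a2dp_sink"):
    # Construct the pactl invocation that switches a Bluetooth card
    # to the high-quality A2DP sink profile.
    return ["pactl", "set-card-profile", card_name, profile]

def fix_bluetooth_audio(card_name):
    # Requires PulseAudio's pactl on PATH; card_name is something
    # like "bluez_card.XX_XX_XX_XX_XX_XX" (see `pactl list cards`).
    subprocess.check_call(build_profile_command(card_name))
```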
from scipy.stats import norm, shapiro, kstest, anderson
import bokeh.plotting as bplt
from bokeh import layouts
from bokeh.charts import Histogram, Scatter
from bokeh.models import Span
import pandas as pd
import numpy as np

def vertical_histogram(y):
    # Bin the data and draw the histogram as quads on a Bokeh figure.
    hist, edges = np.histogram(y, bins=30)
    p = bplt.figure()
    p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:])
    return p
The conference was a great experience, but there didn't seem to be any groundbreaking work: mostly new tricks or combinations of existing methods. Probabilistic modelling using neural networks and GANs seems very popular, and applying neural networks to new datasets/areas is still enough to get a NIPS poster. Some of the orals were good, but personally I think most of them were only poster level, while many posters were oral-talk level. People more intelligent than me made those choices, which probably means the selection process is quite random.
Differentiable Neural Computers (memory networks) will probably play a big role in more complex tasks, e.g. dialog systems and reasoning, where the model needs to keep an internal representation of an entity and its properties (also see the EntityNet from LeCun). As Graves explained, it is conceptually a nice idea to separate the computation from the memory, and it gives the model the capability to le
# https://people.eecs.berkeley.edu/~kjamieson/hyperband.html
# you need to write the following hooks for your custom problem
from math import log, ceil
from problem import get_random_hyperparameter_configuration, run_then_return_val_loss

max_iter = 81  # maximum iterations/epochs per configuration
eta = 3  # defines downsampling rate (default=3)
logeta = lambda x: log(x) / log(eta)
s_max = int(logeta(max_iter))  # number of unique executions of Successive Halving (minus one)
B = (s_max + 1) * max_iter  # total number of iterations (without reuse) per execution of Successive Halving (n,r)
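The fragment stops before the main Hyperband loop. Below is a self-contained sketch of the outer loop over bracket sizes and the inner Successive Halving loop, following the structure described on the linked page. The two hook implementations here (a single uniform hyperparameter and a toy quadratic loss) are dummy stand-ins so the sketch runs on its own; a real problem would train a model for `num_iters` iterations and return its validation loss.

```python
import random
from math import log, ceil

random.seed(0)  # deterministic for the example

# Dummy stand-ins for the two problem hooks.
def get_random_hyperparameter_configuration():
    return random.uniform(0, 1)          # one hyperparameter in [0, 1]

def run_then_return_val_loss(num_iters, hyperparameters):
    return (hyperparameters - 0.5) ** 2  # toy loss, minimized at 0.5

max_iter = 81   # maximum iterations/epochs per configuration
eta = 3         # downsampling rate
logeta = lambda x: log(x) / log(eta)
s_max = int(logeta(max_iter))
B = (s_max + 1) * max_iter

best_loss = float("inf")
for s in reversed(range(s_max + 1)):
    n = int(ceil(B / max_iter / (s + 1) * eta ** s))  # initial number of configurations
    r = max_iter * eta ** (-s)                        # initial resource per configuration
    T = [get_random_hyperparameter_configuration() for _ in range(n)]
    for i in range(s + 1):
        # Run every surviving configuration for r_i iterations,
        # then keep only the best n_i / eta of them.
        n_i = n * eta ** (-i)
        r_i = r * eta ** i
        val_losses = [run_then_return_val_loss(int(r_i), t) for t in T]
        best_loss = min(best_loss, min(val_losses))
        T = [t for _, t in sorted(zip(val_losses, T))[: int(n_i / eta)]]
print(best_loss)  # smallest validation loss seen across all brackets
```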
#!/usr/bin/env python
# gpu_stat.py [DELAY [COUNT]]
# dump gpu stats as a line of json
# {"time": 1474168378.146957, "pci_tx": 146000, "pci_rx": 1508000,
#  "gpu_util": 42, "mem_util": 24, "mem_used": 11710,
#  "temp": 76, "fan_speed": 44, "power": 65 }
#!/usr/bin/env python
# gpu_stat.py [DELAY [COUNT]]
# dump some gpu stats as a line of json
# {"util":{"PCIe":"0", "memory":"11", "video":"0", "graphics":"13"}, "used_mem":"161"}
import json, socket, subprocess, sys, time

try:
    delay = int(sys.argv[1])
except (IndexError, ValueError):
    delay = 1
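The fragment ends after parsing the delay argument. A sketch of the collection step follows: query `nvidia-smi` in CSV mode and turn one line into a dict like the one in the header comment. Note that `--query-gpu` exposes graphics and memory utilization (`utilization.gpu`, `utilization.memory`) and used memory, but not the PCIe/video utilization shown in the header, which the original presumably obtained some other way; the helper names here are illustrative.

```python
import json
import subprocess

QUERY = "utilization.gpu,utilization.memory,memory.used"

def parse_gpu_line(line):
    # Turn one CSV line from nvidia-smi into the dict we dump as JSON.
    gpu_util, mem_util, used_mem = [v.strip() for v in line.split(",")]
    return {"util": {"graphics": gpu_util, "memory": mem_util},
            "used_mem": used_mem}

def query_gpu():
    # Requires an NVIDIA GPU with nvidia-smi on PATH.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=" + QUERY,
         "--format=csv,noheader,nounits"]).decode()
    return json.dumps(parse_gpu_line(out.splitlines()[0]))
```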
# From https://gist.github.com/narphorium/d06b7ed234287e319f18
import tensorflow as tf

def kMeansCluster(vector_values, num_clusters, max_num_steps, stop_coeficient=0.0):
    vectors = tf.constant(vector_values)
    centroids = tf.Variable(tf.slice(tf.random_shuffle(vectors),
                                     [0, 0], [num_clusters, -1]))
    old_centroids = tf.Variable(tf.zeros([num_clusters, 2]))
    centroid_distance = tf.Variable(tf.zeros([num_clusters, 2]))
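The TensorFlow snippet is cut off before the assignment and update steps. For reference, here is a NumPy sketch of those same two steps (not the gist's TF ops): assign every vector to its nearest centroid, then move each centroid to the mean of its assigned vectors. The function name and shapes are illustrative.

```python
import numpy as np

def kmeans_step(vectors, centroids):
    # Assignment step: index of the nearest centroid for every vector.
    distances = np.linalg.norm(vectors[:, None, :] - centroids[None, :, :], axis=2)
    assignments = distances.argmin(axis=1)
    # Update step: each centroid moves to the mean of its assigned
    # vectors; an empty cluster keeps its old centroid.
    new_centroids = np.array([
        vectors[assignments == k].mean(axis=0) if np.any(assignments == k)
        else centroids[k]
        for k in range(len(centroids))])
    return assignments, new_centroids
```

Iterating `kmeans_step` until the centroid movement falls below `stop_coeficient` (or `max_num_steps` is reached) completes the algorithm the TF code sets up.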