CMD+b
open/close left panel
Ctrl + -
[+ shift] go back/forward
Ctrl + CMD + left
[right] split editor
CMD + T
open search panel
CMD + down
open file from explorer
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
import numpy as np | |
import cPickle as pickle | |
import gym | |
# hyperparameters | |
H = 200 # number of hidden layer neurons | |
batch_size = 10 # every how many episodes to do a param update? | |
learning_rate = 1e-4 | |
gamma = 0.99 # discount factor for reward |
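gamma only comes into play once each episode's rewards are discounted back through time. The snippet above stops at the hyperparameters, so here is a minimal sketch of that step, modeled on the discount_rewards helper from Karpathy's post (the reset-on-nonzero-reward line is Pong-specific, since each point ends a rally):

def discount_rewards(r):
  """ take 1D float array of rewards and compute discounted reward """
  discounted_r = np.zeros_like(r)
  running_add = 0
  for t in reversed(range(len(r))):
    if r[t] != 0: running_add = 0 # reset the sum, since this was a game boundary (Pong specific!)
    running_add = running_add * gamma + r[t]
    discounted_r[t] = running_add
  return discounted_r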
---
- name: Create Instance in AWS
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    aws_access_key: "xxxxxx"
    aws_secret_key: "xxxxxx"
    security_token: "xxxxxx"
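The play is cut off after its vars here. For context, a plausible continuation using the ec2 module of that era might look like the following; the task, AMI id, instance type, and region are my placeholders, not values from the original:

  tasks:
    - name: Launch an EC2 instance   # hypothetical task, not from the original
      ec2:
        aws_access_key: "{{ aws_access_key }}"
        aws_secret_key: "{{ aws_secret_key }}"
        security_token: "{{ security_token }}"
        image: ami-xxxxxx            # placeholder AMI id
        instance_type: t2.micro
        region: us-east-1
        wait: yes
      register: ec2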
Kubernetes spread like wildfire in 2017. No kidding! Here are some numbers from Scott's post:
“For companies with more than 5000 employees, Kubernetes is used by 48% and the primary orchestration tool for 33%.”
“79% of the sample chose Docker as their primary container technology.”
Riding the wave of Kubernetes, 2017 was a particularly fun year for Infrastructure/DevOps folks. After years of darkness, we finally had some cool tools to play with, and we started thinking about what we could do with such a paradigm shift. We tried to optimize developer velocity with Jenkins and Helm charts, with many more experiments to come :D
One thing I hold dear in my heart is democratizing Kubernetes for the Data team. It's a well-known fact that today's Data teams have to master an array of bleeding-edge technologies in or
# required env:
#   KUBE_NAMESPACE
# required file:
#   patch.json
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
if [ -z "$KUBE_TOKEN" ]; then
  echo "Error: KUBE_TOKEN is empty"
  exit 1
fi
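The script stops right after validating the token, presumably just before the actual patch call. A plausible continuation, assuming patch.json holds a strategic-merge patch for a Deployment (the deployment name is my stand-in, not the original's):

# hypothetical continuation: send patch.json to the API server from inside the cluster
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${KUBE_TOKEN}" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -X PATCH \
  --data @patch.json \
  "https://kubernetes.default.svc/apis/apps/v1/namespaces/${KUBE_NAMESPACE}/deployments/my-deployment"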
import io, os, sys, types
from IPython import get_ipython
from nbformat import read
from IPython.core.interactiveshell import InteractiveShell

class NotebookLoader(object):
    """Module Loader for Jupyter Notebooks"""
    def __init__(self, path=None):
        self.shell = InteractiveShell.instance()
        self.path = path
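The loader only does something once it is wired into Python's import machinery. The class is truncated above (its load_module body, which reads the .ipynb with nbformat and runs code cells in self.shell, isn't shown), so here is a hedged sketch of the registration side; NotebookFinder and its name-to-file mapping are my simplification of the standard Jupyter notebook-import recipe:

class NotebookFinder(object):
    """Meta-path finder: route `import foo` to foo.ipynb if it exists (sketch)."""
    def find_module(self, fullname, path=None):
        if os.path.isfile(fullname + ".ipynb"):
            return NotebookLoader(path)
        return None

sys.meta_path.append(NotebookFinder())
# after this, `import my_notebook` would load my_notebook.ipynb (hypothetical name)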
#!/bin/bash
if [[ -z ${MFA_DEVICE} ]]; then echo 'MFA_DEVICE is required'; exit 1; else echo 'MFA_DEVICE found'; fi
if [[ -z ${AWS_ACCESS_KEY_ID} ]]; then echo 'AWS_ACCESS_KEY_ID is required'; exit 1; else echo 'AWS_ACCESS_KEY_ID found'; fi
if [[ -z ${AWS_SECRET_ACCESS_KEY} ]]; then echo 'AWS_SECRET_ACCESS_KEY is required'; exit 1; else echo 'AWS_SECRET_ACCESS_KEY found'; fi

function aws_auth {
  CODE=$1
  RESPONSE=$2
  unset AWS_SESSION_TOKEN
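The function body is cut off after the unset. One plausible continuation, exchanging the MFA code for temporary credentials; the CREDS variable and the jq parsing are my guesses at the missing part, not the original script:

  # hypothetical continuation of aws_auth
  CREDS=$(aws sts get-session-token --serial-number "${MFA_DEVICE}" --token-code "${CODE}")
  export AWS_ACCESS_KEY_ID=$(echo "${CREDS}" | jq -r '.Credentials.AccessKeyId')
  export AWS_SECRET_ACCESS_KEY=$(echo "${CREDS}" | jq -r '.Credentials.SecretAccessKey')
  export AWS_SESSION_TOKEN=$(echo "${CREDS}" | jq -r '.Credentials.SessionToken')
}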
import json, os, re

def load_app_config(config_file_path):
    '''
    load json config file, example:
    {
        "query": {
            "port": 8000,
            "env1": "${ENV1:-default_value}",
            "env2": "${ENV2:-12}"
        }
    }
    '''
    raw = open(config_file_path).read()
    # expand shell-style ${VAR:-default} placeholders from the environment
    # (implementation sketch inferred from the docstring example above)
    expanded = re.sub(r'\$\{(\w+):-([^}]*)\}',
                      lambda m: os.environ.get(m.group(1), m.group(2)), raw)
    return json.loads(expanded)
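A quick usage sketch: with ENV1 unset in the environment, loading the docstring's example would fall back to the default (the file name below is hypothetical):

config = load_app_config("app_config.json")  # hypothetical path
print(config["query"]["env1"])               # -> "default_value" when ENV1 is unset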
import org.apache.avro.Schema
import org.apache.avro.generic.{GenericDatumReader, GenericRecord}

// avro schema path
val avroSchemaPath = "path-to/avro-schema.avsc"
val avroSchemaStr = scala.io.Source.fromFile(avroSchemaPath).mkString
val avroSchemaParser = new Schema.Parser()
val avroSchema = avroSchemaParser.parse(avroSchemaStr)
// create avro generic record reader
val avroGenericRecordReader = new GenericDatumReader[GenericRecord](avroSchema)
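The snippet builds the reader but never uses it; a hedged usage sketch for decoding one binary-encoded record (the bytes value is a placeholder for an Avro payload, not from the original):

import org.apache.avro.io.DecoderFactory

// hypothetical usage: decode a single Avro-encoded payload
val bytes: Array[Byte] = ???  // placeholder: one binary-encoded record
val decoder = DecoderFactory.get().binaryDecoder(bytes, null)
val record: GenericRecord = avroGenericRecordReader.read(null, decoder)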
"\e[A": history-search-backward | |
"\e[B": history-search-forward | |
set show-all-if-ambiguous on | |
set completion-ignore-case on | |
set bell-style none |