class MyClass(object):
    pass

# is identical to
# type(name, bases, dct)
# - name is a string giving the name of the class to be constructed
# - bases is a tuple giving the parent classes of the class to be constructed
# - dct is a dictionary of the attributes and methods of the class to be constructed
MyClass = type('MyClass', (), {})
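To make the three arguments concrete, here is a small sketch with a non-empty bases tuple and dct; the Base, greet, and answer names are illustrative, not from the original note:

class Base(object):
    pass

def greet(self):
    return 'hello from %s' % type(self).__name__

# Equivalent to: class Derived(Base): with a greet method and an answer attribute
Derived = type('Derived', (Base,), {'greet': greet, 'answer': 42})

d = Derived()
print(d.greet())                  # hello from Derived
print(d.answer)                   # 42
print(issubclass(Derived, Base))  # True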

Python Notes

Data Structures

List

from __future__ import print_function
import os
import psutil

# Handle to the current process; psutil exposes its memory usage.
process = psutil.Process(os.getpid())
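A hedged sketch of how this handle is typically used in notes like these: comparing resident memory (RSS) before and after building a large list. The list size is illustrative, not from the original gist.

before = process.memory_info().rss
big_list = list(range(10 ** 6))   # a million ints, purely for illustration
after = process.memory_info().rss
print('list of 1e6 ints costs ~%d bytes' % (after - before))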
Regardless of a programming language's characteristics, compiled or interpreted,
statically or dynamically typed, a large-scale software project written in
that language is essentially a tree of files.
When coding in C, and in many other compiled languages, the following things
are done separately:
- Build system: a practical build system (e.g. Makefile, CMake,
Bazel) does not come with the language itself; one needs to install one, and
extra build code has to be written. (Python, by contrast, handles this step
implicitly; see the sketch below.)
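A minimal sketch of the contrast, assuming a hypothetical package directory named myproject: Python's "build step" is implicit byte-compilation on import, and the standard compileall module makes it explicit over the whole file tree.

import compileall

# Byte-compile every module under the (hypothetical) myproject/ tree,
# the closest Python analogue of an explicit C build step.
compileall.compile_dir('myproject', quiet=1)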
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()
NOISE_DIMENSION = 10
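A hedged sketch of how NOISE_DIMENSION is typically used in an eager-mode GAN example like this one; BATCH_SIZE and the variable names are assumptions, not from the original gist.

BATCH_SIZE = 64  # assumed for illustration

# Sample a batch of latent vectors to feed the generator.
# tf.random_normal is the TF 1.x name matching the contrib.eager API above.
noise = tf.random_normal([BATCH_SIZE, NOISE_DIMENSION])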
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()
INPUT_DIMENSION = 784
with parameters() as params:
    fc1 = layers.Dense(hidden_dim, input_shape=(input_dim,))
    fc2 = layers.Dense(output_dim, input_shape=(hidden_dim,))

def forward(images, labels):
    x = fc1(images)
    x = layers.relu(x)
    x = fc2(x)
    logits = layers.relu(x)
    loss = losses.softmax_cross_entropy(logits, labels)
    return loss  # return the scalar loss so it can be differentiated
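A hedged sketch of one way to train the forward function above with the real tf.contrib.eager API; tfe.implicit_value_and_gradients and the learning rate are assumptions about the surrounding gist, not part of the original pseudocode.

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
# Wrap forward so a single call returns both the loss and the gradients
# with respect to all trainable variables it touched.
value_and_grads_fn = tfe.implicit_value_and_gradients(forward)
loss, grads_and_vars = value_and_grads_fn(images, labels)
optimizer.apply_gradients(grads_and_vars)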

Main inspiration comes from [here][1].

Here is what a deep learning system stack looks like nowadays.

  1. Operator-level graph description languages: whatever DL frameworks you care to name, plus [ONNX][2].
  2. Tensor-primitive-level graph description languages: [NNVM][3], [HLO/XLA][4], [NGraph][5]. This layer is close enough to the first one that you can also build graph optimization on the first layer and bypass this one.
  3. DSLs for description and codegen: TVM, and image processing languages like [halide][6] and [darkroom][7].
  4. Hardcoded optimized kernel libraries: [nnpack][8], [cudnn][9], [libdnn][10].
  5. Device-dependent libraries: [maxas][11] (an assembler for the NVIDIA Maxwell architecture).
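As a concrete touchpoint for the first level, a hedged sketch of exporting an operator-level graph to ONNX; PyTorch's torch.onnx.export is used here and the model choice is illustrative.

import torch
import torchvision

# Serialize the operator-level graph of a stock model to an .onnx file.
model = torchvision.models.resnet18(pretrained=False)
dummy_input = torch.randn(1, 3, 224, 224)  # one fake image batch to trace the graph
torch.onnx.export(model, dummy_input, 'resnet18.onnx')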

Start a pod:

kubectl run $POD_NAME --image=$IMAGE_NAME --port=$PORT_NUMBER --image-pull-policy=Never

Get the stdout of a container in a pod:

kubectl logs two-ubuntu-bash -c bash1