Kellen Sunderland (KellenSunderland)
KellenSunderland / Optimized
Created March 5, 2018 10:38
Docker layer optimizations
FROM ubuntu:16.04
COPY install/ubuntu_install_core.sh /install/
RUN /install/ubuntu_install_core.sh
COPY install/ubuntu_install_python.sh /install/
RUN /install/ubuntu_install_python.sh
COPY install/ubuntu_install_scala.sh /install/
RUN /install/ubuntu_install_scala.sh
COPY install/ubuntu_install_r.sh /install/
RUN /install/ubuntu_install_r.sh
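The pattern above copies each install script individually and runs it in its own layer, so Docker's build cache only invalidates the steps at and after a script that actually changed. For contrast, a minimal sketch of the unoptimized equivalent (same script names as the snippet above, single COPY of the whole install directory):
# Unoptimized variant for comparison: one COPY of the whole install/ directory
# means a change to any single script invalidates the cache for every RUN below.
FROM ubuntu:16.04
COPY install/ /install/
RUN /install/ubuntu_install_core.sh
RUN /install/ubuntu_install_python.sh
RUN /install/ubuntu_install_scala.sh
RUN /install/ubuntu_install_r.sh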
KellenSunderland / Dockerfile.cpu_clang
Created March 5, 2018 10:05
MKL docker container
FROM ubuntu:16.04
COPY install/ubuntu_install_core.sh /install/
RUN /install/ubuntu_install_core.sh
COPY install/ubuntu_install_python.sh /install/
RUN /install/ubuntu_install_python.sh
COPY install/ubuntu_install_scala.sh /install/
RUN /install/ubuntu_install_scala.sh
COPY install/ubuntu_install_r.sh /install/
RUN /install/ubuntu_install_r.sh
KellenSunderland / main.cpp
Created February 14, 2018 08:47
Reduce Test
#include <iostream>
#include <cuda_runtime.h>
#include <cstring>
#include <chrono>
int gpu_reduce(int size, const dim3 &block, const dim3 &grid, size_t bytes, int *h_idata, int *h_odata,
               int *d_idata, int *d_odata);
// CPU reference reduction used to validate the GPU result (the gist preview truncates here).
void cpu_reduce(int size, int *h_idata, int &cpu_sum) {
    cpu_sum = 0;
    for (int i = 0; i < size; ++i)
        cpu_sum += h_idata[i];
}
KellenSunderland / lighthead.py
Created February 2, 2018 14:19
Autotune repro.
import mxnet as mx
from collections import namedtuple
import numpy as np
import cv2
Batch = namedtuple('Batch', ['data'])
from scipy.misc import imread, imresize
import time
import os
from mxnet.gluon.model_zoo import vision
KellenSunderland / Dockerfile.build.master.jetson
Last active August 19, 2023 16:12
Jetson MXNet build recipe
# -*- mode: dockerfile -*-
# Work in progress; some of the manual steps below will be fixed in a subsequent release.
# Dockerfile to build libmxnet.so and a Python wheel for the Jetson TX1 and TX2.
# Builds from the GitHub MXNet master branch.
# Once complete, copy the artifacts from /work/build to the target device.
# Install by running 'pip install name_of_wheel.whl' and copying the .so to a folder on your LD_LIBRARY_PATH.
FROM nvidia/cuda:8.0-cudnn5-devel as cudabuilder
FROM dockcross/linux-arm64
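The two FROM lines set up a multi-stage build: the CUDA image serves only as a source of the toolkit, while the dockcross image performs the actual ARM64 cross-compile. The preview cuts off here; a hedged sketch of how the CUDA stage might be pulled into the build stage (the /usr/local/cuda path is an assumption, not shown above):
# Copy the CUDA toolkit out of the cudabuilder stage so nvcc and the CUDA
# libraries are available inside the cross-compilation image.
# (Source path is assumed; the gist preview ends before this step.)
COPY --from=cudabuilder /usr/local/cuda /usr/local/cuda
ENV PATH $PATH:/usr/local/cuda/bin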
KellenSunderland / Dockerfile
Created September 25, 2017 16:38
MXNet ARM cross-compilation config and Dockerfile to build a relatively portable armv6 Linux binary.
# -*- mode: dockerfile -*-
# Dockerfile to build libmxnet.so for armv6
FROM dockcross/linux-armv6
ENV ARCH armv6l
ENV BUILD_OPTS "USE_BLAS=openblas USE_SSE=0 USE_OPENCV=0"
ENV CC /usr/bin/arm-linux-gnueabihf-gcc
ENV CXX /usr/bin/arm-linux-gnueabihf-g++
ENV FC /usr/bin/arm-linux-gnueabihf-gfortran
ENV HOSTCC gcc
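The preview stops at the toolchain environment; presumably the build itself consumes BUILD_OPTS. A hedged sketch of what that continuation could look like (the repository URL and make invocation are assumptions, not part of the gist):
# Hypothetical continuation: fetch MXNet and build it with the cross toolchain
# and the options defined above. Not shown in the gist preview.
RUN git clone --recursive https://github.com/apache/incubator-mxnet.git /work/mxnet
WORKDIR /work/mxnet
RUN make -j$(nproc) $BUILD_OPTS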
KellenSunderland / Dockerfile
Last active September 25, 2017 19:17
Dockerfile to build OpenBLAS for a relatively portable armv6 Linux binary.
# -*- mode: dockerfile -*-
# Dockerfile to build openblas for armv6
FROM dockcross/linux-armv6
ENV ARCH armv6l
ENV BUILD_OPTS "USE_BLAS=openblas USE_SSE=0 USE_OPENCV=0"
ENV CC /usr/bin/arm-linux-gnueabihf-gcc
ENV CXX /usr/bin/arm-linux-gnueabihf-g++
ENV FC /usr/bin/arm-linux-gnueabihf-gfortran
ENV HOSTCC gcc
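Again the preview ends at the toolchain setup. As a hedged sketch, cross-compiling OpenBLAS with these variables could look roughly like this (the repository URL and make target are assumptions):
# Hypothetical continuation: fetch OpenBLAS and cross-compile it for armv6
# using the toolchain variables defined above. Not shown in the gist preview.
RUN git clone https://github.com/xianyi/OpenBLAS.git /work/OpenBLAS
WORKDIR /work/OpenBLAS
RUN make -j$(nproc) TARGET=ARMV6 CC=$CC FC=$FC HOSTCC=$HOSTCC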