sundisee / .curlrc
Created December 18, 2015 06:49 — forked from asmega/.curlrc
default proxy for curl in .curlrc
proxy=http://username:password@host:port
sundisee / sci_classifier.py
Created October 10, 2016 13:52 — forked from 2shou/sci_classifier.py
scikit-learn naive Bayes example
# coding: utf-8
import sys
import jieba
import numpy
from sklearn import metrics
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB
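The preview stops at the imports. Building on them, the sketch below shows one plausible way the pieces fit together: jieba segments the Chinese text, HashingVectorizer hashes the tokens into a fixed-size feature space, and MultinomialNB does the classification. The toy corpus, the variable names, and the alternate_sign=False setting are illustrative assumptions, not the original gist's code.

# Hedged sketch, not the gist's actual code: jieba-tokenized text, hashed
# term features, and a multinomial naive Bayes classifier.
train_texts = [u'机器学习真有趣', u'今天股市大涨']   # toy corpus, purely illustrative
train_labels = ['tech', 'finance']
test_texts = [u'深度学习模型']
test_labels = ['tech']

# alternate_sign=False keeps the hashed features non-negative, as MultinomialNB requires.
vectorizer = HashingVectorizer(tokenizer=jieba.cut, n_features=2 ** 16, alternate_sign=False)
X_train = vectorizer.transform(train_texts)
X_test = vectorizer.transform(test_texts)

clf = MultinomialNB(alpha=0.01)
clf.fit(X_train, train_labels)
pred = clf.predict(X_test)
print(metrics.classification_report(test_labels, pred))

HashingVectorizer pairs well with large corpora because it needs no fitted vocabulary, at the cost of not being able to map hashed features back to the original words.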
sundisee / capture_rect.js
Created October 11, 2016 03:39 — forked from kaid/capture_rect.js
PhantomJS script to batch-capture a specific region of web pages
var base = 'http://dev.kaid.me/c';
var ids = [1, 2, 3, 41, 42, 43, 51, 52, 53];
var done = 0;
ids.forEach(function(id) {
    var page = require('webpage').create(),
        url = base + id,
        output = 'c' + id + '.png';
    page.clipRect = {top: 122, left: 65, width: 360, height: 600}; // capture only this region
    page.open(url, function() {         // render the clipped region, exit after the last capture
        page.render(output);
        page.close();
        if (++done === ids.length) { phantom.exit(); }
    });
});
sundisee / ntfs_mount.py
Created May 4, 2017 14:38 — forked from selfboot/ntfs_mount.py
Mac OS X: automatically mount NTFS drives with read-write permission. As soon as an NTFS drive is connected, ./ntfs_mount_auto.py can mount it read-write; ntfs_unmount.py unmounts the disk. ntfs_mount.py is an earlier version: it can only remount the drive read-write after the system has already recognized it and its contents are readable under /Volumes. Using ./ntfs_mount_auto.py is recommended.
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import subprocess
import re
# Patterns for parsing `diskutil info` output: the filesystem type and the device node line.
ntfs_pattern = re.compile(r'File System Personality: NTFS')
ntfs_device_node = re.compile(r'.*Device Node:.*')
device_dict = {}
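The preview ends before any of the diskutil handling. Below is a minimal sketch of how the patterns above might be used, assuming the script shells out to diskutil and remounts via mount_ntfs; the helper names and the rw,auto,nobrowse options are illustrative assumptions, not the gist's actual implementation.

# Hedged sketch, not the gist's actual code: find NTFS partitions via diskutil,
# then remount one of them read-write with mount_ntfs (requires sudo).
def find_ntfs_nodes():
    out = subprocess.check_output(['diskutil', 'list']).decode('utf-8')
    nodes = []
    for ident in sorted(set(re.findall(r'\bdisk\d+s\d+\b', out))):
        info = subprocess.check_output(['diskutil', 'info', ident]).decode('utf-8')
        if ntfs_pattern.search(info):
            nodes.append('/dev/' + ident)
    return nodes

def remount_ntfs_rw(node, mount_point):
    # Illustrative recipe only: unmount the default read-only mount, then remount writable.
    subprocess.check_call(['diskutil', 'unmount', node])
    subprocess.check_call(['sudo', 'mount_ntfs', '-o', 'rw,auto,nobrowse', node, mount_point])

macOS mounts NTFS volumes read-only by default, which is what the rw remount works around; the mount point directory has to exist before mount_ntfs is called.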
sundisee / compile_tensorflow_serving.sh
Created March 10, 2018 06:13 — forked from jorgemf/compile_tensorflow_serving.sh
Compile TensorFlow Serving with CUDA support (October 2017). Requires Bazel 5.3, CUDA 8, and cuDNN 7. # UNSUPPORTED -> https://gist.github.com/jorgemf/c791841f769bff96718fd54bbdecfd4e
#!/bin/bash
TENSORFLOW_COMMIT=9e76bf324f6bac63137a02bb6e6ec9120703ea9b # August 16, 2017
TENSORFLOW_SERVING_COMMIT=267d682bf43df1c8e87332d3712c411baf162fe9 # August 18, 2017
MODELS_COMMIT=78007443138108abf5170b296b4d703b49454487 # July 25, 2017
# Default to ./serving when TENSORFLOW_SERVING_REPO_PATH is not set.
if [ -z "$TENSORFLOW_SERVING_REPO_PATH" ]; then
    TENSORFLOW_SERVING_REPO_PATH="serving"
fi
INITIAL_PATH=$(pwd)
sundisee / Dockerfile_TFserving_1_5
Created March 10, 2018 16:33 — forked from jorgemf/Dockerfile_TFserving_1_6
Dockerfile to compile TensorFlow Serving 1.5 with GPU support
# docker build --pull -t $USER/tensorflow-serving-devel -f Dockerfile .
# export TF_SERVING_PORT=9000
# export TF_SERVING_MODEL_PATH=/tf_models/mymodel
# export CONTAINER_NAME=my_container
# CUDA_VISIBLE_DEVICES=0 docker run --runtime=nvidia -it -p $TF_SERVING_PORT:$TF_SERVING_PORT -v $TF_SERVING_MODEL_PATH:/root/tf_model --name $CONTAINER_NAME $USER/tensorflow-serving-devel /root/serving/bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=$TF_SERVING_PORT --enable_batching=true --model_base_path=/root/tf_model/
# docker start -ai $CONTAINER_NAME
FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
ENV TF_CUDA_VERSION=9.0 \
VMware Workstation Pro 16.x Serials
YA7RA-F6Y46-H889Z-LZMXZ-WF8UA
ZV7HR-4YX17-M80EP-JDMQG-PF0RF
UC3XK-8DD1J-089NP-MYPXT-QGU80
GV100-84W16-M85JP-WXZ7E-ZP2R6
YF5X2-8MW91-4888Y-DNWGC-W68TF
AY1XK-0NG5P-0855Y-K6ZXG-YK0T4
VMware Workstation Player 16.x Serials