Ben Giles (benkant)

@benkant
benkant / simplevm.c
Created June 19, 2023 08:45 — forked from imbushuo/simplevm.c
Demonstrates Hypervisor.framework usage on Apple Silicon
// simplevm.c: demonstrates Hypervisor.framework usage on Apple Silicon
// Based on the work by @zhuowei
// @imbushuo - Nov 2020
// To build:
// With SIP off: prepare the entitlements with BOTH com.apple.security.hypervisor and com.apple.vm.networking
// With SIP on: prepare the entitlements with com.apple.security.hypervisor and NO com.apple.vm.networking
// ^ Per @never_released, tested on 11.0.1; the reason for the difference is unclear
// clang -o simplevm -O2 -framework Hypervisor -mmacosx-version-min=11.0 simplevm.c
// codesign --entitlements simplevm.entitlements --force -s - simplevm
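The codesign step above references a simplevm.entitlements file that the preview does not show. Below is a minimal sketch, assuming the SIP-on case (hypervisor entitlement only); writing the plist and re-running codesign from Python is purely illustrative.

#!/usr/bin/env python3
# Sketch: write a minimal entitlements plist for the SIP-on case described
# above (com.apple.security.hypervisor only), then sign the binary with the
# same codesign invocation as the build notes.
import subprocess

ENTITLEMENTS = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.hypervisor</key>
    <true/>
</dict>
</plist>
"""

with open("simplevm.entitlements", "w") as f:
    f.write(ENTITLEMENTS)

subprocess.run(["codesign", "--entitlements", "simplevm.entitlements",
                "--force", "-s", "-", "simplevm"], check=True)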
@benkant
benkant / tot.sh
Created November 27, 2020 19:37 — forked from MW3000/tot.sh
A shell script for Tot
#!/usr/bin/env bash
# Fork of zrzka's tot.sh https://gist.github.com/zrzka/5948256ac72c3f3820aebff1fb4b1b70
# which is a fork of gruber's tot.sh https://gist.github.com/gruber/b18d8b53385fa612713754799ed4d0a2
# which is a fork of chockenberry's tot.sh https://gist.github.com/chockenberry/d33ef5b6e6da4a3e4aa9b07b093d3c23
# Adds the ability to address dots by their default color names in addition to
# numbers, and to address the first empty dot as 'empty' in addition to 0.
# Exit immediately if a pipeline returns a non-zero status.
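A sketch, in Python for illustration only, of the name-to-number mapping the header comment describes; the color names and their ordering below are hypothetical placeholders, not necessarily Tot's actual palette.

def dot_number(arg):
    # Hypothetical names/order for illustration; Tot's real palette may differ.
    colors = {"yellow": 1, "green": 2, "blue": 3, "purple": 4, "red": 5, "gray": 6}
    if arg == "empty":      # first empty dot, per the comment above
        return 0
    if arg.isdigit():       # plain numeric access still works
        return int(arg)
    return colors[arg.lower()]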
@benkant
benkant / pg-pong.py
Created April 29, 2018 14:38 — forked from etienne87/pg-pong.py
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
from chainer import cuda
import cupy as cp
import time, threading
#backend
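The chainer/cuda and cupy imports above indicate this fork keeps the network's arrays on the GPU. A minimal sketch of the host/device round trip cupy provides, assuming a CUDA-capable cupy install:

import numpy as np
import cupy as cp

x = np.random.randn(80 * 80).astype(np.float32)  # a flattened 80x80 Pong frame
gx = cp.asarray(x)     # host -> device copy
gy = gx * 2.0          # elementwise work runs on the GPU
y = cp.asnumpy(gy)     # device -> host copy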
@benkant
benkant / cuda-setup.sh
Last active July 8, 2018 09:04 — forked from abdel/cuda-setup.sh
Install CUDA Toolkit v9.0 and cuDNN v7.0.5 on Ubuntu 16.04
#!/bin/bash
# Install CUDA Toolkit v9.0
# Instructions from https://developer.nvidia.com/cuda-downloads (linux -> x86_64 -> Ubuntu -> 16.04 -> deb (network))
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
CUDA_REPO_PKG="cuda-repo-ubuntu1604_9.0.176-1_amd64.deb"
wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
sudo dpkg -i ${CUDA_REPO_PKG}
sudo apt-get update
sudo apt-get -y install cuda-9-0
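A quick sanity check after the steps above, sketched in Python: confirm the driver and the toolkit agree on a CUDA 9.0 setup. This assumes the cuda-9-0 package installed to its default /usr/local/cuda-9.0 prefix.

import subprocess

# nvidia-smi reports the driver's view; nvcc reports the toolkit version.
for cmd in (["nvidia-smi"], ["/usr/local/cuda-9.0/bin/nvcc", "--version"]):
    out = subprocess.run(cmd, capture_output=True, text=True)
    print(out.stdout)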
@benkant
benkant / mt940toOFX.py
Created March 1, 2018 04:39
Convert SWIFT MT940 data to OFX.
#!/usr/bin/env python
# encoding: utf-8
"""
mt940toOFX.py - This program reads MT940 SWIFT account statements and converts them to OFX.
The OFX output was tested with xero.com.
Created by Maximillian Dornseif on 2010-06-05.
Copyright (c) 2010, 2013, 2014 HUDORA. All rights reserved.
"""
@benkant
benkant / tensorflow-cpu.sh
Last active December 11, 2017 15:54 — forked from Zhomart/tensorflow-cpu.sh
Compile and install TensorFlow on Ubuntu 16.04. You might want to create a virtual machine on GCP or AWS, compile TensorFlow there, and then download the compiled `*.whl` file for later use. A list of TensorFlow wheels compiled with different options can be found at https://github.com/yaroslavvb/tensorflow-community-wheels/issues.
#!/bin/bash
##
## FROM https://github.com/floydhub/dl-docker
##
## Before running the script change versions and compilation flags below.
## If you're having trouble running the whole script, try running
## each command separately.
##
## List of compiled tensorflow packages https://github.com/yaroslavvb/tensorflow-community-wheels/issues
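Once the built wheel is installed, a minimal smoke test, assuming the TF 1.x API contemporary with this script; if the compilation flags took effect, the usual "not compiled to use SSE/AVX" warnings should no longer appear on import.

import tensorflow as tf

print(tf.__version__)
with tf.Session() as sess:           # TF 1.x session API
    print(sess.run(tf.constant("build ok")))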
@benkant
benkant / pg-pong.py
Created September 4, 2017 05:52 — forked from karpathy/pg-pong.py
Training a Neural Network ATARI Pong agent with Policy Gradients from raw pixels
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """
import numpy as np
import cPickle as pickle
import gym
# hyperparameters
H = 200 # number of hidden layer neurons
batch_size = 10 # every how many episodes to do a param update?
learning_rate = 1e-4
gamma = 0.99 # discount factor for reward
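For reference, the step the gamma hyperparameter above feeds into, mirroring the discount_rewards helper that appears later in the gist (using the np import and gamma defined above): returns are accumulated backwards through the episode, resetting at every nonzero reward because each Pong point is its own mini-episode.

def discount_rewards(r):
    """Take a 1D float array of rewards and compute the discounted return."""
    discounted = np.zeros_like(r)
    running = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running = 0.0  # reset: a point was just scored (Pong-specific)
        running = running * gamma + r[t]
        discounted[t] = running
    return discounted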