CQF Lecture, 09. April 2018, London
Dr. Yves J. Hilpisch, The Python Quants GmbH
Resources
# Resample images to 2 mm spacing with SimpleITK
import numpy as np
import SimpleITK as sitk

def resample_img(itk_image, out_spacing=[2.0, 2.0, 2.0], is_label=False):
    original_spacing = itk_image.GetSpacing()
    original_size = itk_image.GetSize()
    # Scale the voxel count so the physical extent is preserved
    out_size = [int(np.round(osz * (ospc / nspc)))
                for osz, ospc, nspc in zip(original_size, original_spacing, out_spacing)]
    resample = sitk.ResampleImageFilter()
    resample.SetOutputSpacing(out_spacing)
    resample.SetSize(out_size)
    resample.SetOutputDirection(itk_image.GetDirection())
    resample.SetOutputOrigin(itk_image.GetOrigin())
    # Nearest neighbour keeps label values intact; B-spline for intensity images
    resample.SetInterpolator(sitk.sitkNearestNeighbor if is_label else sitk.sitkBSpline)
    return resample.Execute(itk_image)
# Compute FWHM(x, y) using a 2D Gaussian fit (least-squares optimization).
# The optimization fits a 2D Gaussian: center, sigmas, baseline and amplitude.
# Works best if there is only one blob and it is close to the image center.
# Author: Nikita Vladimirov @nvladimus (2018).
# Based on this code example: https://stackoverflow.com/questions/21566379/fitting-a-2d-gaussian-function-using-scipy-optimize-curve-fit-valueerror-and-m
import numpy as np
import scipy.optimize as opt

def twoD_GaussianScaledAmp(xy, xo, yo, sigma_x, sigma_y, amplitude, offset):
    # Python 3 no longer allows tuple parameters in signatures,
    # so (x, y) is passed as a single argument and unpacked here.
    x, y = xy
    g = offset + amplitude * np.exp(
        -(((x - xo) ** 2) / (2 * sigma_x ** 2)
          + ((y - yo) ** 2) / (2 * sigma_y ** 2)))
    return g.ravel()
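The fit itself is typically driven with `scipy.optimize.curve_fit`; a minimal sketch, where the grid size, blob parameters, and initial guess are illustrative assumptions rather than part of the original gist:

```python
import numpy as np
import scipy.optimize as opt

def twoD_GaussianScaledAmp(xy, xo, yo, sigma_x, sigma_y, amplitude, offset):
    x, y = xy
    g = offset + amplitude * np.exp(
        -(((x - xo) ** 2) / (2 * sigma_x ** 2)
          + ((y - yo) ** 2) / (2 * sigma_y ** 2)))
    return g.ravel()

# Synthetic blob near the image center (illustrative values)
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = twoD_GaussianScaledAmp((x, y), 32, 30, 4.0, 5.0, 1.0, 0.1).reshape(64, 64)

# Initial guess: image center, moderate widths
p0 = (32, 32, 3, 3, 1, 0)
popt, _ = opt.curve_fit(twoD_GaussianScaledAmp, (x, y), img.ravel(), p0=p0)
xo, yo, sigma_x, sigma_y = popt[:4]

# FWHM of a Gaussian is 2*sqrt(2*ln 2) * sigma (approx. 2.355 * sigma)
FWHM_x = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma_x)
FWHM_y = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma_y)
```

On noiseless data the fit recovers the generating parameters almost exactly; with real images a reasonable initial guess for the center matters most, which is why the docstring warns that the blob should sit near the image center.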
# This is a note on https://blog.pjsen.eu/?p=440
I did a little research and found that Git Bash uses a MinGW compilation of the GNU tools, but it ships only a selected subset of them. You can install the full distribution of the tools from https://www.msys2.org/, use it to install tmux, and then copy a few files into Git's installation folder.
This is what you do:
Install the aforementioned MSYS2 package and run its bash shell.
Install tmux using the following command: pacman -S tmux
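The copy step above can be sketched as follows; the paths assume default install locations, and the exact DLL name varies with the MSYS2 version, so check what `pacman` actually installed:

```shell
# From an MSYS2 shell (default install paths assumed; adjust for your setup).
pacman -S tmux

# Copy the tmux binary and its event library into Git's usr/bin.
# The DLL version suffix differs between MSYS2 releases
# (e.g. msys-event-2-1-7.dll), hence the wildcard.
cp /usr/bin/tmux.exe "/c/Program Files/Git/usr/bin/"
cp /usr/bin/msys-event-2-1-*.dll "/c/Program Files/Git/usr/bin/"
```

After the copy, `tmux` should start from a fresh Git Bash window.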
import logging
import sys
from logging.handlers import TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
LOG_FILE = "my_app.log"

def get_console_handler():
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    return console_handler

def get_file_handler():
    # Rotate the log file at midnight so one file covers one day
    file_handler = TimedRotatingFileHandler(LOG_FILE, when="midnight")
    file_handler.setFormatter(FORMATTER)
    return file_handler
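Handler factories like these are usually tied together by a `get_logger` helper; the snippet is truncated, so the helper below is a sketch under that assumption (the name `get_logger` and its exact settings are not from the original):

```python
import logging
import sys

FORMATTER = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

def get_console_handler():
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    return console_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG)
    # Guard against adding a second handler if get_logger is called twice
    if not logger.handlers:
        logger.addHandler(get_console_handler())
    # Stop propagation so the root logger does not emit duplicate lines
    logger.propagate = False
    return logger

log = get_logger("demo")
log.info("logger configured")
```

`logging.getLogger` returns the same object for the same name, so the `if not logger.handlers` guard is what keeps repeated calls idempotent.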
Many aircraft that offer wifi only permit access to machines on ports 80/443, the standard HTTP(S) ports. If you want to SSH, you have to set up an intermediate machine that hosts the SSH service on either port 80 or 443. An easy (and free) way to do this is via a Google free-tier micro instance. These instances have a 1 GB transfer ceiling per month, but as long as you are only transmitting textual data a few days per month, this limit should not be exceeded. Set up one of these VMs via the Google Cloud console, and select CentOS 7 as the disk image. Make sure that you allow HTTP/HTTPS traffic on the instance via the two checkboxes in the Firewalls section of the VM settings. Optionally, set a static external IP address for your server in the VM config, in case you don't want to look up the IP each time. Then, SSH into the new VM (the IP address will be listed as the "external IP" in the list of instances) and edit the SSH daemon configuration so that sshd also listens on port 443.
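On CentOS 7 the daemon-side change might look roughly like this; the SELinux step is needed because stock policy only allows sshd to bind port 22, and the exact package and file contents here are assumptions based on a default install:

```shell
# On the CentOS 7 VM. Tell SELinux that sshd may also bind port 443.
sudo yum install -y policycoreutils-python
sudo semanage port -a -t ssh_port_t -p tcp 443

# Listing any Port directive overrides the default, so keep 22 as well.
echo "Port 22" | sudo tee -a /etc/ssh/sshd_config
echo "Port 443" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd

# From the plane, connect on the HTTPS port:
# ssh -p 443 youruser@EXTERNAL_IP
```

Keeping port 22 open avoids locking yourself out if the 443 binding fails.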
# Full example for my blog post at:
# https://danijar.com/building-variational-auto-encoders-in-tensorflow/
# Requires TensorFlow 1.x: tf.contrib and the MNIST tutorial module
# were removed in TensorFlow 2.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

tfd = tf.contrib.distributions
import boto3
import json

def dump_task_container_defs(
        output_path='services.',
        cluster_filter=lambda c_arn: True,
        service_filter=lambda s_arn: True,
        task_filter=lambda t_arn: True):
    """Dump ECS task container definitions as JSON under output_path.

    Each filter callable receives an ARN and returns True to include it.
    """
# Reliable persistent SSH tunnel via systemd (not autossh)
# https://gist.github.com/guettli/31242c61f00e365bbf5ed08d09cdc006#file-ssh-tunnel-service

[Unit]
Description=Tunnel for %i
After=network.target

[Service]
User=tunnel
ExecStart=/usr/bin/ssh -o "ExitOnForwardFailure yes" -o "ServerAliveInterval 60" -N tunnel@%i
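The unit file is truncated here; for the tunnel to actually be "reliable and persistent" it would typically continue with restart settings and an [Install] section, roughly as below. These lines are assumptions based on common systemd template-unit practice, not the original gist:

```ini
# Continuation sketch: these lines extend the [Service] section above
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Saved as a template unit such as `ssh-tunnel@.service`, it would then be enabled per host with `systemctl enable --now ssh-tunnel@example.com`, where `%i` expands to `example.com`.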
# docker build --pull -t tf/tensorflow-serving --label 1.6 -f Dockerfile .
# export TF_SERVING_PORT=9000
# export TF_SERVING_MODEL_PATH=/tf_models/mymodel
# export CONTAINER_NAME=tf_serving_1_6
# CUDA_VISIBLE_DEVICES=0 docker run --runtime=nvidia -it -p $TF_SERVING_PORT:$TF_SERVING_PORT -v $TF_SERVING_MODEL_PATH:/root/tf_model --name $CONTAINER_NAME tf/tensorflow-serving /usr/local/bin/tensorflow_model_server --port=$TF_SERVING_PORT --enable_batching=true --model_base_path=/root/tf_model/
# docker start -ai $CONTAINER_NAME

FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04