Graham Duncan duncangh

duncangh / postgres_queries_and_commands.sql
Created October 31, 2019 03:53 — forked from rgreenjr/postgres_queries_and_commands.sql
Useful PostgreSQL Queries and Commands
-- show running queries (pre 9.2)
SELECT procpid, age(clock_timestamp(), query_start), usename, current_query
FROM pg_stat_activity
WHERE current_query != '<IDLE>' AND current_query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
-- show running queries (9.2+; the '<IDLE>' sentinel was replaced by the state column)
SELECT pid, age(clock_timestamp(), query_start), usename, query
FROM pg_stat_activity
WHERE state != 'idle' AND query NOT ILIKE '%pg_stat_activity%'
ORDER BY query_start desc;
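Once a runaway pid turns up, the usual follow-up (standard PostgreSQL, not part of the forked gist) is to cancel the query or, failing that, terminate the backend:

-- cancel a query without dropping the connection (12345 is a placeholder pid)
SELECT pg_cancel_backend(12345);
-- forcibly terminate the backend if cancel is ignored
SELECT pg_terminate_backend(12345);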
duncangh / readme.md
Created October 9, 2019 22:19 — forked from baraldilorenzo/readme.md
VGG-16 pre-trained model for Keras

## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition

K. Simonyan, A. Zisserman
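If you just need the pre-trained network rather than the hand-converted weights this gist describes, Keras now bundles VGG16 directly; a minimal sketch of that alternative (the image path is a placeholder):

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

# build the 16-layer network and download the ImageNet weights
model = VGG16(weights='imagenet')

# classify a single image
img = image.load_img('elephant.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])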

duncangh / cache_example.py
Created October 6, 2019 02:18 — forked from treuille/cache_example.py
This demonstrates the st.cache function
import streamlit as st
import pandas as pd
# Reuse this data across runs!
read_and_cache_csv = st.cache(pd.read_csv)
BUCKET = "https://streamlit-self-driving.s3-us-west-2.amazonaws.com/"
data = read_and_cache_csv(BUCKET + "labels.csv.gz", nrows=1000)
desired_label = st.selectbox('Filter to:', ['car', 'truck'])
st.write(data[data.label == desired_label])
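st.cache also works as a decorator on your own functions; a sketch of the same idea (load_data is illustrative, not from the gist):

import streamlit as st
import pandas as pd

@st.cache  # memoize results across script reruns, keyed on the arguments
def load_data(url, nrows):
    return pd.read_csv(url, nrows=nrows)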
duncangh / happy_git_on_osx.md
Created September 19, 2019 02:07 — forked from trey/happy_git_on_osx.md
Creating a Happy Git Environment on OS X

Step 1: Install Git

brew install git bash-completion

Configure things:

git config --global user.name "Your Name"

git config --global user.email "[email protected]"
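To confirm both settings took, a standard check (not shown in the gist's preview):

git config --global --list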

duncangh / .bash_profile
Created September 19, 2019 01:03 — forked from natelandau/.bash_profile
Mac OSX Bash Profile
# ---------------------------------------------------------------------------
#
# Description: This file holds all my BASH configurations and aliases
#
# Sections:
# 1. Environment Configuration
# 2. Make Terminal Better (remapping defaults and adding functionality)
# 3. File and Folder Management
# 4. Searching
# 5. Process Management
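For flavor, the kind of one-liner the "Make Terminal Better" section holds (illustrative; the preview cuts off before the body of the file):

# prefer long, human-readable, colorized listings by default
alias ll='ls -FGlAhp'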
duncangh / picblast.sh
Created September 2, 2019 15:21 — forked from bwhitman/picblast.sh
Make an audio collage out of your live photos
# extract the audio track of every Live Photo clip, then concatenate the lot
mkdir /tmp/picblast
cd ~/Pictures/Photos\ Library.photoslibrary
for i in `find . | grep jpegvideocompl`; do
    ffmpeg -i "$i" "/tmp/picblast/${i:(-8)}.wav"
done
cd /tmp/picblast
ffmpeg -safe 0 -f concat -i <( for f in *.wav; do echo "file '$(pwd)/$f'"; done ) ~/Desktop/picblast.wav
rm -rf /tmp/picblast
from ftplib import FTP
from datetime import datetime

start = datetime.now()
ftp = FTP('your-ftp-domain-or-ip')
ftp.login('your-username', 'your-password')

# Get All Files
files = ftp.nlst()
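The preview stops at the listing; a likely continuation (hedged sketch, not the gist's actual code) downloads each file and closes the session:

# download every file in the listing into the current directory
for name in files:
    with open(name, 'wb') as fp:
        ftp.retrbinary('RETR ' + name, fp.write)

ftp.quit()
print('Elapsed:', datetime.now() - start)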
duncangh / pandas_s3_streaming.py
Created April 7, 2019 03:03 — forked from uhho/pandas_s3_streaming.py
Streaming pandas DataFrame to/from S3 with on-the-fly processing and GZIP compression
import gzip
import pandas as pd

def s3_to_pandas(client, bucket, key, header=None):
    # get key using boto3 client
    obj = client.get_object(Bucket=bucket, Key=key)
    gz = gzip.GzipFile(fileobj=obj['Body'])
    # load stream directly to DF
    return pd.read_csv(gz, header=header, dtype=str)

def s3_to_pandas_with_processing(client, bucket, key, header=None):
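The preview cuts off here, but the title promises the reverse direction too; a hedged sketch of streaming a DataFrame back to S3 with GZIP compression (pandas_to_s3 and its flow are my assumption, not the gist's code):

import gzip
from io import BytesIO

def pandas_to_s3(df, client, bucket, key):
    # serialize to CSV, compress in memory, then upload with boto3
    buf = BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
        gz.write(df.to_csv(index=False).encode('utf-8'))
    client.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())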
duncangh / s3_to_pandas.py
Created April 7, 2019 03:01 — forked from jaklinger/s3_to_pandas.py
Read CSV (or JSON etc) from AWS S3 to a Pandas dataframe
import boto3
import pandas as pd
from io import BytesIO
bucket, filename = "bucket_name", "filename.csv"
s3 = boto3.resource('s3')
obj = s3.Object(bucket, filename)
with BytesIO(obj.get()['Body'].read()) as bio:
    df = pd.read_csv(bio)
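The "(or JSON etc)" in the title needs only a different final read call; a minimal variant under the same setup:

with BytesIO(obj.get()['Body'].read()) as bio:
    df = pd.read_json(bio)  # read_json in place of read_csv for a JSON object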
import token
import tokenize
from six.moves import cStringIO as StringIO
import json
from pandas.io.json import json_normalize
import requests
import pandas as pd
def __fix_lazy_json(in_text):