As configured in my dotfiles.
start new:
tmux
start new with session name:
tmux new -s myname
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Usage: %prog [width [columns]] < table.csv

Pretty-print CSV file with fixed-width columns.

Arguments:
#!/usr/bin/env python2.7
#
# Install this in your PATH as `pyhash`.
#
# curl https://gist.github.com/jbenet/6502583/raw/pyhash.py -o pyhash
# mv pyhash /usr/local/bin/pyhash
# chmod +x /usr/local/bin/pyhash
#
# If you want more cryptographic hashing functions, try the PassLib module.
#
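The script body itself isn't included in this fragment. A minimal sketch of what a stdin-hashing `pyhash` tool can do with the standard `hashlib` module (the function name and argument shape are assumptions, not the original gist's code):

```python
import hashlib

def pyhash(algorithm, data):
    """Return the hex digest of `data` (bytes) under the named hashlib algorithm."""
    h = hashlib.new(algorithm)  # accepts "md5", "sha1", "sha256", ...
    h.update(data)
    return h.hexdigest()

# pyhash("sha1", b"abc") -> "a9993e364706816aba3e25717850c26c9cd0d89d"
```

A command-line wrapper would read `sys.stdin` and take the algorithm name as its first argument.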
from flask import _app_ctx_stack

def my_local(init):
    key = object()
    def getter():
        t = _app_ctx_stack.top
        l = getattr(t, 'my_locals', None)  # default avoids AttributeError on first use
        if l is None:
            t.my_locals = l = {}
        if key not in l:
            l[key] = init()
        return l[key]
    return getter
from inspect import getattr_static

class dep:
    name = None
    def __init__(self, type):
        self.type = type
__author__ = 'archeg'

import httplib
import urllib
import urllib2
import re

def URLRequest(url, params, headers, method="GET"):
    if method == "POST":
        return urllib2.Request(url, data=urllib.urlencode(params), headers=headers)
    return urllib2.Request(url + "?" + urllib.urlencode(params), headers=headers)
http://geekgirl.io/concurrent-http-requests-with-python3-and-asyncio/
My friend, who is a data scientist, had whipped up a script that made lots (over 27K) of queries to the Google Places API. The problem was that it was synchronous and thus took over 2.5 hours to complete.
Given that I'm currently attending Hacker School and get to spend all day working on any coding problem that interests me, I decided to try to optimise it.
I'm new to Python, so I had to do a bit of groundwork first to determine which course of action was best.
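The linked post ultimately uses asyncio to fan the queries out concurrently. A self-contained sketch of that pattern (Python 3.7+; the fetch is simulated with `asyncio.sleep`, since the real Places API calls aren't reproducible here):

```python
import asyncio

async def fetch(query):
    # Stand-in for a real HTTP request (e.g. via aiohttp): each "request"
    # yields to the event loop instead of blocking on network I/O.
    await asyncio.sleep(0.01)
    return "result for %s" % query

async def main(queries):
    # Launch all requests concurrently; gather preserves input order.
    return await asyncio.gather(*(fetch(q) for q in queries))

results = asyncio.run(main(["q%d" % i for i in range(100)]))
```

Because the 100 simulated requests overlap, the whole batch takes roughly one request's latency rather than the sum, which is the same effect that cut the 2.5-hour synchronous run down.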
import logging
from contextlib import contextmanager
from timeit import default_timer

time_logger = logging.getLogger(__package__ + ".timer")

@contextmanager
def timed_code(name=None):
    next_unit = iter(("s", "ms", "us", "ns")).next  # descending units; Python 2 .next
    msg = "section %s took" % (name,) if name else "section took"
    start = default_timer()
    yield
    duration = default_timer() - start
    unit = next_unit()
    while 0 < duration < 1:
        duration *= 1000
        unit = next_unit()
    time_logger.info("%s %.3f%s", msg, duration, unit)