start new:
tmux
start new with session name:
tmux new -s myname
package main

import (
	"fmt"
	"reflect"
)

type Foo struct {
	FirstName string `tag_name:"tag 1"`
	LastName  string `tag_name:"tag 2"`
}

// Print each field's tag_name value using reflection.
func main() {
	t := reflect.TypeOf(Foo{})
	for i := 0; i < t.NumField(); i++ {
		fmt.Println(t.Field(i).Name, t.Field(i).Tag.Get("tag_name"))
	}
}
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and successors.
import os
import subprocess


def get_google_news_homepage():
    print("this will fetch the current google news home page as text.")
    print("it will use the requests and lxml libraries")
    print("press enter to continue")
    input()
    import requests
    # fetch the page and return its HTML text (URL assumed from the description)
    response = requests.get("https://news.google.com")
    return response.text
#!/bin/bash
###########################################################################################
# bootstrap.sh
#
# To run directly from web:
# wget -O - https://gist.githubusercontent.com/whoshuu/11159710/raw/bootstrap.sh | bash
# Else:
# ./bootstrap.sh
#
from __future__ import division

import os
import numpy
from PIL import Image, ImageDraw


class Renderer:
    def __init__(self, size=(640, 480), sampler=None):
        # size: output image dimensions; sampler: optional sampling callable
        self.size = size
        self.sampler = sampler
Notes on how to install Caffe on Ubuntu. This was successfully installed using the CPU only; for more information on GPU installation, see this link.
### Installation
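# check whether an NVIDIA GPU is present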
lspci | grep -i nvidia
# internal
import os
import re
import subprocess
# external
import twython

# prosaic, obviously
# env var names after API_KEY are assumed; the original snippet is truncated here
twitter = twython.Twython(
    os.environ['API_KEY'],
    os.environ['API_SECRET'],
    os.environ['ACCESS_TOKEN'],
    os.environ['ACCESS_TOKEN_SECRET'],
)
A lot of us are interested in doing more analysis with our service logs, so I thought I'd share an experiment I'm doing with Sync. The main idea is to transform the raw logs into something that is nice to query and generate reports from in Redshift.
Logs make their way into an S3 bucket (let's call it the 'raw' bucket) where we've got a lambda listening for new data. This lambda reads the raw Heka protobuf gzipped data, does some transformation, and writes a new file to a different S3 bucket (the 'processed' bucket) in a format that is Redshift-friendly (like JSON or CSV). There's another lambda listening on the processed bucket that loads this data into Redshift.
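To make the flow concrete, here is a minimal sketch of what that first lambda could look like, assuming Python with boto3. The bucket name, output key naming, and the `parse_heka_records` stub are placeholders, not the actual implementation.

```python
import gzip
import json

import boto3

s3 = boto3.client('s3')
PROCESSED_BUCKET = 'sync-logs-processed'  # placeholder bucket name


def parse_heka_records(raw_bytes):
    # stub: a real version would decode the Heka protobuf framing into
    # a list of dicts, one per log message
    return []


def handler(event, context):
    # fired by an S3 "object created" notification on the raw bucket
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # read and decompress the gzipped Heka protobuf log file
        raw = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        messages = parse_heka_records(gzip.decompress(raw))

        # write newline-delimited JSON that Redshift's COPY can load
        body = '\n'.join(json.dumps(m) for m in messages)
        s3.put_object(
            Bucket=PROCESSED_BUCKET,
            Key=key + '.json',
            Body=body.encode('utf-8'),
        )
```

The second lambda would follow the same pattern, except its trigger is the processed bucket and its body issues a Redshift COPY against the newly written object.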