Marian Steinbach (marians)
@marians
marians / CouchDB_Python.md
Last active June 14, 2025 02:00
The missing Python couchdb tutorial

This is an unofficial manual for the couchdb Python module I wish I had had.

Installation

pip install couchdb

Importing the module

@marians
marians / Question.md
Last active September 3, 2015 16:27
A question to Giant Swarm users regarding updating a service definition

Dear users,

We have an important question for you about improving one aspect of our product.

When you create a service on Giant Swarm using a service definition (usually a swarm.json file), you currently can't modify that service after it has been created.

We know that many of you want this to change. The question is, how would you like this to work?

1) Submitting new swarm.json

@marians
marians / main.go
Last active August 29, 2015 14:24
Some colored terminal output using mgutz/ansi
package main

import (
	"fmt"

	"github.com/mgutz/ansi"
	"github.com/ryanuber/columnize"
)

func main() {
	config := columnize.DefaultConfig()
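The preview above cuts off, but the underlying idea that mgutz/ansi abstracts — wrapping text in ANSI escape sequences — can be sketched in a few stdlib-only Python lines (illustrative, not the gist's Go code):

```python
# ESC[<n>m sets a graphics mode (31 = red foreground, 32 = green);
# ESC[0m resets the terminal back to its default rendering.
def colorize(text, code):
    return "\x1b[{}m{}\x1b[0m".format(code, text)

print(colorize("error", 31))  # rendered red in an ANSI terminal
print(colorize("ok", 32))     # rendered green
```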
@marians
marians / Dockerfile
Created May 12, 2015 14:47
Giant Swarm Websocket example
FROM python:2.7-slim
ENV DEBIAN_FRONTEND noninteractive
RUN set -x \
&& apt-get -q update \
&& apt-get install -yq --no-install-recommends git-core build-essential \
&& pip install cython \
&& pip install git+https://github.com/gevent/gevent.git#egg=gevent \
&& pip install Flask \
@marians
marians / Why_Ghost_with_CouchDB.md
Last active August 29, 2015 14:16
Why Ghost with CouchDB?

Dear Hannah Wolfe,

In your tweet you said you'd like to know why we need a database cluster for our blog. Thanks for that question; I'm happy to respond.

Sooner or later, every piece of hardware fails. That's why we run clusters for every server app and go to great lengths to avoid single points of failure.

Our cluster is built on the idea of immutable infrastructure. Our apps run inside Docker containers. When a node in our cluster fails, the containers running on that node are started on a different node. In the meantime, identical instances of the apps running on other machines handle the requests.

This principle is common for stateless applications, but we also want it to be valid for databases. To avoid having a database server be a single point of failure, databases must be run on clusters which share parts of their data.

@marians
marians / Dockerfile
Last active August 29, 2015 14:16
Writing to a Giant Swarm volume from a jenkins container
FROM jenkins:latest
COPY test.sh /test.sh
ENTRYPOINT ["/bin/bash", "/test.sh"]
@marians
marians / test.py
Last active July 10, 2017 23:47
Using Tor Browser Bundle for anonymous HTTP requests in Python - supplement for http://www.sendung.de/2014-09-16/anonymous-scraping-via-python-tor/
import socket
import socks # pip install PySocks - https://github.com/Anorov/PySocks
# configure default proxy. 9150 is the Tor Browser Bundle socks proxy port
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9150)
socket.socket = socks.socksocket
import urllib
# Python 2 API; in Python 3 use urllib.request.urlopen instead
print(urllib.urlopen('http://icanhazip.com').read())
@marians
marians / bench.py
Last active March 13, 2025 19:48
Benchmarking serialization/unserialization in python using json, pickle and cPickle
import cPickle
import pickle
import json
import random
from time import time
from hashlib import md5
test_runs = 1000
def float_list():
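The preview truncates before the benchmark itself. The pattern it describes — timing round-trip serialization of the same payload with different modules — can be sketched with the stdlib alone (the helper names and run counts here are illustrative, not the gist's):

```python
import json
import pickle
import random
from time import time

def float_list(n=1000):
    # illustrative payload: a list of random floats
    return [random.random() for _ in range(n)]

def bench(dumps, loads, data, runs=100):
    # time `runs` full dump/load round-trips and return elapsed seconds
    start = time()
    for _ in range(runs):
        loads(dumps(data))
    return time() - start

data = float_list()
for name, dumps, loads in [
    ("json", json.dumps, json.loads),
    ("pickle", pickle.dumps, pickle.loads),
]:
    print("{}: {:.4f}s".format(name, bench(dumps, loads, data)))
```

(cPickle exists only on Python 2; on Python 3 the C implementation is used by `pickle` automatically.)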
@marians
marians / queue.py
Created May 24, 2013 14:33
Untested version of a job queue that relies on MongoDB
"""
Untested version of some job queue
Usage:
from pymongo import MongoClient
db = MongoClient()
queue = Queue("myqueue", db)
job = {
'key': 'foobar',
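The preview ends mid-example. A minimal sketch of the claim pattern such a MongoDB-backed queue might use (hypothetical, not the gist's actual code): jobs are documents with a `status` field, and a worker claims the oldest pending job by atomically flipping `pending` to `running`. A tiny in-memory stand-in for the two collection calls lets the sketch run without a MongoDB server; with real pymongo you would also pass `return_document=ReturnDocument.AFTER` to get the updated document back.

```python
import datetime

class Queue:
    def __init__(self, name, db):
        self.collection = db[name]

    def put(self, job):
        # enqueue: mark the job pending and timestamp it for FIFO ordering
        job.setdefault("status", "pending")
        job["created"] = datetime.datetime.utcnow()
        return self.collection.insert_one(job).inserted_id

    def get(self):
        # atomic claim: no two workers can take the same job
        return self.collection.find_one_and_update(
            {"status": "pending"},
            {"$set": {"status": "running"}},
            sort=[("created", 1)],
        )

# In-memory stand-in for the two pymongo calls used above, so the
# sketch runs without a server (illustration only):
class _FakeCollection:
    def __init__(self):
        self.docs = []

    def insert_one(self, doc):
        self.docs.append(doc)
        return type("R", (), {"inserted_id": len(self.docs) - 1})()

    def find_one_and_update(self, query, update, sort=None):
        matches = [d for d in self.docs
                   if all(d.get(k) == v for k, v in query.items())]
        if sort:
            key, _ = sort[0]
            matches.sort(key=lambda d: d[key])
        if not matches:
            return None
        matches[0].update(update["$set"])
        return matches[0]

queue = Queue("myqueue", {"myqueue": _FakeCollection()})
queue.put({"key": "foobar"})
job = queue.get()
print(job["key"], job["status"])  # foobar running
```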
@marians
marians / hadoop-hadoop-datanode-Marians-MBP.local.log
Last active December 17, 2015 14:09
Hadoop problem logs as of 2013-05-21
2013-05-21 21:32:20,681 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:50144, bytes: 1777, op: HDFS_READ, cliID: DFSClient_attempt_201305152304_0004_m_000000_1_722979034_1, offset: 0, srvID: DS-2043951618-192.168.0.102-50010-1368635777558, blockid: blk_-4766979604280382827_1040, duration: 240000
2013-05-21 21:32:20,804 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:50145, bytes: 7517, op: HDFS_READ, cliID: DFSClient_attempt_201305152304_0004_m_000000_1_722979034_1, offset: 0, srvID: DS-2043951618-192.168.0.102-50010-1368635777558, blockid: blk_3289312915423223722_1003, duration: 736000
2013-05-21 21:35:59,387 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-4719156887958776590_1064 src: /127.0.0.1:50215 dest: /127.0.0.1:50010
2013-05-21 21:35:59,486 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50215, dest: /127.0.0.1:50010, bytes: