This is an unofficial manual for the couchdb
Python module I wish I had had.
pip install couchdb
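To give a first impression, here is a minimal usage sketch of this module (the database name "test" and the document fields are just examples):

import couchdb

# connect to the default server at http://localhost:5984/
couch = couchdb.Server()

# create a database and store a document in it
db = couch.create('test')
doc_id, doc_rev = db.save({'type': 'person', 'name': 'John Doe'})

# fetch the document back by its ID
print(db[doc_id])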
Dear users,

We have an important question for you about improving one aspect of our product. When you create a service on Giant Swarm using a service definition (usually in a swarm.json file), you currently can't modify that service after it has been created. We know that many of you want this to change. The question is: how would you like this to work?
package main

import (
	"fmt"
	"github.com/mgutz/ansi"
	"github.com/ryanuber/columnize"
)

func main() {
	config := columnize.DefaultConfig()
	// example rows; columnize aligns them on the default "|" delimiter
	rows := []string{
		"NAME | STATUS",
		ansi.Color("web", "green") + " | running",
	}
	fmt.Println(columnize.Format(rows, config))
}
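Run as-is, this prints the example rows aligned into columns, with "web" rendered in green by ansi.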
FROM python:2.7-slim
ENV DEBIAN_FRONTEND noninteractive
RUN set -x \
    && apt-get -q update \
    && apt-get install -yq --no-install-recommends git-core build-essential \
    && pip install cython \
    && pip install git+https://github.com/gevent/gevent.git#egg=gevent \
    && pip install Flask
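An image from this could then be built with, for example, docker build -t flask-gevent . (the tag name is just an example); the result is a Python 2.7 base image with Cython, the latest gevent from Git, and Flask installed.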
Dear Hannah Wolfe,

In your tweet you say you'd like to know why we need a database cluster for our blog. Thanks for that question, I'm happy to respond.

Sooner or later, every piece of hardware fails. That's why we employ clusters for every server app we run and go to great lengths to avoid single points of failure.

Our cluster is built on the idea of immutable infrastructure. Our apps run inside Docker containers. When a node in our cluster fails, the containers running on that node are started on a different node. In the meantime, identical instances of the apps running on other machines handle the requests.

This principle is common for stateless applications, but we want it to hold for databases as well. To keep a database server from becoming a single point of failure, databases must run in clusters whose nodes share parts of their data.
FROM jenkins:latest
COPY test.sh /test.sh
ENTRYPOINT ["/bin/bash", "/test.sh"]
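Assuming a test.sh next to the Dockerfile, this can be built and run with, for example, docker build -t jenkins-test . followed by docker run jenkins-test (the tag is just an example). The custom ENTRYPOINT means the container executes test.sh instead of starting Jenkins itself.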
import socket
import socks  # pip install PySocks - https://github.com/Anorov/PySocks

# configure default proxy. 9150 is the Tor Browser Bundle socks proxy port
socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 9150)
# monkey-patch the standard library so all new sockets go through the proxy
socket.socket = socks.socksocket

import urllib  # Python 2; on Python 3 use urllib.request instead
print(urllib.urlopen('http://icanhazip.com').read())
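With the Tor Browser Bundle running, the printed address should be a Tor exit node's IP rather than your own.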
import cPickle
import pickle
import json
import random
from time import time
from hashlib import md5

test_runs = 1000

def float_list():
    # example payload: a list of random floats to serialize
    return [random.random() for _ in range(100)]
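A minimal sketch of how the timing loop might continue, assuming the goal is to compare the three serializers on the same payload (the benchmark helper and its output format are assumptions, not the original code; the md5 import suggests the serialized output was checksummed, so the sketch uses it that way):

def benchmark(name, dumps, data):
    # serialize the payload test_runs times and report the elapsed time,
    # plus an md5 of the last result as a quick sanity check on the output
    start = time()
    for _ in range(test_runs):
        blob = dumps(data)
    duration = time() - start
    print('%s: %.4f s (md5 %s)' % (name, duration, md5(blob).hexdigest()))

data = float_list()
benchmark('cPickle', cPickle.dumps, data)
benchmark('pickle', pickle.dumps, data)
benchmark('json', json.dumps, data)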
""" | |
Untested version of some job queue | |
Usage: | |
from pymongo import MongoClient | |
db = MongoClient() | |
queue = Queue("myqueue", db) | |
job = { | |
'key': 'foobar', |
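As a rough illustration of what such a queue could look like, here is a minimal sketch assuming pymongo 3+; the "jobqueue" database name, the status field, and the put/get method names are assumptions, not the original API:

from pymongo import MongoClient, ReturnDocument

class Queue(object):
    def __init__(self, name, client):
        # one collection per queue, in an assumed "jobqueue" database
        self.coll = client.jobqueue[name]

    def put(self, job):
        # enqueue: store the job document with a status marker
        job['status'] = 'new'
        self.coll.insert_one(job)

    def get(self):
        # dequeue: atomically claim the oldest unclaimed job
        return self.coll.find_one_and_update(
            {'status': 'new'},
            {'$set': {'status': 'taken'}},
            sort=[('_id', 1)],
            return_document=ReturnDocument.AFTER)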
2013-05-21 21:32:20,681 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:50144, bytes: 1777, op: HDFS_READ, cliID: DFSClient_attempt_201305152304_0004_m_000000_1_722979034_1, offset: 0, srvID: DS-2043951618-192.168.0.102-50010-1368635777558, blockid: blk_-4766979604280382827_1040, duration: 240000
2013-05-21 21:32:20,804 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:50145, bytes: 7517, op: HDFS_READ, cliID: DFSClient_attempt_201305152304_0004_m_000000_1_722979034_1, offset: 0, srvID: DS-2043951618-192.168.0.102-50010-1368635777558, blockid: blk_3289312915423223722_1003, duration: 736000
2013-05-21 21:35:59,387 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-4719156887958776590_1064 src: /127.0.0.1:50215 dest: /127.0.0.1:50010
2013-05-21 21:35:59,486 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50215, dest: /127.0.0.1:50010, bytes: