Josh Yudaken (qix)
Traceback (most recent call last):
  File "/home/josh/authbox/apps/s/s.py", line 100, in <module>
    result = command.run(selector, argv)
  File "/home/josh/authbox/apps/s/tooler/command.py", line 210, in run
    return self.fn(*args, **vargs)
  File "/home/josh/authbox/apps/s/sensu.py", line 276, in silence_hour
    stash_nodes(expand_names(names), 3600)
  File "/home/josh/authbox/apps/s/sensu.py", line 270, in stash_nodes
    expire_seconds=expire_seconds,
  File "/home/josh/authbox/apps/s/sensu.py", line 264, in stash_paths
apiVersion: v1
kind: Pod
metadata:
  name: pod-w-message
spec:
  containers:
  - name: messager
    image: "ubuntu:14.04"
    command: ["/bin/sh","-c"]
    args: ["sleep 60 && /bin/echo Sleep expired > /dev/termination-log"]
import asyncio
import hiredis
import logging
import sys
from collections import deque

logger = logging.getLogger('pylib.aio.redis_server')


class RedisProtocol(asyncio.Protocol):
    def __init__(self, loop):
qix / bad (created August 23, 2016 19:37)
{"v":1,"timeMs":1471980865660,"action":"add","key":["cachedAccessor","iiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii"],"value":"The length was: 257"}

Redis in Production at Smyte

To be clear, we continue to run many Redis services in our production environment. It’s a great tool for prototyping and small workloads. For our use case, however, we believe the cost and complexity of our setup justify urgently finding alternate solutions.

  • Each of our Redis servers is clearly numbered, with a current leader in one availability zone and a follower in another zone.
  • The servers each run ~16 individual Redis processes. This helps us utilize CPUs (as Redis is single-threaded), but it also means we only need an extra 1/16th of memory to safely perform a BGSAVE (due to copy-on-write), though in practice it’s closer to 1/8th because the data is not always evenly balanced; see the sketch after this list.
  • Our leaders do not ever run BGSAVE unless we’re bringing up a new slave, which is done carefully by hand. Since issues with a slave should not affect the leader, and new slave connections might trigger an unsafe BGSAVE on the leader, slave Redis processes are set to not automatically restart.
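
A rough sketch of the headroom arithmetic from the second bullet (the helper name and the 64 GB figure below are illustrative assumptions, not details from the original setup):

def bgsave_headroom_gb(total_memory_gb, num_processes=16, skew_factor=2.0):
    """Estimate the extra memory needed to safely BGSAVE one Redis shard.

    With the dataset split across `num_processes` single-threaded Redis
    instances, a BGSAVE fork only has to copy-on-write one shard's worth of
    pages at a time. `skew_factor` models the observation above that uneven
    balancing pushes the practical overhead from ~1/16th toward ~1/8th.
    """
    per_process = total_memory_gb / num_processes
    return per_process * skew_factor

# Example: a 64 GB host split into 16 shards holds ~4 GB per shard,
# so roughly 8 GB of headroom once imbalance is factored in.
print(bgsave_headroom_gb(64))  # -> 8.0
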
import time


class Bucket(object):
    def __init__(self, max_amount, refill_time, refill_amount):
        self.max_amount = max_amount
        self.refill_time = refill_time
        self.refill_amount = refill_amount
        self.reset()

    def _refill_count(self):
'use strict';

const Promise = require('bluebird');
const SocksAgent = require('socks5-http-client/lib/Agent');
const fetchHttp = require('../../lib/utils/fetchHttp');

defineTest('socks proxy works', {
  tags: ['socks', 'httpEndpoint'],
}, Promise.coroutine(function*(test) {
'use strict';
const WebSocket = require('ws');
const express = require('express');
const http = require('http');
let debug = false;
function arrayShiftRight(array) {
### Keybase proof
I hereby claim:
* I am qix on github.
* I am jyud (https://keybase.io/jyud) on keybase.
* I have a public key whose fingerprint is 5BD6 FD20 2F22 980C 3913 DB4B D4D2 296A 1505 F31D
To claim this, I am signing this object:
{
const {
commands,
extensions,
} = options;
const {
CancelCommand,
ClickCommand,
ExecCommand,