Time, Clocks, and the Ordering of Events in a Distributed System (logical clock sketch after this list)
Consensus on Transaction Commit
An empirical study on the correctness of formally verified distributed systems
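A minimal sketch of the logical clock rule from the first paper (pure illustration, not code from any of the papers): each process increments a counter on every local event, and on receiving a message it jumps to the max of its own counter and the sender's timestamp, plus one.

#!/usr/bin/env python
# Minimal Lamport clock sketch: tick on local events, merge on receive.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):
        # local event or message send
        self.time += 1
        return self.time

    def receive(self, sender_time):
        # ordering rule: jump past both our own history and the sender's
        self.time = max(self.time, sender_time) + 1
        return self.time

a, b = LamportClock(), LamportClock()
t = a.tick()          # a sends a message stamped t = 1
print(b.receive(t))   # prints 2: the receive is ordered after the send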
#!/bin/bash
# This playbook assumes you have cloned https://github.com/openark/orchestrator
# and run: ./script/dock system
# which landed you in orchestrator's playground environment.
# Further information is available on the welcome screen once you've run the Docker image.
# FYI, orchestrator's config file is at /etc/orchestrator.conf.json
orchestrator-client -c topology-tabulated -alias ci
orchestrator-client -c topology-tabulated -alias ci | tr '|' '\t'
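If you want the same topology programmatically rather than through tr, here is a hedged Python sketch (assuming orchestrator-client is on PATH and the ci alias from the playground exists) that splits the pipe-delimited rows into tab-separated fields:

#!/usr/bin/env python
# Hedged sketch: run the same orchestrator-client command as above and split
# the pipe-delimited rows into tab-separated fields, like the tr step.
import subprocess

out = subprocess.check_output(
    ["orchestrator-client", "-c", "topology-tabulated", "-alias", "ci"]
).decode()

for line in out.splitlines():
    print("\t".join(field.strip() for field in line.split("|")))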
#!/usr/bin/env python
import json
from jinja2 import Template
# git clone https://github.com/pingcap/tidb-docker-compose
# cd tidb-docker-compose
# git clone https://github.com/tennix/grafonnet-lib -b table
# python dashboard-to-jsonnet.py > pd.jsonnet
# jsonnet -J grafonnet-lib pd.jsonnet > config/dashboards/generated-pd.json
with open('config/dashboards/pd.json', 'r') as f:
    data = json.load(f)
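# Hedged continuation sketch (not the repository's actual code): push the
# loaded dashboard dict through a jinja2 Template. The "title" and "rows"
# keys are assumptions about pd.json's Grafana layout; adjust as needed.
summary = Template("// {{ title }}: {{ nrows }} dashboard rows loaded from pd.json\n")
print(summary.render(title=data.get('title', 'PD'),
                     nrows=len(data.get('rows', []))))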
1. Producer (a hedged kafka-python sketch of these settings follows this list)
   1. request.required.acks=[0,1,all/-1]: 0 gives no acknowledgement but is very fast, 1 acknowledges after the leader commits, all/-1 acknowledges after all replicas commit.
   2. Use the async producer (property producer.type=async) and a callback for the acknowledgement.
   3. Batch data: send multiple messages together, tuned with batch.num.messages and queue.buffer.max.ms.
   4. Compression for large files: gzip and snappy are supported.
      Very large files can be stored in a shared location and just the file path logged by the Kafka producer.
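A hedged sketch of the same producer tuning using the kafka-python client (not the old Scala producer whose request.required.acks / producer.type properties are listed above); the broker address and topic name are placeholders:

# Hedged sketch using kafka-python; broker address and topic are placeholders.
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    acks="all",               # 0, 1, or "all": same trade-off as request.required.acks
    batch_size=16384,         # batching, analogous to batch.num.messages
    linger_ms=5,              # analogous to queue.buffer.max.ms
    compression_type="gzip",  # gzip or snappy, as noted above
)

def on_ack(metadata):
    # asynchronous acknowledgement via callback instead of blocking on the send
    print("delivered to", metadata.topic, "partition", metadata.partition)

producer.send("events", b"large payloads: log only a file path here").add_callback(on_ack)
producer.flush()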
With NLTK version 3.1 and the Stanford NER tool 2015-12-09, it is possible to hack StanfordNERTagger._stanford_jar to include the other .jar files that the new tagger needs.
First, set up the environment variables as instructed at https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software
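Then, a hedged sketch of that hack; the model file and jar names below are placeholders for whatever your Stanford download actually contains:

# Hedged sketch of the hack described above (NLTK 3.1, Stanford NER 2015-12-09).
import os
from nltk.tag import StanfordNERTagger

st = StanfordNERTagger(
    "english.all.3class.distsim.crf.ser.gz",   # resolved via STANFORD_MODELS
    "stanford-ner.jar",                        # resolved via CLASSPATH
)

# Append the extra jars the newer tagger needs to the private _stanford_jar
# attribute, which NLTK passes to java as the classpath.
extra_jars = ["slf4j-api.jar", "joda-time.jar"]   # assumed names, adjust to your download
st._stanford_jar = os.pathsep.join([st._stanford_jar] + extra_jars)

print(st.tag("Rami Eid is studying at Stony Brook University in NY".split()))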
filter {
  if [type] == 'postgres_csv' {
    csv {
      columns => [
        "pg_timestamp",
        "user_name",
        "database_name",
        "process_id",
        "connection_from",
        "session_id",
# knife cheat
## Search Examples
knife search "name:ip*"
knife search "platform:ubuntu*"
knife search "platform:*" -a macaddress
knife search "platform:ubuntu*" -a uptime
knife search "platform:ubuntu*" -a virtualization.system
knife search "platform:ubuntu*" -a network.default_gateway
// A list of useful queries for profiler analysis, starting with the most basic.
// 2.4 compatible
//
// Output explained:
//
{
    "ts" : ISODate("2012-09-14T16:34:00.010Z"), // date it occurred
    "op" : "query",                             // the operation type
    "ns" : "game.players",                      // the db and collection
Previous versions of this document used Homebrew to install the various Python versions. As suggested in the comments, it's better to use pyenv
instead. If you are looking for the previous version of this document, see the revision history.
$ brew update
$ brew install pyenv
$ pyenv install 3.5.0
$ pyenv install 3.4.3
$ pyenv install 3.3.6
$ pyenv install 3.2.6
$ pyenv install 2.7.10
$ pyenv install 2.6.9