Building a distributed database/API on an eventually consistent distribution mechanism like scuttlebutt.
There are users and (possibly anonymous) posts with permalinks, and the underlying store only takes strings/buffers.
Think of information you...
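A rough sketch of that setup, using the scuttlebutt module's Model; the 'post:<id>' permalink format and the JSON payload are assumptions, not part of the module:

// A minimal sketch assuming dominictarr's scuttlebutt module.
// The permalink scheme and post shape are illustrative only.
var Model = require('scuttlebutt/model')

var local = new Model()
var remote = new Model()

// replicate the two models over a duplex stream (in practice a tcp socket)
var s = local.createStream()
s.pipe(remote.createStream()).pipe(s)

// store a (possibly anonymous) post under its permalink, value is a string
local.set('post:1', JSON.stringify({ user: null, body: 'hello world' }))

// after the gossip round the other replica sees it too
setTimeout(function () {
  console.log(remote.get('post:1'))
}, 100)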
class Pubsub {
  Map channels;
  Pubsub() {
    channels = new Map();
  }
  subscribe(String channel, Function cb) {
    if (channels[channel] == null) {
      channels[channel] = new List<Function>();
    }
    channels[channel].add(cb); // completion (assumed): register the callback
  }
  publish(String channel, message) { // assumed counterpart to subscribe
    if (channels[channel] == null) return;
    channels[channel].forEach((cb) { cb(message); });
  }
}
traceroute nodejs.org
traceroute to nodejs.org (199.59.166.108), 30 hops max, 60 byte packets
 1  ip-173-236-216-1.dreamhost.com (173.236.216.1)  0.327 ms  0.319 ms  0.496 ms
 2  ip-66-33-201-222.dreamhost.com (66.33.201.222)  0.229 ms  0.262 ms  0.242 ms
 3  64.124.196.85 (64.124.196.85)  0.291 ms  0.447 ms  0.311 ms
 4  xe-1-0-1.mpr1.lax12.us.above.net (64.125.30.10)  0.804 ms  0.846 ms  0.826 ms
 5  TenGE13-2.br02.lax04.pccwbtn.net (206.223.123.93)  1.201 ms  1.169 ms  1.186 ms
var Contre = require('contre');
var http = require('http');

var contre = Contre({
  from : __dirname + '/repos',
  to : __dirname + '/static'
});

var contre = Contre({
  repos : __dirname + '/repos',
  // ...
});
/*
This generates 4 million timeseries values and stores them in different in-memory data structures, depending on process.argv[2].
Test results on my laptop are included. Performance was not important here, only memory usage. The background: an in-memory node DB.

Results:
- 115mb : { 1 : 'd,12,1d,lo', ... }
- 150mb : { 1 : [12,34,56,78], ... }
*/
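A minimal sketch of how such a comparison could look; the two layouts mirror the result lines above, the key count and values are illustrative:

// Memory-layout comparison sketch: one comma-joined string vs. one array per key.
// Key count and sample values are assumptions, not taken from the original test.
var layout = process.argv[2] || 'string';
var store = {};
var keys = 1e6; // 4 values per key -> 4 million values

for (var i = 0; i < keys; i++) {
  store[i] = layout === 'string'
    ? '12,34,56,78'
    : [12, 34, 56, 78];
}

console.log(layout + ': ' + Math.round(process.memoryUsage().rss / 1024 / 1024) + ' mb');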
Compile with -std=c++0x.

double duration = time([](){
  do_some_heavy_stuff();
});
// `db` is assumed to be a levelup-style store exposing get(key, cb) and put(key, value, cb)
var state = require('state')({
  getLastUpdate : db.get.bind(db, 'lastUpdate'),
  setLastUpdate : db.put.bind(db)
})

// 1) EE interface
// receive updates if not connected to master
state.on('update', function (update, ts) {
  db.put(update.key, update.value)
})
var net = require('net')
var timeseries = require('timeseries')
var ts = timeseries('./db')

var s = ts.createStream()
s.pipe(net.connect(3000)).pipe(s)

// display chart with realtime updates!
ts('the-id').createRangeStream(start).pipe(listener)
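`start` and `listener` are placeholders in the snippet above; one possible `listener` is any writable stream, e.g.:

// A possible `listener` for the range stream: logs each entry as it arrives.
// Object mode is an assumption about what the range stream emits.
var Writable = require('stream').Writable

var listener = new Writable({ objectMode: true })
listener._write = function (entry, enc, done) {
  console.log(entry)
  done()
}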
This is only about speed: Levelup provides some nice streaming abstractions that Leveled doesn't have.
Every benchmarked operation has to deal with 120,000 entries of 29 chars each. Times were averaged over 3 runs.
The second benchmark's output includes the factor by which Leveled is faster or slower.
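A sketch of a write benchmark in that shape, assuming both stores expose a levelup-style put(key, value, cb); the key scheme and value are illustrative:

// Write-benchmark sketch matching the numbers above: 120,000 entries, 29-char
// values, timed per run. The put(key, value, cb) interface is an assumption
// about what both stores share.
var value = 'abcdefghijklmnopqrstuvwxyz123' // 29 chars

function benchWrites (db, done) {
  var entries = 120000
  var pending = entries
  var start = Date.now()
  for (var i = 0; i < entries; i++) {
    db.put('key' + i, value, function (err) {
      if (err) throw err
      if (--pending === 0) done(Date.now() - start)
    })
  }
}

Running this three times per store and averaging the durations gives comparable numbers; the ratio of the two averages is the faster/slower factor.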