osm-p2p-db vs kappa-osm

NOTE: time for spatial indexing was not included in the benchmark, since kappa-osm doesn't implement it yet.

5000 random documents

DB           Insert    Index     Replicate
osm-p2p-db   6018 ms   7956 ms   29414 ms
// Write the channel's metadata and the message itself in one atomic
// leveldb-style batch.
var batch = [
  {type: 'put', key: `metadata/${channel}`, value: metadata},
  {type: 'put', key: key, value: m}
]

I'm not well versed in Matrix, so please let me know where I've erred.

My understanding is that Matrix is a federated chat protocol, using both server-to-server and server-to-client connections to move messages around. Chat data can be cached on clients, but it fundamentally lives on the servers, each of which hosts anywhere from one to maybe thousands of users.

Cabal is peer-to-peer, in the sense that there is no server/client distinction. Every peer has a full copy of the chat history, and seeks out other peers in the cabal to exchange new messages with. This means nobody has to choose a server to join or entrust their identity to: your identity lives on your computer as a private/public keypair. Cabal is really just a database that anybody with the shared key can append new data to. Peers sync any new data around until everyone has the same eventual state. The clients (cabal-desktop, etc.) scan everyone's append-only feed of messages to build a view of chat history for each channel.
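
To make that concrete, here's a minimal sketch of the model in plain JavaScript. It's purely illustrative (no real signing, networking, or hypercore; Peer, syncWith, and channelView are made-up names, not Cabal's API): every peer appends only to its own log, sync copies the entries a peer is missing, and a channel view is just a scan over all the logs.

// Each peer owns one append-only feed and keeps copies of everyone else's,
// indexed by the author's public key.
function Peer (publicKey) {
  this.key = publicKey
  this.feeds = {}
  this.feeds[publicKey] = []
}

// Appending is the only write; a real peer would sign each entry with its
// private key so that others can verify authorship.
Peer.prototype.append = function (channel, text) {
  this.feeds[this.key].push({author: this.key, channel: channel, text: text, ts: Date.now()})
}

// Sync copies whatever entries the other side is missing. Feeds are
// append-only, so "missing" is simply "past the length I already have".
Peer.prototype.syncWith = function (other) {
  var pairs = [[this, other], [other, this]]
  pairs.forEach(function (pair) {
    var dst = pair[0]
    var src = pair[1]
    Object.keys(src.feeds).forEach(function (key) {
      var mine = dst.feeds[key] = dst.feeds[key] || []
      var theirs = src.feeds[key]
      for (var i = mine.length; i < theirs.length; i++) mine.push(theirs[i])
    })
  })
}

// A client builds a channel's history by scanning every feed and sorting.
Peer.prototype.channelView = function (channel) {
  var msgs = []
  Object.keys(this.feeds).forEach(function (key) {
    this.feeds[key].forEach(function (m) {
      if (m.channel === channel) msgs.push(m)
    })
  }, this)
  return msgs.sort(function (a, b) { return a.ts - b.ts })
}

var alice = new Peer('alice-pubkey')
var bob = new Peer('bob-pubkey')
alice.append('#general', 'hi')
bob.append('#general', 'hello')
alice.syncWith(bob)
console.log(alice.channelView('#general').map(function (m) { return m.text }))
// => both texts, in timestamp order -- bob's view is now identical

Because each feed is append-only and single-writer, two peers can sync in any order and still converge: there is nothing to conflict over, only entries to copy.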


osm-p2p-syncfile integration

The new syncfile module is called osm-p2p-syncfile. It manages a specially formatted tarball that stores media blobs as well as a P2P database (currently osm-p2p-db).

basic usage

Desktop and mobile will both need to open a syncfile (new or existing) and sync the locally running database with it. This'll probably look something like:

var Syncfile = require('osm-p2p-syncfile')
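
The preview cuts off after the require, so here's a sketch of how the rest of that flow might look. The constructor arguments and the syncfile method names (ready, replicateDatabase, close) are my assumptions, not necessarily the module's real API; osm is assumed to be the locally running osm-p2p-db, replicated via its usual hyperlog stream.

var syncfile = new Syncfile('/path/to/file.sync') // hypothetical constructor args
syncfile.ready(function () {
  // The usual duplex replication handshake: pipe the local database's
  // replication stream into the syncfile's, and back again.
  var local = osm.log.replicate()
  var remote = syncfile.replicateDatabase() // hypothetical method name
  local.pipe(remote).pipe(local)
  remote.on('end', function () {
    syncfile.close(function (err) {
      if (err) throw err
      // media blobs would be synced in a similar fashion
    })
  })
})
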
// ------------------ KV ------------------
// API:
// - put
// - get
// EVENTS:
// - new value for key listened to
// id view
// requires 'msg.links' being populated to point to old version
var sha = require('sha.js')
var umkv = require('unordered-materialized-kv')
var EventEmitter = require('events').EventEmitter
module.exports = ContentAddressableStore
function ContentAddressableStore (db, opts) {
  var kv = umkv(db)
  var events = new EventEmitter()
  opts = opts || {}
  // (the gist preview cuts off here; the bodies below are a sketch)
  return {
    events: events,
    put: function (key, msg, cb) {
      var id = sha('sha256').update(JSON.stringify(msg)).digest('hex')
      kv.batch([{id: id, key: key, links: msg.links || []}], function (err) {
        if (err) return cb(err)
        events.emit(key, msg) // "new value for key listened to"
        cb(null, id)
      })
    },
    get: function (key, cb) {
      kv.get(key, cb)
    }
  }
}
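
If that sketch is roughly right, usage would look something like this. memdb stands in for any levelup-compatible store, and the message shape is arbitrary as long as links points at the version being superseded:

var memdb = require('memdb')
var Store = require('./kv') // the module above

var store = Store(memdb())
store.events.on('greeting', function (msg) {
  console.log('new value for greeting:', msg.value)
})
store.put('greeting', {value: 'hello', links: []}, function (err, id1) {
  if (err) throw err
  // Link to the previous version so the kv knows this put supersedes it.
  store.put('greeting', {value: 'hej', links: [id1]}, function (err, id2) {
    store.get('greeting', function (err, heads) {
      console.log(heads) // [id2] -- 'hello' was superseded, so one head remains
    })
  })
})
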
var stream = require('stream')
module.exports = function (id) {
  // allowHalfOpen: false is _very_ important! Otherwise ending the readable
  // half of the duplex will leave the writable side open!
  var counter = new stream.Duplex({allowHalfOpen: false})
  counter.processed = []
  var payloadsLeft = 3
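
That warning is about stock Node duplex behavior, and it's easy to demonstrate standalone (this snippet isn't part of the gist, just an illustration of the option):

var stream = require('stream')

var d = new stream.Duplex({
  allowHalfOpen: false,
  read: function () { this.push(null) },        // readable side ends immediately
  write: function (chunk, enc, next) { next() }
})

d.on('finish', function () {
  // Fires without anyone calling d.end(): ending the readable side ended the
  // writable side too. With the default allowHalfOpen: true, the writable
  // side would stay open and 'finish' would never fire.
  console.log('writable side closed automatically')
})

d.resume() // start flowing so the readable side actually reaches 'end'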

why I publish to npm + don't publish to my distro package manager

re: https://drewdevault.com/2018/01/10/Learn-your-package-manager.html

  • needs human effort to {write,modify} the package per distro, per update
    • someone to do the write/update
    • someone working on the distro to review the work & merge it
  • need to wait until the next distro release for people to get the package/update
    • or else maintain a private pkg server (per distro; or just fuck the others)
  • does my pkg then depend on distro pkgs or npm ones?

corestorage api
// Storage is any one of: a directory path, RAM, or a browser storage backend.
let store = corestore('my-storage-dir' || ram() || webstore())

// Map a discovery key or a local name to a key.
store.keyFromDiscoveryKey(discoKey, cb)
store.keyFromLocalName('my-feed', cb)

// Create a new hypercore, which can include a local name. (creates + loads)
let core = store.create('localname', { valueEncoding: 'utf-8' })
core.ready(cb)