An API for libsodium pubkey crypto operations in the Beaker/Dat ecosystem. Includes mechanisms to:
- Sign
- Verify signatures
- Encrypt blobs
- Decrypt blobs
- Validate pubkey ownership
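As a rough illustration of what that API surface could look like, here's a minimal sketch written against the libsodium-wrappers package. The function names, the challenge-response scheme for ownership validation, and the choice of sealed boxes for blob encryption are all assumptions made for the example, not the actual proposal.

```js
// Illustrative sketch only -- not the proposed API. All names are hypothetical.
// Assumes the libsodium-wrappers package; callers must await sodium.ready first.
const sodium = require('libsodium-wrappers')

async function createKeypairs () {
  await sodium.ready
  return {
    signing: sodium.crypto_sign_keypair(), // ed25519, used for sign/verify
    box: sodium.crypto_box_keypair()       // curve25519, used for encrypt/decrypt
  }
}

// Sign a blob with the identity's secret signing key
const sign = (message, signingSecretKey) =>
  sodium.crypto_sign_detached(message, signingSecretKey)

// Verify a detached signature against a known pubkey
const verify = (signature, message, signingPublicKey) =>
  sodium.crypto_sign_verify_detached(signature, message, signingPublicKey)

// Encrypt a blob to a recipient's public box key (anonymous sealed box)
const encrypt = (message, boxPublicKey) =>
  sodium.crypto_box_seal(message, boxPublicKey)

// Decrypt a sealed blob with the recipient's box keypair
const decrypt = (ciphertext, boxPublicKey, boxSecretKey) =>
  sodium.crypto_box_seal_open(ciphertext, boxPublicKey, boxSecretKey)

// Validate pubkey ownership with a challenge-response:
// the challenger sends random bytes, the keyholder signs them back.
const createChallenge = () => sodium.randombytes_buf(32)
const proveOwnership = (challenge, signingSecretKey) => sign(challenge, signingSecretKey)
const verifyOwnership = (challenge, proof, signingPublicKey) =>
  verify(proof, challenge, signingPublicKey)
```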
If you're in a rush: Skip the background material and see the proposal.
I recently published a minimal Dat Identity Spec which lays out how we plan to move forward with user identities in Beaker. It establishes:
```js
const info = network.info(key)

// high level automated:
network.join(key, {
  announce: true, // announce self and listen for connections
  lookup: true    // seek out peers and connect to them
})
network.leave(key)
```
RSS was a social network for blogs. It was a way to self-publish a feed which users could follow. It was decentralized. It competed directly with Twitter and Facebook.
RSS had a hard time. It was a fairly simple design, but the protocol took a lot of coordinated work to use. You had to get bloggers to generate the feed files. You had to get browsers to detect RSS availability and give relevant UI controls. You had to train users to know what RSS was. And on and on.
RSS had high overhead and a subpar UX. It was beaten by better products created at Twitter and Facebook. Google didn't see an opportunity to capitalize on their RSS app -- the most popular reader at the time -- and shut it down. (The Google Reader shutdown was especially sad when you consider their followup attempt with Google Plus. You should've appreciated what you had, Google!)
RSS was a good idea with bad execution. It turned self-hosted blogs into a social network -- that's cool. People liked it.
```js
const dat = require('@beaker/dat')

const server = dat.createServer({
  storage: './dat'
})
server.swarm('dat://beakerbrowser.com')
server.unswarm('dat://beakerbrowser.com')
server.isSwarming('dat://beakerbrowser.com')
server.close()
server.createDebugLogStream()
```
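For context, here's a hypothetical usage sketch of the calls above. It assumes createDebugLogStream() returns a standard Node readable stream and that isSwarming() answers synchronously; neither detail is confirmed by the listing.

```js
// Hypothetical usage sketch of the @beaker/dat server API listed above
const dat = require('@beaker/dat')

const server = dat.createServer({ storage: './dat' })

// mirror debug output to the console (assumes a Node readable stream)
server.createDebugLogStream().pipe(process.stdout)

// seed a site, check its state, then wind down
server.swarm('dat://beakerbrowser.com')
console.log('swarming?', server.isSwarming('dat://beakerbrowser.com'))
server.unswarm('dat://beakerbrowser.com')
server.close()
```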
```js
// control the session data on your connection
experimental.datPeers.getSessionData()
experimental.datPeers.setSessionData(obj) // obj must be no larger than 255 bytes when JSONified

// manage connected peers
var peers = experimental.datPeers.list() // list all peers connected to the current page's dat
var peer = experimental.datPeers.get(peerId)
await experimental.datPeers.broadcast(data) // send a message to all peers
experimental.datPeers.addEventListener('connect') // new peer
experimental.datPeers.addEventListener('disconnect') // peer closed connection
```
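And here's a rough sketch of how a page might combine these calls. The shape of the event object (an e.peer property) is an assumption on my part; the listing above only names the events.

```js
// Hypothetical sketch combining the datPeers calls above.
// The event-object shape (e.peer) is assumed, not taken from the listing.
experimental.datPeers.setSessionData({ nickname: 'alice' }) // well under the 255-byte limit

experimental.datPeers.addEventListener('connect', (e) => {
  console.log('peer connected', e.peer)
})
experimental.datPeers.addEventListener('disconnect', (e) => {
  console.log('peer disconnected', e.peer)
})

// greet everyone currently connected to this page's dat
await experimental.datPeers.broadcast({ type: 'hello' })
console.log('connected peers:', experimental.datPeers.list())
```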
There's a somewhat old-fashioned term in computing (old-fashioned being a relative concept), and the term is "live."
Back when Xerox PARC was making the first real GUI and the first Object-Oriented programming language (Smalltalk), they were building a live environment. And "live" meant that the code was right there, available to the user, ready to edit. If you go back and look at the demos of Smalltalk, you see people jumping into that code and modding the environment on-the-fly, and that's what live meant! It meant you could mess with the code.
There's a complementary term, just as old-fashioned but much more relevant, and that's "dead." That's what cloud computing is: dead computing. It's compiled, packaged, shipped, and completely unchangeable -- a total black box to the user.
Live is vibrant. Live is community-owned. Live means that mods and plugins are going to emerge out of the userbase. Some of the most popular and well-known computer games started out as mods.
This Gist is a quick writeup for devs using the beta or master
build. We'll get a more complete writeup in the Beaker site docs on 0.8's final release. Feel free to open issues for discussion.
We've done some work on the `DatArchive` API to make it easier to use. Prior to 0.8, Dats had a "staging area" folder which you had to `commit()` to publish. In 0.8, Beaker will automatically sync that folder. As a result, the staging-area methods (`diff()`, `commit()`, and `revert()`) were deprecated. There are also some new methods, and a few changes to how events work.
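To make the change concrete, here's a hedged before/after sketch. The file path and contents are made up, and datUrl stands in for an archive URL, but the writeFile()/commit() calls and the auto-sync behavior are as described above.

```js
var archive = new DatArchive(datUrl)

// Before 0.8: writes landed in the staging area and weren't published
// until you committed them
await archive.writeFile('/hello.txt', 'world')
await archive.commit() // deprecated in 0.8

// In 0.8: the folder is synced automatically, so the write publishes
// on its own -- no commit() step
await archive.writeFile('/hello.txt', 'world')
```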
Here's a full reference: