Paul Frazee (pfrazee)

Using Dat with a github.io domain

Screenshot: https://i.imgur.com/akOrIOq.png

My personal site is hosted at hashbase.io so that you can access it via dat and https. My canonical dat URL is dat://pfrazee.hashbase.io.

Before I used Hashbase I used GitHub Pages, and today it dawned on me that I can make dat://pfrazee.github.io work too. It's pretty simple if you know the Dat DNS spec.

All I had to do was add a /.well-known/dat file to my site. I put the raw dat URL of my site in that file, along with a TTL.
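The file itself is just plain text: the raw dat URL on the first line and the TTL on the second. Mine looks roughly like this (the key below is a placeholder, not my real one):

dat://0000000000000000000000000000000000000000000000000000000000000000
TTL=3600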

Ownership proofs (/.proofs folder)

Web services may choose to "bind" profile archives to user accounts. An example of this is being planned in Hashbase: by reading a user's profile dat, we can show the same username and avatar that they use in their Dat applications.

However, to stop users' identities from being stolen, we at Hashbase need to verify ownership of an archive before binding it to the profile. To accomplish this, we use "proofs."

Proofs are a relatively simple concept. They are a way to prove that the connecting user can write to an archive.

To create a proof, the service generates an unguessable token. The token needs to be wrapped in a data format, so that publishing it doesn't amount to signing an arbitrary blob chosen by the service (which would give away signing-control of your key). The token may also be signed by the service, but that's not necessary.
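To make that concrete, here's a hypothetical example; the file path and field names are illustrative, not Hashbase's actual format. The service hands the user a token, and the user writes a wrapper document like this into their archive, say at /.proofs/hashbase.io:

// hypothetical proof document — every field name here is illustrative
{
  type: 'ownership-proof',
  service: 'hashbase.io',   // who issued the token
  username: 'pfrazee',      // the account being bound to this archive
  token: '…'                // the unguessable token issued by the service
}

Because writes to the archive are signed by the archive's key, the presence of this file is enough to show that the connecting user can write to the archive, and the wrapper makes explicit that only this binding claim is being attested.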

pfrazee / index.js (created June 26, 2017): Hyperdrive download error case
const hyperdrive = require('hyperdrive')
const tempy = require('tempy')
const fs = require('fs')
const path = require('path')
const pda = require('pauls-dat-api')
const toZipStream = require('hyperdrive-to-zip-stream')
go()
async function go () {
  require('sodium-native')
  // …
}

import path from 'path'
import raf from 'random-access-file'
import multi from 'multi-random-access'
import pda from 'pauls-dat-api'

// storage provider: returns random-access-file instances for the archive's metadata (and presumably content) files
module.exports = function (metadataDir, contentDir) {
  return {
    metadata: function (name, opts) {
      return raf(path.join(metadataDir, 'metadata', name))
    },
    // …
  }
}

url          = 'dat://' pubkey_or_name '@' version '/' path
version      = meta_seq_num | label
meta_seq_num = 'c' integer
integer      = [0-9]+
label        = [A-Za-z0-9.-]+

Examples:

dat://584faa05d394190ab1a3f0240607f9bf2b7e2bd9968830a11cf77db0cea36a21@c1/
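As a quick illustration of how that grammar maps onto code, here's a rough regex-based parser — my own sketch, not the actual implementation:

// rough sketch: parse 'dat://<key-or-name>@<version>/<path>' per the grammar above
// (illustrative only — not the parser used by Beaker)
function parseVersionedDatUrl (url) {
  var m = /^dat:\/\/([0-9a-f]{64}|[A-Za-z0-9.-]+)(?:@(c[0-9]+|[A-Za-z0-9.-]+))?(\/.*)?$/.exec(url)
  if (!m) return null
  return {
    host: m[1],            // pubkey or dns name
    version: m[2] || null, // 'c<seq>' or a label, if given
    path: m[3] || '/'
  }
}

parseVersionedDatUrl('dat://584faa05d394190ab1a3f0240607f9bf2b7e2bd9968830a11cf77db0cea36a21@c1/')
// => { host: '584f…a21', version: 'c1', path: '/' }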

// --------------------------------------------
// INTERNAL
// =

// site-data
// =
beaker.sitedata.get(url, key)
beaker.sitedata.set(url, key, value)

// generate a keypair if a key wasn't given
var signatures = require('sodium-signatures') // provides keyPair()
var secretKey
if (!key) {
  var keyPair = signatures.keyPair()
  key = keyPair.publicKey
  secretKey = keyPair.secretKey
}

// create the archive instance
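For completeness, creating the archive from that key would look roughly like this — a sketch using hyperdrive's constructor; the storage path is a placeholder, and this is not necessarily the original code:

// assumption: create the hyperdrive with the key we just generated (or were handed),
// passing secretKey through so the archive is writable; './archive-data' is a placeholder path
var hyperdrive = require('hyperdrive')
var archive = hyperdrive('./archive-data', key, { secretKey: secretKey })
archive.ready(function () {
  console.log('archive key:', archive.key.toString('hex'))
})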

So far, here's what I've uncovered. I made temporary fixes as I went. I'm going to make PRs now and see what all these fixes get us.

1. Beaker wasn't listening for additional requests

Due to a bad bit of logic, Beaker wasn't taking advantage of multiplexing. This actually isn't a significant factor, though, because Hypercloud shouldn't be using active replication anyway.

2. Hypercloud's DNS config wasn't set

Hypercloud just isn't using DNS discovery at all.
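For reference, here's a minimal sketch of what wiring up DNS discovery with discovery-swarm looks like — the server and domain values are placeholders, not Hypercloud's actual config:

var swarm = require('discovery-swarm')
var sw = swarm({
  dns: {
    server: ['discovery.example.com'], // placeholder; use the Dat network's real discovery servers
    domain: 'dat.local'                // the domain namespace used for DNS discovery
  },
  dht: false // the DHT can be on or off; the point is that the dns option has to be configured at all
})
sw.listen(3282)
var archiveDiscoveryKey = Buffer.alloc(32) // placeholder for the archive's actual discovery key
sw.join(archiveDiscoveryKey)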

if (+(/^v([\d]+)/.exec(process.version)[1]) < 6) {
  console.log('Detected node version <6, transpiling es2015 features')
  try {
    require('babel-register')({ presets: ['es2015', 'transform-async-to-generator'] })
  } catch (e) {
    console.log('Call `npm run install-transpiler` first. You\'re on node <6, so we need extra deps.')
    process.exit(1)
  }
} else {
  console.log('Detected node version 6, transpiling async to generators')
  // assumed: on node >=6 only async/await needs transpiling
  require('babel-register')({ presets: ['transform-async-to-generator'] })
}