Rule: Restrict all code to very simple control flow constructs – do not use goto statements, setjmp or longjmp constructs, or direct or indirect recursion.

Rationale: Simpler control flow translates into stronger capabilities for verification and often results in improved code clarity. The banishment of recursion is perhaps the biggest surprise here. Without recursion, though, we are guaranteed to have an acyclic function call graph, which can be exploited by code analyzers and can directly help to prove that all executions that should be bounded are in fact bounded.
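A recursive routine can usually be rewritten as a loop with an explicit bound, which keeps the call graph acyclic as the rule requires. As a minimal sketch (the function name and example are illustrative, not from the rule's source), here is a factorial written iteratively in Go; the loop's trip count is visibly bounded by its argument, which is exactly the property a static analyzer can exploit:

```go
package main

import "fmt"

// factorialIter computes n! with a simple counted loop instead of recursion.
// The iteration count is bounded by n, and the function calls nothing,
// so the call graph stays acyclic and trivially bounded.
func factorialIter(n uint64) uint64 {
	result := uint64(1)
	for i := uint64(2); i <= n; i++ {
		result *= i
	}
	return result
}

func main() {
	fmt.Println(factorialIter(5)) // 120
}
```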
// Copyright (C) 2024 C8 Labs
// SPDX-License-Identifier: AGPL-3.0-or-later
// This file illustrates the notion of an "index builder" or "dataset builder" using Go, GORM, and ClickHouse. While
// working at Indeed, we leveraged some fairly robust data pipelines that were built on top of our analytics engine.
// One of those parts was this idea of an index builder, which allowed us to derive higher-level metrics from data
// stored in our data warehouse.
//
// For most projects, I wind up using Go and GORM for quick and easy schema management. This offered a convenient
// way to define and iterate on the schema over time and build migrations in as part of the index-building job.
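To make the "index builder" idea concrete without pulling in GORM or ClickHouse, here is a minimal stdlib-only sketch. The Event type, field names, and metric are hypothetical stand-ins for warehouse rows, not taken from the file above; the shape of the computation (scan raw data once, materialize a keyed aggregate) is the part that mirrors the description:

```go
package main

import "fmt"

// Event is a hypothetical raw warehouse row; the fields are illustrative.
type Event struct {
	JobID  string
	Clicks int
}

// buildIndex derives a higher-level metric (total clicks per job) from raw
// events: a single pass over the source data that materializes an aggregate
// keyed for fast lookup, which is the core of an index-building job.
func buildIndex(events []Event) map[string]int {
	index := make(map[string]int, len(events))
	for _, e := range events {
		index[e.JobID] += e.Clicks
	}
	return index
}

func main() {
	events := []Event{{"a", 3}, {"b", 1}, {"a", 2}}
	fmt.Println(buildIndex(events)["a"]) // 5
}
```

In a real job, the input would be a query against the warehouse and the output table would be managed via schema migrations, but the aggregation step looks the same.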
package uplink

import (
	"context"
	"crypto/aes"
	"crypto/sha256"
	"encoding/json"
	"net/http"

	"github.com/zeebo/errs"
)
package main_test

func Test(t *testing.T) {
	testplanet.Test(
		storagenodetp.Configure(...),
		satellitetp.Configure(...),
	)(func() {})
}
I hereby claim:
- I am mjpitz on github.
- I am mjpitz (https://keybase.io/mjpitz) on keybase.
- I have a public key ASAZhypY8jMENYr7mgIk7AijguSzbO9acsBYDamMX4cNSgo
To claim this, I am signing this object:
// The guide online generally works, but has values that do not correspond to the primary active satellites.
// https://github.com/storj-thirdparty/uplink-nodejs/blob/7fb4a9ab35b04c7338d503d6d3d76d9264c53867/docs/tutorial.md

// Step 1
const http = require('http');

// Step 2 / 3
const storj = require("uplink-nodejs");
const uplink = new storj.Uplink();
Scripts for https://script.google.com/home/start

Create the project:
- Click "New Project"
- Delete the contents of the default project
- Paste one of the following snippets into the editor
- Name and save your project
- In the editor bar, select the function that you want to run (for the tadpoles script, select imageDownloader)
- Run the function and grant it the required permissions.
package io_test

import (
	"bytes"
	"encoding/binary"
	"encoding/hex"
	"encoding/json"
	"testing"
	"time"
)
FROM ubuntu:21.04 AS builder

WORKDIR /scratch

RUN apt-get update -y && apt-get install -y ca-certificates git build-essential libtool cmake libbsd-dev peg

ARG REDISRAFT_VERSION="4bbf5af1"

RUN git clone https://github.com/RedisLabs/redisraft.git redisraft && \
    cd redisraft && \
    git checkout ${REDISRAFT_VERSION} && \
As an exercise, I started to design and implement a database in some of my free time. As I worked through the details, there were a few things I knew I wanted to work with and a few things I wanted to evaluate. Since I'm looking at more of a CP system, my mind immediately jumped to Raft. But which implementation to use? And what storage mechanism? Since I had more familiarity with HashiCorp's implementation, I started there.

The first thing I wanted to do was consider the performance characteristics of the underlying data stores. One of the nice features of the HashiCorp implementation is that it allows callers to plug in different stores for logs, stable state, and snapshots. There's a whole slew of community implementations.
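To make the plug-in point concrete, here is a stdlib-only sketch of the pattern: an interface for durable key/value state with a swappable implementation. The interface here is a simplified illustration loosely inspired by hashicorp/raft's StableStore; the real library's interfaces have additional methods and should be consulted directly before implementing one:

```go
package main

import (
	"fmt"
	"sync"
)

// StableStore is a simplified illustration of a pluggable store interface
// for durable key/value state. It is NOT hashicorp/raft's actual API.
type StableStore interface {
	Set(key, value []byte) error
	Get(key []byte) ([]byte, error)
}

// InmemStable is a toy in-memory implementation -- the kind of thing you
// would swap for a disk-backed community implementation when benchmarking.
type InmemStable struct {
	mu   sync.Mutex
	data map[string][]byte
}

func NewInmemStable() *InmemStable {
	return &InmemStable{data: make(map[string][]byte)}
}

func (s *InmemStable) Set(key, value []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[string(key)] = value
	return nil
}

func (s *InmemStable) Get(key []byte) ([]byte, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.data[string(key)]
	if !ok {
		return nil, fmt.Errorf("not found: %s", key)
	}
	return v, nil
}

func main() {
	// Any implementation of the interface slots in here, which is what
	// makes per-store benchmarking straightforward.
	var store StableStore = NewInmemStable()
	_ = store.Set([]byte("CurrentTerm"), []byte("7"))
	v, _ := store.Get([]byte("CurrentTerm"))
	fmt.Println(string(v)) // 7
}
```

Benchmarking then reduces to running the same workload against each implementation of the interface.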