I have been categorizing enclaves into a hierarchy:

  • Level 0 (secret storage): allows you to store and retrieve the secret
  • Level 1 (identity verification): checks who you are, then performs an operation on the secret
  • Level 2 (policy enforcement): checks who you are, what you are doing, and then performs an operation on the secret
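
As an illustration of the hierarchy (mine, not part of the original note; all class and method names are hypothetical), each level can be read as adding one more check in front of the same underlying secret:

# Hypothetical sketch of the three levels; a real enclave would keep the
# secret in hardware rather than a Python dict.
class Level0SecretStorage:
    """Level 0: stores and retrieves the secret, no questions asked."""
    def __init__(self):
        self._secrets = {}

    def put(self, name, secret):
        self._secrets[name] = secret

    def get(self, name):
        return self._secrets[name]


class Level1IdentityVerification(Level0SecretStorage):
    """Level 1: verifies who the caller is, then operates on the secret."""
    def __init__(self, allowed_callers):
        super().__init__()
        self._allowed_callers = set(allowed_callers)

    def use(self, caller, name, operation):
        if caller not in self._allowed_callers:
            raise PermissionError("unknown caller")
        # The secret is used inside the enclave; it is never returned raw.
        return operation(self._secrets[name])


class Level2PolicyEnforcement(Level1IdentityVerification):
    """Level 2: verifies the caller and also checks what they are doing."""
    def __init__(self, allowed_callers, policy):
        super().__init__(allowed_callers)
        self._policy = policy  # e.g. {"billing-service": {"sign"}}

    def use(self, caller, name, operation, operation_name):
        if operation_name not in self._policy.get(caller, set()):
            raise PermissionError(f"{caller} may not {operation_name}")
        return super().use(caller, name, operation)

A Level 0 store will happily hand the secret back to anyone who can reach it; each subsequent level narrows who can do what with it.
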
hashbrowncipher / magic.py
Created August 16, 2024 03:52
isolation of libmagic within Python
import os
import socket
import sys
import Pyro5.api
import Pyro5.server
import pyseccomp as seccomp
import serpent
@Pyro5.server.expose
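
As a hedged sketch of the pattern these imports suggest (my guess at the shape, not the gist's actual code; the python-magic dependency and the syscall deny-list are assumptions): run libmagic in a worker process, restrict its syscalls with seccomp after the magic database is loaded, and let callers reach it only over Pyro5.

# Illustrative sketch, not the gist: a libmagic worker whose syscalls are
# restricted with seccomp, reachable only via Pyro5 RPC.
import errno

import magic                      # python-magic; assumed for illustration
import Pyro5.server
import pyseccomp as seccomp
import serpent


@Pyro5.server.expose
class MagicWorker:
    def __init__(self):
        # Load the magic database before the sandbox tightens.
        self._magic = magic.Magic(mime=True)

    def identify(self, data) -> str:
        # Pyro5's serpent serializer encodes bytes as a base64 dict; convert back.
        return self._magic.from_buffer(serpent.tobytes(data))


def lock_down():
    # Default-allow filter that returns EPERM for syscalls libmagic should
    # not need once its database is loaded. The list is illustrative, not
    # exhaustive; a real sandbox would likely use a default-deny policy.
    f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
    for name in ("execve", "open", "openat", "unlink", "connect"):
        f.add_rule(seccomp.ERRNO(errno.EPERM), name)
    f.load()


def main():
    daemon = Pyro5.server.Daemon()
    uri = daemon.register(MagicWorker())
    print(uri)        # a parent process would connect with Pyro5.api.Proxy(uri)
    lock_down()
    daemon.requestLoop()


if __name__ == "__main__":
    main()

Even if an attacker finds a memory-corruption bug in libmagic, the compromised worker is limited to the syscalls the filter allows and to answering RPCs.
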
hashbrowncipher / strong_beyondcorp.md
Last active August 10, 2024 17:31
VPN eats your brains

This post is for people who have already decided that layer 7 authN/Z controls are right for their needs. This leaves the question of whether it's worth deploying a separate layer 3 control, like a VPN. Defense in depth is good, right? Who can argue with a belt-and-suspenders approach?

My general thought is that a belt-and-suspenders security measure is always a good idea if the cost of maintaining it doesn't result in underinvestment in any other security control. We can divide this cost into two categories:

  1. opportunity cost
  2. psychological cost

Nonces are bad and we should stop using them

Once upon a time, there were Feistel ciphers. Because of Feistel ciphers, we got used to (1) block ciphers with (2) key schedules. This was a bad idea, but we didn't know better at the time.

Why was it a bad idea?

Block ciphers were a bad idea because they are required to be invertible: you must be able to unambiguously decrypt everything that you encrypt. This,
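
To make the complaint in the title concrete, here is a small illustration of my own (not from the gist), using the third-party cryptography package: a nonce-based AEAD forces every caller to mint, track, and transmit a never-repeating value per message, and getting that wrong is catastrophic.

# Illustrative only: AES-GCM via the "cryptography" package. The caller
# must supply a unique nonce for every message under a given key; reuse
# silently destroys confidentiality and authenticity.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

nonce = os.urandom(12)   # 96-bit nonce, must never repeat for this key
ciphertext = aead.encrypt(nonce, b"attack at dawn", b"header")

# The nonce is not secret, but it is one more piece of per-message state
# the caller has to track and transmit alongside the ciphertext.
plaintext = aead.decrypt(nonce, ciphertext, b"header")
assert plaintext == b"attack at dawn"
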

hashbrowncipher / single_version.md
Last active February 21, 2024 06:20
The single version rule is good for Google and bad for you

I had a bad brush with the single version rule at work, and I’d like to warn you so that you can avoid the pain it caused me.

The best thing about monorepos

The best thing about monorepos is that they require the author of any given change to make the entire repository’s tests pass before committing that change. This puts the onus of managing incompatible changes on the original author, which:

  1. Centralizes the responsibility for resolving incompatibilities onto the person or team with the most knowledge of the change.
  2. (In the best case) exposes any incorrect assumptions they may have about their change early on in the process.

Largely as a result of the above property, monorepos obviate most of the overhead around dependency versioning. When dependencies are consumed at HEAD, it is no longer necessary to maintain separate artifact versioning, and the build and release pipelines for intermediate artifacts (e.g. base images, lib

hashbrowncipher / golden_images.md
Last active July 19, 2023 19:44
Don't use golden images. Do this instead.

tl;dr: If you run apt-get install reproducibly, there's no reason to use "golden images".

Background (about me)

I've now run into the "how do we manage and update a base operating system" problem at three different roles over the course of many years. At each role, my colleagues and I landed on reproducible installations using apt. Fair warning: the rest of this post will be apt-flavored, although I hope that the general lessons will be useful for any operating system.
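
As a sketch of what running apt-get install reproducibly can look like (my illustration, not the tooling from those jobs; the mirror URL, snapshot timestamp, and version pins are made up): point apt at a snapshotted mirror and install exact package versions, so the same inputs yield the same filesystem no matter when the build runs.

# Illustrative sketch only: pin the mirror to a point-in-time snapshot and
# install exact versions. The snapshot timestamp, package versions, and the
# assumption that this runs inside an image build (chroot/container) are
# all hypothetical.
import subprocess

SNAPSHOT_SOURCES = (
    "deb https://snapshot.debian.org/archive/debian/20230719T000000Z/"
    " bookworm main\n"
)

PINNED_PACKAGES = {
    "curl": "7.88.1-10",
    "openssl": "3.0.9-1",
}


def install_pinned():
    with open("/etc/apt/sources.list", "w") as f:
        f.write(SNAPSHOT_SOURCES)
    # Snapshot Release files are old, so tell apt not to reject them.
    subprocess.run(
        ["apt-get", "-o", "Acquire::Check-Valid-Until=false", "update"],
        check=True,
    )
    subprocess.run(
        ["apt-get", "install", "--yes", "--no-install-recommends"]
        + [f"{name}={version}" for name, version in sorted(PINNED_PACKAGES.items())],
        check=True,
    )

Because every input is pinned, rebuilding the image gives you the same result as having kept a golden image around, minus the drift.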

The most advanced incarnation of this system used Bazel as part of a monorepo to provide automated upgrades for hundreds of individual services. In that incarnation, it was possible to build new container images that remediated a given CVE org-wide in a single commit, which was very useful for me as a security engineer.

What are golden images?

hashbrowncipher / dont_trust_sourceip.md
Created June 23, 2023 09:35
Don't trust aws:SourceIP! I'll break your data perimeter with one neat trick.

One of my favorite talks from fwd:CloudSec this year was Jarom Brown's AWS Presigned URLs: The Good, The Bad, and the Ugly (abstract). In his talk, Jarom demonstrates a way to use JavaScript within an unsuspecting user's browser to make requests to AWS. Because most AWS services provide their APIs over HTTP, he was able to use standard browser-side facilities (e.g. XMLHttpRequest, fetch()) to synthesize requests to AWS, and then exfiltrate their responses to a server he controlled. This is most relevant in a post-compromise context, where an attacker already has access to a victim's AWS keys; it gives the attacker lots of leeway to disguise or change the source IP address presented to AWS.
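
For reference, this is the style of control the title warns about (my illustration, not from the post or the talk; the bucket name and CIDR are made up): a policy that denies requests from outside a trusted network range, which quietly assumes that every request bearing your credentials originates from machines you control.

# The kind of source-IP "perimeter" this post argues against relying on,
# written out as a Python dict for illustration.
SOURCE_IP_PERIMETER = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRequestsFromOutsideTheOffice",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-sensitive-bucket",
                "arn:aws:s3:::example-sensitive-bucket/*",
            ],
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": ["198.51.100.0/24"]}
            },
        }
    ],
}
# A request synthesized inside an employee's browser arrives from the
# trusted CIDR, so the condition passes, and nothing stops the response
# from being exfiltrated to an attacker-controlled server.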

The first question on my mind was "I thought CORS was supposed to stop this. Why didn't it?". It looks like the answer is that most AWS services are very happy to send an Access-Control-Allow-Origin: * header. For

But actually, how do flamegraphs work?

Q: So when I make a flamegraph, how does that work?

The specific type of flamegraph you're seeing is an on-cpu flamegraph. It is synthesized by periodically stopping the CPU, asking "what are you doing right now", and storing the answer. The flamegraph itself is a convenient representation of the data produced by doing that very frequently (see the -F argument to perf record) over some period of time.
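
As a toy illustration of that sampling loop (mine, not how perf is implemented): grab the running stack at a fixed frequency, tally identical stacks, and emit them in the semicolon-joined folded format that flamegraph generators consume.

# Toy in-process sampler, for illustration only (perf does this in the
# kernel, far more cheaply): wake up at a fixed frequency, record the main
# thread's current stack, and count how often each distinct stack appears.
import collections
import sys
import threading
import time


def sample_stacks(target_thread_id, hz=99, duration=2.0):
    counts = collections.Counter()
    deadline = time.time() + duration
    while time.time() < deadline:
        frame = sys._current_frames().get(target_thread_id)
        names = []
        while frame is not None:
            names.append(frame.f_code.co_name)
            frame = frame.f_back
        if names:
            counts[";".join(reversed(names))] += 1  # root;caller;...;leaf
        time.sleep(1.0 / hz)
    return counts


def busy(seconds=2.0):
    # Something for the sampler to observe.
    deadline = time.time() + seconds
    while time.time() < deadline:
        sum(i * i for i in range(1000))


if __name__ == "__main__":
    main_id = threading.get_ident()
    folded = collections.Counter()
    sampler = threading.Thread(target=lambda: folded.update(sample_stacks(main_id)))
    sampler.start()
    busy()
    sampler.join()
    for stack, n in folded.most_common():
        print(stack, n)   # e.g. "<module>;busy;<genexpr> 137"

Each output line is one distinct stack plus the number of samples that landed in it; a flamegraph is just those lines drawn as nested rectangles whose widths are proportional to the counts.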

hashbrowncipher / contracts_do_not_bind.md
Last active August 15, 2022 16:21
Why contracts within engineering organizations don't work.

One time at work, my team was upgrading an open source search-engine-cum-database that had an unfortunate predilection for breaking its external API. We had already deployed the new version of the database with its breaking changes, and now it was time to herd our customers off of the old version and onto the new version. Our customers were understandably reluctant: for most of them it was just a bunch of work for very little reward. The migration would require careful testing, and just generally it didn't sound like a fun time. To top the situation off, some of these customers' services hadn't been touched in years, and the original authors had long since left.

I'm proud to say that my team was significantly more interested in accommodating our customers' needs than some other DBA teams I've worked with or around. During the migration we spent a fair bit of time chewing on ways to lessen the burden we placed on our customers. At one point the possibility of simply "handing off" the outdated search engines was dis

hashbrowncipher / tiers_considered_harmful.md
Created June 10, 2022 07:36
Tiers considered harmful

tl;dr:

  • don't define security tiers; use security cells instead
  • each service should have its own security cell

One time at JOB_1, we had a problem: we were about to start serving a dataset that was significantly more sensitive than our typical dataset. Leaking it would likely produce scary consequences in the real world, with ripple effects beyond just the company and its shareholders. We hadn't dealt with issues of that kind before, and as engineers on the project, my colleagues and I felt duty-bound to find the "right" solution. We started looking for other places where high-security data was stored, and we found one owned by another team in the org. Perfect! We would store our dataset alongside the existing dataset, and they'd both be safe together in the protective cocoon of the high-security environment.

This didn't work. The folks who owned the existing dataset were distrustful of sharing their meticulously constructed environment with our team, and for good reason: adding our application t