Running Untrusted Code in Docker on macOS

You downloaded a public repo from GitHub. It looked fine at the time. A month later, a dependency gets compromised, or a contributor pushes malicious code, or an auto-updater pulls something you didn't ask for. The code is on your filesystem. If it runs with your user permissions, it can read your SSH keys, your AWS credentials, your browser cookies — anything your user account can touch.

Docker fixes this. A container is an isolated Linux environment that only sees what you explicitly give it. No mounts means no access to your Mac's filesystem. No network flag means no outbound connections. The code runs, but it runs in a box.

This guide covers how to set that up from scratch on macOS.


Install Docker Desktop

Download Docker Desktop from docker.com/products/docker-desktop. Run the installer and follow the prompts. Once it's running, you'll see the whale icon in your menu bar.

Verify the install:

docker --version
docker run --rm hello-world

If both commands work, you're good.

Resource limits: Docker Desktop defaults to half your Mac's CPU cores and 2–4 GB of RAM. You can adjust this in Docker Desktop → Settings → Resources. For most repo work, the defaults are fine.
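
You can also check the current allocation from the terminal. This is a quick sketch using docker info's Go-template output; the field names match current Docker CLI versions:

docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'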


The Core Command

Here's the baseline for running any untrusted repo in a sandboxed container:

docker run --rm -it \
  --network none \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash

Every flag matters. Here's what each one does:

  • --rm: Deletes the container when you exit. No leftover state.
  • -it: Interactive terminal. You get a shell inside the container.
  • --network none: Kills all network access. The container can't reach the internet, your LAN, or any local services.
  • -v "$(pwd)/my-repo:/app:ro": Mounts the repo into the container at /app, read-only. The container can see the code but can't modify it.
  • -v "$(pwd)/output:/app/output": Mounts an output directory with write access. This is the only place the container can write files that persist after it exits.
  • -w /app: Sets the working directory inside the container to /app.
  • node:22: The base image. Swap this for whatever runtime the repo needs.
  • bash: Drops you into a shell instead of running the repo's default entrypoint.

The :ro flag on the repo mount is important. It means even if the code tries to modify its own files (or yours), the write fails. The output directory is the one exception — you control what goes there.
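
For a one-shot run instead of an interactive shell, put the command in place of bash. Here index.js stands in for whatever the repo's actual entrypoint is:

docker run --rm \
  --network none \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  node index.js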


Choosing Your Base Image

Pick the image that matches the repo's runtime:

  • Node.js: node:22 (docker run --rm -it node:22 bash)
  • Python: python:3.12 (docker run --rm -it python:3.12 bash)
  • Go: golang:1.22 (docker run --rm -it golang:1.22 bash)
  • Ruby: ruby:3.3 (docker run --rm -it ruby:3.3 bash)
  • Rust: rust:1.77 (docker run --rm -it rust:1.77 bash)
  • General / multi-language: ubuntu:24.04 (docker run --rm -it ubuntu:24.04 bash)

If the repo has a Dockerfile, don't blindly build it. That Dockerfile could do anything — download scripts, set environment variables, run install hooks. Read it first. If you trust it, build it. If you don't, use a clean base image and install dependencies manually inside the container.
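
A quick first pass might look like this; the patterns are illustrative, not exhaustive:

# Flag Dockerfile lines that pull remote content
grep -nE 'curl|wget|git clone|ADD http' my-repo/Dockerfile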


Filesystem Rules

This is where most people get it wrong. The container's isolation is only as good as what you mount into it.

Never mount these directories

These paths contain credentials, keys, tokens, and session data. Mounting any of them gives the container access to your identity:

  • ~ (your entire home directory)
  • ~/.ssh (SSH keys — access to every server and Git remote you use)
  • ~/.aws (AWS credentials)
  • ~/.config (app configs, tokens, API keys)
  • ~/.gnupg (GPG keys)
  • ~/.kube (Kubernetes configs)
  • ~/.docker (Docker credentials)
  • ~/Library (macOS app data, keychains, browser data)
  • /etc (system configuration)
  • /var (system state)
  • Any directory containing .env files with secrets

Safe to mount

  • The repo directory itself (always use :ro unless you have a specific reason to write)
  • A dedicated empty output directory you created for this purpose
  • Sample data files the repo needs to process (mount individually, not as a parent directory; see the example below)
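
For that last case, mount the file itself rather than the directory that contains it (data/sample.csv is a placeholder filename):

-v "$(pwd)/data/sample.csv:/app/data/sample.csv:ro"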

The output directory pattern

Create a fresh output directory before each run:

mkdir -p ~/docker-output/project-name

Mount it as the one writable path:

-v ~/docker-output/project-name:/app/output

After the container exits, check what it wrote before you trust those files. Treat output files with the same suspicion you'd give an email attachment from a stranger.
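
A reasonable first pass: list what's there and check file types before opening anything.

ls -la ~/docker-output/project-name
file ~/docker-output/project-name/*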


Network Control

--network none is the default you should use. It blocks all network access — outbound HTTP, DNS resolution, everything. The container is completely offline.

When you need network access

Some repos need to install dependencies (npm install, pip install, etc.) before they can run. Here's the workflow:

Step 1: Allow network for setup only.

docker run --rm -it \
  -v "$(pwd)/my-repo:/app" \
  -w /app \
  node:22 \
  bash

No --network none here. Inside the container, install dependencies:

npm install

Then exit the container.

Step 2: Run with network disabled.

There's a catch. Because Step 1 mounted the repo read-write, npm install wrote node_modules straight into the repo directory on your Mac, which means untrusted install scripts ran with write access to your filesystem. Keep the mount read-only and install into the container's own filesystem instead, and --rm throws the result away on exit. Either way the naive approach falls short, which is why you want a named image.

The build-then-run pattern

This is the right way to handle repos that need dependencies:

# Step 1: Create a container, install deps, save it as an image
docker run -it \
  --name temp-setup \
  -v "$(pwd)/my-repo:/app:ro" \
  -w /app \
  node:22 \
  bash -c "cp -r /app /workspace && cd /workspace && npm install"

docker commit temp-setup my-repo-sandboxed
docker rm temp-setup

# Step 2: Run the saved image with no network
docker run --rm -it \
  --network none \
  -v "$(pwd)/output:/workspace/output" \
  -w /workspace \
  my-repo-sandboxed \
  bash

This gives you a snapshot with dependencies already installed. Every subsequent run uses --network none. The code can execute, but it can't phone home.
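
From there, a one-shot run against the snapshot looks like this, with npm test standing in for whatever command the repo actually uses:

docker run --rm \
  --network none \
  -w /workspace \
  my-repo-sandboxed \
  npm test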

Temporary network access

If you need to allow network access for a specific task (like fetching an API response), run a separate container without --network none. Don't toggle network on an existing container — start clean so you control exactly when the network is available.
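
For example, to fetch a file into the output directory from a throwaway container (node:22 ships with curl; the URL is a placeholder):

docker run --rm \
  -v "$(pwd)/output:/out" \
  node:22 \
  curl -o /out/response.json https://api.example.com/data

Then feed the downloaded file to the locked-down container as a read-only mount.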


Limiting Container Resources

Beyond filesystem and network, you can cap CPU and memory so a compromised process can't eat your system:

docker run --rm -it \
  --network none \
  --memory 512m \
  --cpus 1.0 \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash

  • --memory 512m: Caps RAM at 512 MB. The container gets killed if it exceeds this.
  • --cpus 1.0: Limits the container to one CPU core.

These aren't strictly security — they're guardrails against runaway processes.
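
To watch what a container is actually using, open a second terminal on your Mac:

docker stats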


Running as a Non-Root User

By default, processes inside a Docker container run as root (within the container). This root user can't touch your Mac's filesystem beyond what's mounted, but it's still best practice to drop privileges:

docker run --rm -it \
  --network none \
  --user 1000:1000 \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash

--user 1000:1000 runs the process as a non-root user with UID/GID 1000. This limits what the process can do inside the container itself — it can't install packages system-wide or modify system files within the container.

Make sure the output directory has the right permissions:

chmod 777 ~/docker-output/project-name

Or use chown to match the UID:

sudo chown 1000:1000 ~/docker-output/project-name
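
Inside the container, confirm the user took effect:

id
# Should show uid=1000 gid=1000, not uid=0 (root)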

Dropping Linux Capabilities

Docker containers get a subset of Linux capabilities by default. You can strip them further:

docker run --rm -it \
  --network none \
  --cap-drop ALL \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash

--cap-drop ALL removes every Linux capability. The process can't change file ownership, bind to privileged ports, or use raw sockets. For running and testing code, this is almost always fine. If something breaks, add back only the specific capability it needs with --cap-add.
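
For example, if a build step genuinely needs to change file ownership, grant back only that one capability (CHOWN here is illustrative; substitute whatever the error message names):

--cap-drop ALL \
--cap-add CHOWN \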


Blocking the Auto-Updater Pattern

Some repos include scripts that pull live code from the internet — update checkers, self-updaters, post-install hooks. --network none already prevents these from connecting, but you can also neutralize them at the file level before mounting:

# Before running the container, stub out known updater scripts
echo '#!/usr/bin/env node
console.log("updates disabled");' > my-repo/update-system.mjs

Or hunt for updater logic from inside the container. The /workspace path assumes you copied the repo in with the build-then-run pattern; use /app if it's mounted:

# Inside the container: flag scripts that mention network activity
find /workspace -name "*.mjs" -exec grep -l "fetch\|download\|update" {} \;

Review anything that shows up. If a script fetches remote code at runtime, disable it.


Quick Reference: Copy-Paste Commands

Maximum isolation (read-only repo, no network, no capabilities, resource limits)

docker run --rm -it \
  --network none \
  --cap-drop ALL \
  --memory 512m \
  --cpus 1.0 \
  --user 1000:1000 \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash

Dependency install (network on, then save)

docker run -it --name temp-setup \
  -v "$(pwd)/my-repo:/app:ro" \
  -w /app \
  node:22 \
  bash -c "cp -r /app /workspace && cd /workspace && npm install"

docker commit temp-setup my-repo-sandboxed
docker rm temp-setup

Run saved image with full lockdown

docker run --rm -it \
  --network none \
  --cap-drop ALL \
  --memory 512m \
  --cpus 1.0 \
  -v "$(pwd)/output:/workspace/output" \
  -w /workspace \
  my-repo-sandboxed \
  bash

Shell Aliases for Convenience

Add these to your ~/.zshrc (or ~/.bashrc) so you don't have to type the full command every time:

# Run a sandboxed container with no network (Node)
sandbox-node() {
  local repo="${1:-.}"
  local output="${2:-./output}"
  mkdir -p "$output"
  docker run --rm -it \
    --network none \
    --cap-drop ALL \
    --memory 512m \
    --cpus 1.0 \
    -v "$(cd "$repo" && pwd):/app:ro" \
    -v "$(cd "$output" && pwd):/app/output" \
    -w /app \
    node:22 \
    bash
}

# Run a sandboxed container with no network (Python)
sandbox-python() {
  local repo="${1:-.}"
  local output="${2:-./output}"
  mkdir -p "$output"
  docker run --rm -it \
    --network none \
    --cap-drop ALL \
    --memory 512m \
    --cpus 1.0 \
    -v "$(cd "$repo" && pwd):/app:ro" \
    -v "$(cd "$output" && pwd):/app/output" \
    -w /app \
    python:3.12 \
    bash
}

# Same as above but WITH network (for installing dependencies)
sandbox-setup() {
  local repo="${1:-.}"
  local image="${2:-node:22}"
  docker run --rm -it \
    -v "$(cd "$repo" && pwd):/app" \
    -w /app \
    "$image" \
    bash
}

Usage:

sandbox-node ./my-repo ./my-output
sandbox-python ./my-repo
sandbox-setup ./my-repo python:3.12

Verifying Your Isolation

After starting a container, run these checks to confirm the sandbox is working:

# Verify no network
curl https://google.com
# Should fail: "Could not resolve host" or similar

# Verify filesystem isolation
ls /root
# Should show the container's /root, not your Mac's home directory

ls /Users
# Should fail or show nothing — your Mac's filesystem isn't visible

# Verify read-only mount
touch /app/test-write
# Should fail with "Read-only file system"

# Verify output is writable
touch /app/output/test-write
# Should succeed
rm /app/output/test-write
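
If you started the container with --cap-drop ALL, you can also confirm the capability set is empty:

# Verify capabilities are dropped
grep CapEff /proc/self/status
# Should show all zeros with --cap-drop ALL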

What Docker Does Not Protect Against

Docker is strong isolation for this use case, but it's not a security absolute. Know the edges:

  • Docker Desktop vulnerabilities. Docker itself can have bugs. Keep Docker Desktop updated.
  • Mounted volumes. Anything you mount is accessible. This is why the mount rules above matter. One bad -v flag undoes the isolation.
  • Container escape exploits. These exist but are rare and typically require root inside the container. Running as non-root with --cap-drop ALL reduces this risk significantly.
  • Data you put in the output directory. If the code writes a malicious script to your output dir and you later run it outside the container, Docker can't help you.
  • macOS-specific paths. Docker Desktop on Mac runs a Linux VM under the hood. File sharing between macOS and the VM is handled by Docker Desktop's settings. Make sure "File Sharing" in Docker Desktop settings only includes directories you intend to share.

Cleanup

Remove images you no longer need:

# List saved images
docker images

# Remove a specific image
docker image rm my-repo-sandboxed

# Remove stopped containers, dangling images, and unused networks
# (add -a to also remove unused tagged images)
docker system prune

Containers created with --rm clean themselves up on exit. If you used --name without --rm, clean up manually:

docker rm container-name
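
To check whether anything is left over:

docker ps -a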

Summary

The workflow is: isolate the filesystem, kill the network, drop privileges, and treat output with suspicion. Docker handles the first three; the last one is on you. The key flags are --network none, volume mounts with :ro, --cap-drop ALL, and --user. Use the build-then-run pattern when you need dependencies. Keep your mount list tight and never expose your home directory or credential paths.
