You downloaded a public repo from GitHub. It looked fine at the time. A month later, a dependency gets compromised, or a contributor pushes malicious code, or an auto-updater pulls something you didn't ask for. The code is on your filesystem. If it runs with your user permissions, it can read your SSH keys, your AWS credentials, your browser cookies — anything your user account can touch.
Docker fixes this. A container is an isolated Linux environment that only sees what you explicitly give it. No mounts means no access to your Mac's filesystem. No network flag means no outbound connections. The code runs, but it runs in a box.
This guide covers how to set that up from scratch on macOS.
Download Docker Desktop from docker.com/products/docker-desktop. Run the installer and follow the prompts. Once it's running, you'll see the whale icon in your menu bar.
Verify the install:
```bash
docker --version
docker run --rm hello-world
```

If both commands work, you're good.
Resource limits: Docker Desktop defaults to half your Mac's CPU cores and 2–4 GB of RAM. You can adjust this in Docker Desktop → Settings → Resources. For most repo work, the defaults are fine.
Here's the baseline for running any untrusted repo in a sandboxed container:
```bash
docker run --rm -it \
  --network none \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash
```

Every flag matters. Here's what each one does:
| Flag | What it does |
|---|---|
| `--rm` | Deletes the container when you exit. No leftover state. |
| `-it` | Interactive terminal. You get a shell inside the container. |
| `--network none` | Kills all network access. The container can't reach the internet, your LAN, or any local services. |
| `-v "$(pwd)/my-repo:/app:ro"` | Mounts the repo into the container at `/app`, read-only. The container can see the code but can't modify it. |
| `-v "$(pwd)/output:/app/output"` | Mounts an output directory with write access. This is the only place the container can write files that persist after it exits. |
| `-w /app` | Sets the working directory inside the container to `/app`. |
| `node:22` | The base image. Swap this for whatever runtime the repo needs. |
| `bash` | Drops you into a shell instead of running the repo's default entrypoint. |
The `:ro` suffix on the repo mount is important. It means that even if the code tries to modify its own files (or yours), the write fails. The output directory is the one exception — you control what goes there.
Pick the image that matches the repo's runtime:
| Language/Runtime | Image | Example |
|---|---|---|
| Node.js | `node:22` | `docker run --rm -it node:22 bash` |
| Python | `python:3.12` | `docker run --rm -it python:3.12 bash` |
| Go | `golang:1.22` | `docker run --rm -it golang:1.22 bash` |
| Ruby | `ruby:3.3` | `docker run --rm -it ruby:3.3 bash` |
| Rust | `rust:1.77` | `docker run --rm -it rust:1.77 bash` |
| General / multi-language | `ubuntu:24.04` | `docker run --rm -it ubuntu:24.04 bash` |
If the repo has a Dockerfile, don't blindly build it. That Dockerfile could do anything — download scripts, set environment variables, run install hooks. Read it first. If you trust it, build it. If you don't, use a clean base image and install dependencies manually inside the container.
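That read-the-Dockerfile step can be jump-started with a grep. A minimal sketch — `audit_dockerfile` is a name I made up, and the pattern list is illustrative, not a complete scanner; nothing replaces actually reading the file:

```shell
# Flag Dockerfile lines that fetch or pipe remote content into a shell.
# A hit means: read that line carefully before deciding to build.
audit_dockerfile() {
  grep -nE 'curl|wget|ADD[[:space:]]+https?://|\|[[:space:]]*(ba|z)?sh' "$1" \
    || echo "no obvious network fetches found in $1"
}
```

Run it as `audit_dockerfile ./my-repo/Dockerfile`; it prints matching lines with line numbers. Treat any hit as a reason to use a clean base image instead.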
This is where most people get it wrong. The container's isolation is only as good as what you mount into it.
These paths contain credentials, keys, tokens, and session data. Mounting any of them gives the container access to your identity:
- `~` (your entire home directory)
- `~/.ssh` (SSH keys — access to every server and Git remote you use)
- `~/.aws` (AWS credentials)
- `~/.config` (app configs, tokens, API keys)
- `~/.gnupg` (GPG keys)
- `~/.kube` (Kubernetes configs)
- `~/.docker` (Docker credentials)
- `~/Library` (macOS app data, keychains, browser data)
- `/etc` (system configuration)
- `/var` (system state)
- Any directory containing `.env` files with secrets

What's safe to mount:

- The repo directory itself (always use `:ro` unless you have a specific reason to write)
- A dedicated empty output directory you created for this purpose
- Sample data files the repo needs to process (mount individually, not as a parent directory)
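These rules can also be enforced mechanically before you type a `-v` flag. A hypothetical pre-flight check — `DENYLIST` and `is_safe_mount` are names of my own, and the list mirrors the paths above; extend it to taste:

```shell
# Refuse to mount the home directory itself or anything under a credential path.
DENYLIST=(
  "$HOME/.ssh" "$HOME/.aws" "$HOME/.config" "$HOME/.gnupg"
  "$HOME/.kube" "$HOME/.docker" "$HOME/Library" /etc /var
)

is_safe_mount() {
  local path="$1" d
  [ "$path" = "$HOME" ] && return 1     # never the entire home directory
  for d in "${DENYLIST[@]}"; do
    case "$path" in
      "$d"|"$d"/*) return 1 ;;          # the path itself, or anything inside it
    esac
  done
  return 0
}
```

Call it with an absolute path before mounting: `is_safe_mount "$HOME/.ssh" || echo refused`.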
Create a fresh output directory before each run:
```bash
mkdir -p ~/docker-output/project-name
```

Mount it as the one writable path:

```bash
-v ~/docker-output/project-name:/app/output
```

After the container exits, check what it wrote before you trust those files. Treat output files with the same suspicion you'd give an email attachment from a stranger.
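That post-run check can itself be a small script. A sketch — `audit_output` is a hypothetical helper, and the default path follows the example above:

```shell
# List everything the container wrote, then flag any file that came back
# with an execute bit set -- a script you might run by accident later.
audit_output() {
  local out="${1:-$HOME/docker-output/project-name}"
  find "$out" -type f -exec ls -l {} +
  echo "--- files with an execute bit ---"
  find "$out" -type f -perm -u+x
}
```

Anything under the execute-bit header deserves a read before it ever runs outside the container.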
--network none is the default you should use. It blocks all network access — outbound HTTP, DNS resolution, everything. The container is completely offline.
Some repos need to install dependencies (npm install, pip install, etc.) before they can run. Here's the workflow:
Step 1: Allow network for setup only.
```bash
docker run --rm -it \
  -v "$(pwd)/my-repo:/app" \
  -w /app \
  node:22 \
  bash
```

No `--network none` here. Inside the container, install dependencies:

```bash
npm install
```

Then exit the container.
Step 2: Run with network disabled.

Here's the catch: the dependencies were installed into the container's own filesystem, which is ephemeral. Because we used `--rm`, they vanished the moment you exited. To keep them, you need a two-step approach with a named image.
This is the right way to handle repos that need dependencies:
```bash
# Step 1: Create a container, install deps, save it as an image
docker run -it \
  --name temp-setup \
  -v "$(pwd)/my-repo:/app:ro" \
  -w /app \
  node:22 \
  bash -c "cp -r /app /workspace && cd /workspace && npm install"

docker commit temp-setup my-repo-sandboxed
docker rm temp-setup

# Step 2: Run the saved image with no network
docker run --rm -it \
  --network none \
  -v "$(pwd)/output:/workspace/output" \
  -w /workspace \
  my-repo-sandboxed \
  bash
```

This gives you a snapshot with dependencies already installed. Every subsequent run uses `--network none`. The code can execute, but it can't phone home.
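If you use this pattern often, a tiny dry-run helper that prints both steps for review (before you paste them into a terminal) keeps them consistent. `print_sandbox_plan` is a hypothetical name, and the install command assumes a Node repo; nothing in it invokes Docker:

```shell
# Print the build-then-run plan for a repo without executing anything.
print_sandbox_plan() {
  local repo="$1" image="${2:-node:22}" tag="${3:-my-repo-sandboxed}"
  cat <<EOF
docker run -it --name temp-setup -v "$repo:/app:ro" -w /app $image \\
  bash -c "cp -r /app /workspace && cd /workspace && npm install"
docker commit temp-setup $tag
docker rm temp-setup
docker run --rm -it --network none -w /workspace $tag bash
EOF
}
```

Reviewing the printed commands takes seconds and catches a forgotten `--network none` before it matters.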
If you need to allow network access for a specific task (like fetching an API response), run a separate container without --network none. Don't toggle network on an existing container — start clean so you control exactly when the network is available.
Beyond filesystem and network, you can cap CPU and memory so a compromised process can't eat your system:
```bash
docker run --rm -it \
  --network none \
  --memory 512m \
  --cpus 1.0 \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash
```

| Flag | What it does |
|---|---|
| `--memory 512m` | Caps RAM at 512 MB. The container gets killed if it exceeds this. |
| `--cpus 1.0` | Limits the container to one CPU core. |
These aren't strictly security — they're guardrails against runaway processes.
By default, processes inside a Docker container run as root (within the container). This root user can't touch your Mac's filesystem beyond what's mounted, but it's still best practice to drop privileges:
```bash
docker run --rm -it \
  --network none \
  --user 1000:1000 \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash
```

`--user 1000:1000` runs the process as a non-root user with UID/GID 1000. This limits what the process can do inside the container itself — it can't install packages system-wide or modify system files within the container.
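If you'd rather not hard-code `1000:1000`, you can derive the IDs from your own account, so files written to the output mount come back owned by you. A small convenience sketch (`user_flag` is just a local variable name I chose):

```shell
# Build the --user flag from the host account instead of hard-coding 1000:1000.
user_flag="--user $(id -u):$(id -g)"

# Drop it into the baseline command; the other flags are unchanged:
# docker run --rm -it --network none $user_flag \
#   -v "$(pwd)/my-repo:/app:ro" -v "$(pwd)/output:/app/output" -w /app node:22 bash
echo "$user_flag"
```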
Make sure the output directory has the right permissions:
```bash
chmod 777 ~/docker-output/project-name
```

Or use `chown` to match the UID:

```bash
sudo chown 1000:1000 ~/docker-output/project-name
```

Docker containers get a subset of Linux capabilities by default. You can strip them further:
```bash
docker run --rm -it \
  --network none \
  --cap-drop ALL \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash
```

`--cap-drop ALL` removes every Linux capability. The process can't change file ownership, bind to privileged ports, or use raw sockets. For running and testing code, this is almost always fine. If something breaks, add back only the specific capability it needs with `--cap-add`.
Some repos include scripts that pull live code from the internet — update checkers, self-updaters, post-install hooks. --network none already prevents these from connecting, but you can also neutralize them at the file level before mounting:
```bash
# Before running the container, stub out known updater scripts
echo '#!/usr/bin/env node
console.log("updates disabled");' > my-repo/update-system.mjs
```

Or mount the repo read-only and handle this inside the container:

```bash
# Inside the container (if repo was copied, not mounted read-only)
find /workspace -name "*.mjs" -exec grep -l "fetch\|download\|update" {} \;
```

Review anything that shows up. If a script fetches remote code at runtime, disable it.
Putting it all together, the maximally locked-down run:

```bash
docker run --rm -it \
  --network none \
  --cap-drop ALL \
  --memory 512m \
  --cpus 1.0 \
  --user 1000:1000 \
  -v "$(pwd)/my-repo:/app:ro" \
  -v "$(pwd)/output:/app/output" \
  -w /app \
  node:22 \
  bash
```

The build-then-run pattern for repos that need dependencies:

```bash
docker run -it --name temp-setup \
  -v "$(pwd)/my-repo:/app:ro" \
  -w /app \
  node:22 \
  bash -c "cp -r /app /workspace && cd /workspace && npm install"
docker commit temp-setup my-repo-sandboxed
docker rm temp-setup

docker run --rm -it \
  --network none \
  --cap-drop ALL \
  --memory 512m \
  --cpus 1.0 \
  -v "$(pwd)/output:/workspace/output" \
  -w /workspace \
  my-repo-sandboxed \
  bash
```

Add these to your `~/.zshrc` (or `~/.bashrc`) so you don't have to type the full command every time:
```bash
# Run a sandboxed container with no network (Node)
sandbox-node() {
  local repo="${1:-.}"
  local output="${2:-./output}"
  mkdir -p "$output"
  docker run --rm -it \
    --network none \
    --cap-drop ALL \
    --memory 512m \
    --cpus 1.0 \
    -v "$(cd "$repo" && pwd):/app:ro" \
    -v "$(cd "$output" && pwd):/app/output" \
    -w /app \
    node:22 \
    bash
}

# Run a sandboxed container with no network (Python)
sandbox-python() {
  local repo="${1:-.}"
  local output="${2:-./output}"
  mkdir -p "$output"
  docker run --rm -it \
    --network none \
    --cap-drop ALL \
    --memory 512m \
    --cpus 1.0 \
    -v "$(cd "$repo" && pwd):/app:ro" \
    -v "$(cd "$output" && pwd):/app/output" \
    -w /app \
    python:3.12 \
    bash
}

# Same as above but WITH network (for installing dependencies)
sandbox-setup() {
  local repo="${1:-.}"
  local image="${2:-node:22}"
  docker run --rm -it \
    -v "$(cd "$repo" && pwd):/app" \
    -w /app \
    "$image" \
    bash
}
```

Usage:

```bash
sandbox-node ./my-repo ./my-output
sandbox-python ./my-repo
sandbox-setup ./my-repo python:3.12
```

After starting a container, run these checks to confirm the sandbox is working:
```bash
# Verify no network
curl https://google.com
# Should fail: "Could not resolve host" or similar

# Verify filesystem isolation
ls /root
# Should show the container's /root, not your Mac's home directory
ls /Users
# Should fail or show nothing — your Mac's filesystem isn't visible

# Verify read-only mount
touch /app/test-write
# Should fail with "Read-only file system"

# Verify output is writable
touch /app/output/test-write
# Should succeed
rm /app/output/test-write
```

Docker is strong isolation for this use case, but it's not a security absolute. Know the edges:
- Docker Desktop vulnerabilities. Docker itself can have bugs. Keep Docker Desktop updated.
- Mounted volumes. Anything you mount is accessible. This is why the mount rules above matter. One bad `-v` flag undoes the isolation.
- Container escape exploits. These exist but are rare and typically require root inside the container. Running as non-root with `--cap-drop ALL` reduces this risk significantly.
- Data you put in the output directory. If the code writes a malicious script to your output dir and you later run it outside the container, Docker can't help you.
- macOS-specific paths. Docker Desktop on Mac runs a Linux VM under the hood. File sharing between macOS and the VM is handled by Docker Desktop's settings. Make sure "File Sharing" in Docker Desktop settings only includes directories you intend to share.
Remove images you no longer need:
```bash
# List saved images
docker images

# Remove a specific image
docker image rm my-repo-sandboxed

# Remove all stopped containers and unused images
docker system prune
```

Containers created with `--rm` clean themselves up on exit. If you used `--name` without `--rm`, clean up manually:

```bash
docker rm container-name
```

The workflow is: isolate the filesystem, kill the network, drop privileges, and treat output with suspicion. Docker gives you all four. The key flags are `--network none`, volume mounts with `:ro`, `--cap-drop ALL`, and `--user`. Use the build-then-run pattern when you need dependencies. Keep your mount list tight and never expose your home directory or credential paths.