A USB-bootable Linux appliance that maintains a warm pool of real Chrome browsers, accessible over a private network via the Chrome DevTools Protocol.
Plug a USB stick into any x86-64 machine. It boots a minimal Linux system, connects to your Tailscale network, and offers a pool of ready-to-use Chrome browsers. Each browser runs fully headed with a real GPU, real display, and real fingerprints. You acquire a browser through a lightweight API, do your work through CDP, release it, and it resets itself for the next caller. No VMs, no containers, no cloud. Just real browsers on real hardware, accessible from anywhere on your tailnet.
The system is designed around one core principle: browsers should be warm, not cold. Starting Chrome takes seconds, but getting it into a clean, ready, authenticated state takes longer. The pool keeps browsers pre-warmed and pre-configured so callers get instant access. When a browser is released, it resets to a known-good state from a master profile and returns to the pool.
The reference machine is a desktop-class x86-64 with:
- CPU: Any modern x86-64 (Intel or AMD)
- RAM: 32 GB minimum. Each Chrome instance uses 800 MB--3 GB depending on workload. 32 GB supports 8--15 concurrent browsers with headroom for the OS.
- GPU: Intel integrated (UHD 630 or similar) via the i915 driver. This gives every browser a real WebGL context and authentic GPU fingerprints. Discrete GPUs work too but aren't required.
- Storage: A single USB stick (32 GB recommended). The OS runs from RAM after boot. The USB holds the live image and a persistent partition for configuration, profiles, and state.
- Network: Ethernet. Tailscale handles all connectivity -- no public IP needed, no port forwarding.
- Display: A physical monitor is optional. The system runs X11 with i3 at 1920x1080 regardless. VNC provides remote visual access.
The machine needs no hard drive. It boots entirely from USB, loads the squashfs image into RAM, and runs from there. The only writes go to the persistent partition on the USB stick.
```
                  Tailscale Network
                         |
             +-----------+-----------+
             |                       |
        Port 7600               Ports 9220+
      Control Plane              Data Plane
      (ZMQ ROUTER)            (CDP WebSocket)
             |                       |
             v                       v
    +---------------+     +-------------------+
    |    Daemon     |     |    Chrome Pool    |
    |               |---->|  b-0 (port 9220)  |
    |  acquire      |     |  b-1 (port 9221)  |
    |  release      |     |  b-2 (port 9222)  |
    |  save-profile |     |  ...              |
    |  list         |     |  b-N (port 922N)  |
    |  status       |     +-------------------+
    +---------------+               |
            |                       |
        Port 7601           i3 Window Manager
       HTTP Health          (one workspace per
       (/health)            browser, fullscreen)
                                    |
                               X11 on :0
                              VNC on :5900
```
Control plane -- A ZMQ ROUTER socket on port 7600 handles pool management: acquiring browsers, releasing them, saving profiles, checking status. JSON protocol, sub-millisecond latency. A small HTTP server on port 7601 serves health checks for monitoring.
Data plane -- Each browser exposes CDP (Chrome DevTools Protocol) on its own port. Browser 0 gets port 9220, browser 1 gets 9221, and so on. Callers connect directly to the browser's CDP WebSocket. iptables DNAT rules forward traffic from the Tailscale interface to localhost where Chrome listens.
All access is through Tailscale. Nothing listens on public interfaces. iptables rules DNAT traffic arriving on the tailscale0 interface to 127.0.0.1 for each CDP port. The daemon itself listens on all interfaces (for localhost CLI access), but the practical entry point is always the machine's Tailscale IP.
SSH is available on the Tailscale IP (key-only, no passwords). VNC is available on port 5900 (also Tailscale-only) for visual debugging.
Since Tailscale ACLs control who can reach the machine, the API itself needs no authentication layer. If you can reach the Tailscale IP, you're authorized.
Every browser slot moves through a state machine:
```
STARTING ──> AVAILABLE ──> ACQUIRED ──> RESETTING ──> AVAILABLE
    |
    v
  DEAD ──> (retry or remove)
```
STARTING -- Chrome is launching. The daemon waits for the CDP endpoint to respond (HTTP GET to /json/version). Timeout: 15 seconds with exponential backoff. Success transitions to AVAILABLE. Failure transitions to DEAD.
AVAILABLE -- Ready for checkout. The browser has a clean profile, CDP is responding, and the process is healthy. This is the resting state.
ACQUIRED -- Checked out by a caller. A lease token (32-byte cryptographic random, URL-safe) is issued. The lease has a TTL (default: 30 minutes). The caller must present this token to release the browser. If the TTL expires, the daemon force-releases and resets.
RESETTING -- Hard reset in progress. The sequence: kill the Chrome process (SIGTERM, wait 5 seconds, then SIGKILL if needed), delete the working profile directory, copy a fresh profile from the master, patch preferences, relaunch Chrome, verify CDP responds. On success, transitions to AVAILABLE.
DEAD -- The browser is gone. Process crashed, CDP unresponsive after repeated checks, or launch failed. The health loop will retry (up to 5 attempts for core slots). Overflow slots may be removed instead of retried.
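As a minimal sketch of how the daemon might represent this lifecycle (the state names mirror the diagram; the `Slot` fields and the transition table are illustrative, not the daemon's actual data model):

```python
from dataclasses import dataclass
from enum import Enum, auto


class SlotState(Enum):
    STARTING = auto()
    AVAILABLE = auto()
    ACQUIRED = auto()
    RESETTING = auto()
    DEAD = auto()


# Transitions permitted by the lifecycle described above.
VALID_TRANSITIONS = {
    SlotState.STARTING: {SlotState.AVAILABLE, SlotState.DEAD},
    SlotState.AVAILABLE: {SlotState.ACQUIRED, SlotState.RESETTING, SlotState.DEAD},
    SlotState.ACQUIRED: {SlotState.RESETTING, SlotState.DEAD},
    SlotState.RESETTING: {SlotState.AVAILABLE, SlotState.DEAD},
    SlotState.DEAD: {SlotState.STARTING},  # retried by the health loop
}


@dataclass
class Slot:
    index: int
    state: SlotState = SlotState.STARTING
    lease: str | None = None
    lease_expires_at: float | None = None

    def transition(self, new: SlotState) -> None:
        if new not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state.name} -> {new.name}")
        self.state = new
```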
The full reset sequence:

- Acquire the reset semaphore (max 3 concurrent resets to limit I/O storms)
- Send SIGTERM to the Chrome process
- Wait up to 5 seconds for exit, then SIGKILL
- Delete the working profile directory (`rm -rf`)
- rsync the master profile to the working directory (selective whitelist, ~25 MB)
- Patch the `Preferences` file: set `exit_type: "Normal"`, `exited_cleanly: true`
- Launch Chrome with standard flags
- Wait for CDP ready (15 seconds, exponential backoff)
- Release the semaphore
- On success: transition to AVAILABLE, reset failure counters
Total reset time: ~4--5 seconds.
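A condensed, synchronous sketch of that sequence (error handling and the reset semaphore are omitted; `clone_master_profile`, `patch_preferences`, and `launch_chrome` are assumed helpers sketched further below; the slot fields `proc`, `working_dir`, and `port` are illustrative):

```python
import shutil
import subprocess
import time
import urllib.request


def wait_for_cdp(port: int, timeout: float = 15.0) -> None:
    """Poll /json/version with exponential backoff until CDP answers (or time out)."""
    deadline, delay = time.monotonic() + timeout, 0.2
    while True:
        try:
            urllib.request.urlopen(f"http://127.0.0.1:{port}/json/version", timeout=3)
            return
        except OSError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"CDP on port {port} never came up")
            time.sleep(delay)
            delay = min(delay * 2, 2.0)


def hard_reset(slot) -> None:
    # 1. Kill Chrome: SIGTERM, wait up to 5 s, then SIGKILL.
    slot.proc.terminate()
    try:
        slot.proc.wait(timeout=5)
    except subprocess.TimeoutExpired:
        slot.proc.kill()
        slot.proc.wait()

    # 2. Wipe the working profile and re-clone the whitelisted master files.
    shutil.rmtree(slot.working_dir, ignore_errors=True)
    clone_master_profile(slot.working_dir)   # assumed helper -- see the rsync sketch below

    # 3. Patch Preferences so Chrome skips the crash-restore prompt, then relaunch.
    patch_preferences(slot.working_dir)      # assumed helper: exit_type / exited_cleanly
    slot.proc = launch_chrome(slot)          # assumed helper -- see the launch-flag sketch below
    wait_for_cdp(slot.port)
```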
Each browser slot has a working profile in /tmp (RAM-backed, ephemeral). This is what Chrome actually uses. The master profile lives on the USB persistent partition. It's the golden copy that working profiles are cloned from.
The master profile is small by design. Instead of copying Chrome's entire profile directory (which balloons to 1+ GB with caches), only essential files are synced:
```
Default/Cookies
Default/Cookies-journal
Default/Preferences
Default/Secure Preferences
Default/Login Data
Default/Login Data-journal
Default/Web Data
Default/Web Data-journal
Default/Local Storage/
Default/Session Storage/
Default/IndexedDB/
Default/Service Worker/
Default/Extension State/
Local State
First Run
```
This whitelist captures authentication state, cookies, local storage, and service workers -- everything needed to maintain logged-in sessions -- while excluding caches, history, favicons, crash reports, and other expendable data. The result is a ~25 MB master profile that syncs in under a second.
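A sketch of how that whitelist copy might be driven (the helper name matches the reset sketch above; the master-profile path comes from the persistent-partition layout described later, and feeding the list to rsync on stdin via `--files-from=-` is one of several reasonable mechanics):

```python
import subprocess

# Whitelisted entries, relative to the profile root.
PROFILE_WHITELIST = [
    "Default/Cookies", "Default/Cookies-journal",
    "Default/Preferences", "Default/Secure Preferences",
    "Default/Login Data", "Default/Login Data-journal",
    "Default/Web Data", "Default/Web Data-journal",
    "Default/Local Storage/", "Default/Session Storage/",
    "Default/IndexedDB/", "Default/Service Worker/",
    "Default/Extension State/",
    "Local State", "First Run",
]


def clone_master_profile(dest: str, master: str = "/persist/master-profile") -> None:
    """Copy only the whitelisted entries from the master profile into dest."""
    subprocess.run(
        # --files-from turns off the recursion implied by -a, so pass -r explicitly.
        ["rsync", "-a", "-r", "--files-from=-", f"{master}/", f"{dest}/"],
        input="\n".join(PROFILE_WHITELIST) + "\n",
        text=True,
        check=True,
    )
```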
When a caller wants to persist their browser's state back to the master:
- Send SIGSTOP to the Chrome process (freeze it)
- rsync from the working directory to a temporary directory, using `--link-dest` against the current master (hard-links unchanged files, saves I/O)
- Atomic rename dance: `master` -> `master-old`, `master-new` -> `master`
- Delete `master-old`
- Send SIGCONT to the Chrome process (resume it)
This is power-loss safe. If the machine dies mid-save, the mount script at next boot detects orphaned -new or -old directories and recovers.
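A sketch of the save path under the same assumptions (`PROFILE_WHITELIST` comes from the clone sketch above, `slot.proc` is the Chrome `Popen` handle, and the directory names follow the `master-new` / `master-old` dance just described):

```python
import os
import shutil
import signal
import subprocess

MASTER = "/persist/master-profile"


def save_profile(slot) -> None:
    """Freeze Chrome, snapshot the working profile, then atomically swap the master."""
    new, old = f"{MASTER}-new", f"{MASTER}-old"
    slot.proc.send_signal(signal.SIGSTOP)        # freeze Chrome so the files hold still
    try:
        # Hard-link files that are unchanged relative to the current master.
        subprocess.run(
            ["rsync", "-a", "-r", "--files-from=-", f"--link-dest={MASTER}",
             f"{slot.working_dir}/", f"{new}/"],
            input="\n".join(PROFILE_WHITELIST) + "\n", text=True, check=True,
        )
        os.rename(MASTER, old)                   # master     -> master-old
        os.rename(new, MASTER)                   # master-new -> master
        shutil.rmtree(old)                       # drop the old copy
    finally:
        slot.proc.send_signal(signal.SIGCONT)    # resume Chrome no matter what
```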
The pool starts with a configurable number of core slots (default 8). These are permanent -- they never shrink below a minimum floor (default 2).
When a caller requests a browser and none are available, the daemon checks if it can grow:
- Current pool size < maximum slots (default 32)
- Available RAM >= headroom threshold (default 1024 MB)
If both conditions pass, a new overflow slot is created asynchronously. It gets its own CDP port, its own iptables DNAT rule, its own i3 workspace. Growth is serialized (one at a time via a lock) to prevent stampedes.
Overflow slots shrink automatically. If an overflow browser has been idle (AVAILABLE, unused) for longer than a cooldown period (default 300 seconds), the health loop removes it: kill the process, delete the working profile, remove the iptables rules, free the slot.
Core slots never shrink. The minimum pool size is the absolute floor.
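A sketch of the growth check (reading `MemAvailable` from `/proc/meminfo` is an assumption about how the daemon measures headroom; the defaults mirror the parameter table below):

```python
def available_ram_mb() -> int:
    """Read MemAvailable from /proc/meminfo (reported in kB) and convert to MB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) // 1024
    return 0


def can_grow(pool_size: int, max_slots: int = 32, headroom_mb: int = 1024) -> bool:
    """Both conditions above must hold before an overflow slot is added."""
    return pool_size < max_slots and available_ram_mb() >= headroom_mb
```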
| Parameter | Default | Description |
|---|---|---|
| Pool size | 8 | Core slots launched at boot |
| Minimum pool size | 2 | Absolute floor, never shrink below |
| Maximum slots | 32 | Upper bound for pool growth |
| RAM headroom | 1024 MB | Minimum free RAM to allow growth |
| Idle cooldown | 300 seconds | Idle time before an overflow slot is removed |
| Lease TTL | 1800 seconds | Max time a browser can be checked out |
| Health interval | 5 seconds | Time between health loop iterations |
| Max concurrent resets | 3 | Semaphore limit on parallel resets |
| Base CDP port | 9220 | Port for slot 0; slot N = base + N |
| Chrome disk cache | 50 MB | Per-browser in-memory disk cache |
Configuration is read from a TOML file on the persistent partition. CLI flags can override specific values. Changes require a daemon restart.
A health loop runs every 5 seconds and checks each browser:
- Process alive -- `kill(pid, 0)`. If the process is gone, mark DEAD.
- CDP responsive -- HTTP GET to `http://localhost:{port}/json/version` with a 3-second timeout. Track consecutive failures; 2+ consecutive failures trigger a reset.
- Memory -- Read `/proc/{pid}/status` for VmRSS. If RSS exceeds 3 GB, force a reset. Browsers that leak memory get recycled before they destabilize the system.
- Lease expiry -- If an ACQUIRED browser's lease has expired, force-release and reset it.
- Overflow idle check -- If an overflow browser has been AVAILABLE longer than the idle cooldown, shrink it.
- Dead retry -- DEAD core slots get retried (up to 5 launch attempts). DEAD overflow slots get removed.
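A condensed sketch of one health-loop pass over a single slot (the slot fields and the returned action strings are illustrative; the overflow-idle and dead-retry branches are left out for brevity):

```python
import os
import time
import urllib.request


def rss_mb(pid: int) -> int:
    """VmRSS of a process in MB, read from /proc/{pid}/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) // 1024   # value is reported in kB
    return 0


def check_slot(slot) -> str:
    """Return the action the health loop should take for one slot."""
    try:
        os.kill(slot.pid, 0)                          # signal 0: existence check only
    except ProcessLookupError:
        return "mark-dead"

    try:                                              # CDP responsive?
        urllib.request.urlopen(
            f"http://127.0.0.1:{slot.port}/json/version", timeout=3)
        slot.cdp_failures = 0
    except OSError:
        slot.cdp_failures += 1
        if slot.cdp_failures >= 2:
            return "reset"

    if rss_mb(slot.pid) > 3 * 1024:                   # memory runaway -> recycle
        return "reset"

    if slot.state == "ACQUIRED" and time.time() > slot.lease_expires_at:
        return "force-release"                        # lease expired

    return "ok"
```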
An HTTP server on port 7601:
- `GET /health` -- Returns 200 if the daemon is running. The body includes pool stats, system metrics (RAM, CPU load, disk usage, uptime), Tailscale connectivity status, and NTP sync state.
- `GET /health/ready` -- Returns 200 only if at least one browser is AVAILABLE, and 503 otherwise. Useful for load balancer probes.
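For example, a monitoring probe only needs the readiness endpoint (the Tailscale IP below is a placeholder):

```python
import urllib.error
import urllib.request

# 200 -> at least one browser is AVAILABLE; 503 -> pool is fully busy or not ready.
try:
    urllib.request.urlopen("http://100.65.242.60:7601/health/ready", timeout=3)
    print("pool has capacity")
except urllib.error.HTTPError as exc:
    print("no browsers available" if exc.code == 503 else f"unexpected status {exc.code}")
```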
JSON over ZMQ ROUTER. Multipart frames: [identity] [empty delimiter] [JSON payload].
Request:

```json
{
  "v": 1,
  "action": "acquire-browser",
  "params": {}
}
```

Response:

```json
{
  "v": 1,
  "ok": true,
  "action": "acquire-browser",
  "data": {
    "id": "b-0",
    "lease": "a1b2c3d4...",
    "cdp_endpoint": "ws://100.65.242.60:9220",
    "port": 9220,
    "workspace": 1
  }
}
```

Error response:

```json
{
  "v": 1,
  "ok": false,
  "action": "acquire-browser",
  "error": "no_browsers_available",
  "message": "No browsers available. Pool growth triggered."
}
```

acquire-browser -- Check out the first available browser. Returns its ID, a lease token, and the CDP endpoint URL (with the Tailscale IP baked in). If no browsers are available, returns an error and triggers async pool growth. The caller should retry after a short delay.
release-browser {id, lease} -- Return a browser to the pool. The lease token must match. Dispatches an async hard reset. The browser transitions through RESETTING back to AVAILABLE.
save-profile {id} -- Atomically save the running browser's working profile back to the master on the USB. Uses SIGSTOP/SIGCONT to freeze Chrome during the rsync.
focus-browser {id} -- Switch the i3 workspace to show the specified browser. Useful when watching via VNC.
list-browsers -- Return all browsers with their current state, port, workspace, acquisition time, and lease expiry.
show-browser {id} -- Detailed info for one browser: state, PID, CDP WebSocket URL, creation time, acquisition time.
show-health -- Full daemon health summary: pool stats, system metrics, per-browser details.
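Putting the protocol and actions together, a caller's session might look like the following sketch (it assumes the `pyzmq` package; the Tailscale IP is a placeholder, and a plain REQ socket is used because it supplies the empty delimiter frame the ROUTER expects):

```python
import zmq

APPLIANCE = "100.65.242.60"        # your appliance's Tailscale IP (placeholder)

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)         # REQ adds the empty delimiter frame for the ROUTER
sock.connect(f"tcp://{APPLIANCE}:7600")


def call(action: str, **params) -> dict:
    sock.send_json({"v": 1, "action": action, "params": params})
    reply = sock.recv_json()
    if not reply.get("ok"):
        raise RuntimeError(f"{action} failed: {reply.get('error')}")
    return reply["data"]


# Check out a browser, drive it over CDP, then return it to the pool.
browser = call("acquire-browser")
print("connect your CDP client to", browser["cdp_endpoint"])
# ... do your automation against browser["cdp_endpoint"] here ...
call("release-browser", id=browser["id"], lease=browser["lease"])
```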
ZMQ ROUTER gives sub-millisecond request/response latency (vs 20-50 ms for HTTP). The control plane is high-frequency, low-payload -- perfect for ZMQ. CDP stays on raw WebSocket because that's what Chrome speaks natively. The HTTP health server exists only for standard monitoring tool compatibility.
Every browser gets its own i3 workspace. Browser 0 goes to workspace 1, browser 1 to workspace 2, and so on. Chrome is launched with --class=browser-{workspace} and i3 routes windows to their workspace by class, then sets them fullscreen.
```
# i3 config pattern for each slot
assign [instance="browser-1"] workspace number 1
for_window [instance="browser-1"] fullscreen enable
```
No window borders, no status bar. Each workspace is just a single fullscreen Chrome window at 1920x1080.
VNC (TigerVNC scraping server) connects to the X display and serves it on port 5900. The focus-browser API action switches the active workspace, so you can visually watch any browser through VNC.
Each browser is launched with flags tuned for pool operation:
```
--remote-debugging-port={port}                  # CDP access
--user-data-dir={profile_path}                  # Isolated profile per slot
--class=browser-{workspace}                     # i3 window routing
--disk-cache-size=52428800                      # 50 MB in-memory cache
--start-maximized                               # Fill the 1920x1080 workspace
--no-first-run                                  # Skip first-run dialogs
--disable-default-apps                          # No default app installation
--disable-background-networking                 # Reduce background traffic
--disable-sync                                  # No Google sync
--disable-blink-features=AutomationControlled   # Hide automation markers
```
The --disable-blink-features=AutomationControlled flag prevents sites from detecting navigator.webdriver and other automation markers. Combined with a real GPU, real display, and real window management, browsers are indistinguishable from manually operated ones.
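A sketch of how the daemon might assemble these flags at launch time (the `google-chrome` binary name, the `DISPLAY=:0` environment, and the slot fields are assumptions):

```python
import os
import subprocess


def launch_chrome(slot) -> subprocess.Popen:
    """Launch one pool browser with the flags listed above (binary path assumed)."""
    flags = [
        f"--remote-debugging-port={slot.port}",
        f"--user-data-dir={slot.working_dir}",
        f"--class=browser-{slot.index + 1}",       # i3 routes by window class
        "--disk-cache-size=52428800",
        "--start-maximized",
        "--no-first-run",
        "--disable-default-apps",
        "--disable-background-networking",
        "--disable-sync",
        "--disable-blink-features=AutomationControlled",
    ]
    env = dict(os.environ, DISPLAY=":0")           # headed, on the X server at :0
    return subprocess.Popen(["google-chrome", *flags], env=env)
```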
The USB stick has two regions:
- Live image -- The squashfs filesystem containing the entire OS. Read-only, loaded into RAM at boot via `toram`.
- Persistent partition -- An ext4 filesystem labeled for discovery and validated by a sentinel file. This is the only writable storage.
```
/persist/
  master-profile/      # The golden Chrome profile (rsync whitelist files only)
  config/
    config.toml        # Pool and daemon configuration
    ts-auth-key        # Tailscale auth key (consumed on first boot)
    ts-api-key         # Tailscale API key (optional, for zombie cleanup)
    ts-tailnet         # Tailscale tailnet name (optional)
    ts-hostname        # Hostname to register on Tailscale
    vncpasswd          # VNC password (DES-encrypted binary format)
    authorized_keys    # SSH public keys
    b2-key-id          # Backblaze B2 key ID (for backups)
    b2-app-key         # Backblaze B2 application key
    b2-bucket          # B2 bucket name
    restic-password    # Restic encryption password
  tailscale/           # Bind-mounted to /var/lib/tailscale (persists device identity)
  logs/                # Application logs (excluded from backups)
  .sentinel            # Sentinel file for partition discovery
```
The persistent partition is formatted at flash time and populated with all necessary configuration. After flashing, the USB stick is ready to boot with no further setup.
The system uses systemd with explicit ordering:
Mount persistent storage (Type=oneshot) -- Find the labeled partition, fsck it, mount to /persist. Bind-mount the Tailscale state directory. Create any missing directories. Run crash recovery for interrupted profile saves.
Connect to Tailscale (Type=oneshot) -- Read credentials from /persist/config/. On first boot (no persisted Tailscale state), optionally delete zombie nodes with the same hostname via the Tailscale API, then register with the auth key. On subsequent boots, reconnect using persisted state. Retry with exponential backoff up to 60 seconds.
Install iptables rules (Type=oneshot) -- Get the Tailscale IP. Enable route_localnet on the Tailscale interface. Install DNAT rules forwarding CDP ports from the Tailscale interface to localhost. Install matching FORWARD rules.
```sh
# For each CDP port in the core pool range:
iptables -t nat -A PREROUTING -i tailscale0 -p tcp --dport $PORT \
  -j DNAT --to-destination 127.0.0.1
iptables -A FORWARD -i tailscale0 -o lo -p tcp --dport $PORT -j ACCEPT

# Required for DNAT to localhost:
sysctl -w net.ipv4.conf.tailscale0.route_localnet=1
```

Start X server (Type=simple) -- Xorg on :0 with the Intel i915 driver (falling back to the dummy driver if no GPU). No TCP listening; access control is disabled for the local user.
Start i3 (Type=simple) -- Window manager with pre-configured workspace rules for browser routing. No decorations, no bar.
Start VNC (Type=simple) -- TigerVNC scraping server bound to the Tailscale IP on port 5900.
Start the daemon (Type=simple) -- The control plane daemon. Launches the browser pool, opens the ZMQ socket, starts the health loop. Runs as a dedicated non-root user with CAP_NET_ADMIN for dynamic iptables rules.
Start SSH (Type=oneshot) -- Copy authorized keys, write sshd configuration (listen on Tailscale IP, key-only auth), start the SSH service.
Start backup timer -- Scheduled backup every 4 hours (15-minute delay after boot).
Chrome auto-update timer -- Periodic check for Chrome updates.
Restic backs up the entire persistent partition (excluding logs) to Backblaze B2 every 4 hours.
Repository path: b2:<bucket>:<hostname>. Each appliance gets its own restic repository keyed by hostname.
Everything under /persist/ except /persist/logs/. An exclude file catches Chrome cache junk that might leak into the master profile directory:
```
Cache/
Code Cache/
GPUCache/
DawnCache/
ShaderCache/
GrShaderCache/
blob_storage/
*.tmp
*.log
Crashpad/
crash_reports/
```
- Hourly snapshots for 24 hours
- Daily snapshots for 7 days
- Weekly snapshots for 4 weeks
- Older snapshots pruned automatically
On first backup, the restic repository is initialized automatically. No manual setup required beyond providing B2 credentials at flash time.
Four values are needed, all written to the persistent partition at flash time:
- B2 key ID
- B2 application key
- B2 bucket name
- Restic encryption password (separate from B2 credentials -- defense in depth)
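A sketch of what one backup run could look like, wired together from the credential files on the persistent partition (the exclude-file name is illustrative, and the repository is assumed to already be initialized):

```python
import os
import socket
import subprocess

CONF = "/persist/config"


def read(name: str) -> str:
    with open(f"{CONF}/{name}") as f:
        return f.read().strip()


env = dict(
    os.environ,
    B2_ACCOUNT_ID=read("b2-key-id"),
    B2_ACCOUNT_KEY=read("b2-app-key"),
    RESTIC_PASSWORD=read("restic-password"),
)
repo = f"b2:{read('b2-bucket')}:{socket.gethostname()}"

# Back up /persist minus logs and the Chrome-cache exclude list, then apply retention.
subprocess.run(["restic", "-r", repo, "backup", "/persist",
                "--exclude", "/persist/logs",
                "--exclude-file", f"{CONF}/backup-excludes.txt"],   # filename illustrative
               env=env, check=True)
subprocess.run(["restic", "-r", repo, "forget", "--prune",
                "--keep-hourly", "24", "--keep-daily", "7", "--keep-weekly", "4"],
               env=env, check=True)
```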
A flash tool runs on the development machine (macOS or Linux). It:
- Detects removable USB devices -- Filters `diskutil` (macOS) or `lsblk` (Linux) output for external, removable media. Rejects internal disks, root filesystem devices, and raw disk paths.
- Plans the partition layout -- Whole-disk mode writes the ISO and creates a persistence partition with the remaining space. Partial-disk mode can coexist with existing partitions.
- Writes the live image -- `dd` with progress monitoring. On macOS, uses small block sizes (`bs=512k`) and `conv=fsync` to prevent kernel buffer bloat on USB controllers. Monitors for stalls (60-second timeout).
- Creates the persistence partition -- GPT partition, ext4 formatted, labeled for discovery.
- Populates configuration -- Mounts the new partition and writes all config files: Tailscale credentials, VNC password (DES-encrypted), SSH public keys, B2 backup credentials, and the daemon config TOML.
- Unmounts and syncs -- Ensures all data is flushed to the USB before the user removes it.
| Input | Required | Description |
|---|---|---|
| Tailscale auth key | Yes | tskey-auth-... for device registration |
| Tailscale API key | No | For zombie node cleanup on re-flash |
| Tailscale tailnet | No | Your tailnet name (or - for personal) |
| Hostname | No | Tailscale device name (default: browserfarm) |
| VNC password | No | 6--8 character password for VNC access |
| SSH public key | No | Path to ~/.ssh/id_*.pub |
| B2 key ID | Yes | Backblaze B2 credentials for backup |
| B2 app key | Yes | |
| B2 bucket | Yes | |
| Restic password | Yes | Encryption password for backup repository |
After flashing, the USB is ready. Plug it in, boot, wait ~30 seconds, and the browsers are available on your tailnet.
The live image is built inside a container for reproducibility. The build:
- Packages a wheel of the daemon as a build artifact
- Runs a Debian container (bookworm, amd64) with `live-build` installed
- Executes `lb config` with Debian Trixie as the target distribution, a squashfs filesystem, and `boot=live toram` boot parameters
- A chroot hook installs Chrome (from Google's APT repository), Tailscale (from Tailscale's APT repository), and the daemon wheel
- Creates a dedicated non-root user with access to video, audio, and sudo groups
- Enables all systemd services
- Outputs a hybrid ISO suitable for writing to USB
When building on ARM Macs (Apple Silicon), the container runs amd64 via Rosetta. A chroot wrapper handles the emulation boundary: mounting /proc in the target, binding binfmt interpreters if present, and cleaning up on exit.
The Dockerfile uses explicit COPY commands for each directory (auto/, config/, etc.) rather than copying the entire build context. This avoids sending large build artifacts (like previously built ISOs) to the container runtime.
- Display: xorg, i3, tigervnc-scraping-server
- Runtime: python3, rsync, jq, curl
- Networking: iproute2, iptables, net-tools
- Services: openssh-server, systemd-timesyncd, systemd-resolved, restic
- Tools: sudo, tmux, htop
- Filesystem: 4 GB tmpfs at `/dev/shm` (Chrome's shared memory requirement)
The flash tool and ISO builder are CLI commands run on the development machine. The daemon also exposes a CLI for interacting with a running appliance.
build-iso -- Build the live image. Detects Docker or Apple Container as the runtime. Accepts --output directory and --no-cache to force a clean build.
flash-usb -- Flash a USB stick. Interactive device selection, partition planning, configuration collection. All the inputs from the table above.
These connect to a running appliance over the network (default: auto-detect Tailscale IP).
acquire-browser -- Check out a browser. Returns ID, lease token, CDP endpoint.
release-browser <id> --lease <token> -- Return a browser with hard reset.
save-profile <id> -- Save the browser's profile to the master.
focus-browser <id> -- Switch the VNC-visible workspace to this browser.
list-browsers -- Show all browsers and their states.
show-browser <id> -- Detailed info for one browser.
show-health -- Daemon health summary.
status -- Compact one-line pool overview: uptime, RAM, load, disk, pool counts, per-browser table. Designed for watch or status bars.
- Slot numbers are monotonically increasing and never reused within a daemon session
- Slot N maps to:
  - Browser ID: `b-{N}`
  - CDP port: `base_port + N` (default: `9220 + N`)
  - i3 workspace: `N + 1`
  - Working profile: `/tmp/pool/b-{N}/`
  - Chrome window class: `browser-{N+1}`
Core slots (0 through pool_size - 1) are permanent. Overflow slots (pool_size and above) are ephemeral.
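The mapping is purely arithmetic, so it can be derived rather than stored; for example (illustrative helper):

```python
def slot_layout(n: int, base_port: int = 9220) -> dict:
    """Derived identifiers for slot N, per the mapping above."""
    return {
        "id": f"b-{n}",
        "cdp_port": base_port + n,
        "workspace": n + 1,
        "working_profile": f"/tmp/pool/b-{n}/",
        "window_class": f"browser-{n + 1}",
    }
```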
- Network isolation -- Tailscale only. No public interfaces. If you can reach the IP, you're on the tailnet, and Tailscale ACLs are the authorization layer.
- SSH -- Key-only authentication, bound to the Tailscale interface. No password login.
- VNC -- Password-protected, bound to the Tailscale interface.
- Lease tokens -- 32-byte cryptographic random strings. Required for releasing browsers. Prevents accidental cross-session interference.
- Process isolation -- Each browser is its own Chrome process with its own profile directory. Kill one, the others are unaffected.
- Capabilities -- The daemon runs as a non-root user with only `CAP_NET_ADMIN` (for dynamic iptables rules). No root access to Chrome processes.
- Backup encryption -- Restic encrypts backups client-side with a password independent of the B2 credentials.
- Anti-detection -- The `AutomationControlled` Blink feature is disabled. Real GPU, real display, real window manager. No headless-mode artifacts.
| Metric | Value |
|---|---|
| Boot to browsers ready | ~30 seconds |
| Browser launch (cold) | ~2 seconds to CDP ready |
| Browser reset (full cycle) | ~4--5 seconds |
| Profile save (atomic) | 1--2 seconds |
| Profile size (master) | ~25 MB |
| RAM per browser (idle) | 800--1200 MB |
| RAM per browser (heavy use) | up to 3 GB (reset threshold) |
| OS + services overhead | 2--3 GB |
| ZMQ request/response latency | sub-millisecond |
| Health check interval | 5 seconds |
| CDP health check timeout | 3 seconds |
For incremental development, the system can be built in layers:
Build a minimal Debian live image that boots from USB into RAM. No persistence, no browsers. Just a working Linux system with X11 and i3 that you can VNC into. Verify it boots on target hardware with GPU acceleration (check glxinfo).
Add the persistence partition. Mount it at boot, store Tailscale state so the machine reconnects automatically. Verify data survives reboots.
Connect to the tailnet at boot. Install iptables DNAT rules. Verify SSH and VNC access over Tailscale. Test that a manually started Chrome browser is reachable via CDP over the tailnet.
Launch one Chrome browser at boot with CDP enabled. Verify you can connect to it from another machine on the tailnet, navigate pages, and interact via CDP.
Implement the pool state machine. Launch N browsers at boot, each on its own port and workspace. Implement acquire/release with lease tokens. Implement hard reset. Verify the full lifecycle works.
Add the master profile and rsync-based reset. Implement save-profile with atomic writes. Verify profiles persist across resets and reboots.
Add the health loop. Implement dynamic pool growth and overflow shrink. Add the HTTP health endpoint. Stress test with memory pressure and process kills.
Build the ZMQ daemon and CLI. Wire up all actions. Add the status command for monitoring.
Build the ISO builder (containerized live-build). Build the USB flash tool with interactive device selection and config population.
Add restic backup to B2 on a systemd timer. Verify backup and restore.
```toml
[pool]
size = 8
base_port = 9220
lease_ttl_seconds = 1800
health_check_interval_seconds = 5
max_concurrent_resets = 3
min_pool_size = 2
ram_headroom_mb = 1024
idle_cooldown_seconds = 300
max_slots = 32

[zmq]
port = 7600

[http]
port = 7601

[chrome]
disk_cache_size = 52428800

[paths]
persist_dir = "/persist"
working_dir = "/tmp/pool"
```

Values are resolved in order of precedence:

- CLI flags (highest)
- TOML config file on the persistent partition
- Compiled defaults (lowest)
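A sketch of how that precedence could be applied when the daemon starts (only the `[pool]` section is shown; `tomllib` assumes Python 3.11+, and the flag names are illustrative):

```python
import argparse
import tomllib

DEFAULTS = {"size": 8, "base_port": 9220, "lease_ttl_seconds": 1800}   # compiled defaults


def load_config(path: str = "/persist/config/config.toml") -> dict:
    cfg = dict(DEFAULTS)                                  # lowest precedence
    with open(path, "rb") as f:
        cfg.update(tomllib.load(f).get("pool", {}))       # TOML overrides defaults

    parser = argparse.ArgumentParser()
    parser.add_argument("--size", type=int)
    parser.add_argument("--base-port", dest="base_port", type=int)
    args = parser.parse_args()
    cfg.update({k: v for k, v in vars(args).items() if v is not None})   # CLI wins
    return cfg
```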
These are the only files copied between master and working profiles:
```
Default/Cookies
Default/Cookies-journal
Default/Preferences
Default/Secure Preferences
Default/Login Data
Default/Login Data-journal
Default/Web Data
Default/Web Data-journal
Default/Local Storage/
Default/Session Storage/
Default/IndexedDB/
Default/Service Worker/
Default/Extension State/
Local State
First Run
```
Everything else Chrome generates (caches, history, favicons, crash reports, shader caches, GPU caches, blob storage, etc.) is intentionally excluded. This keeps the master profile small, fast to sync, and focused on the data that actually matters for maintaining authenticated sessions.
Safety net for Chrome junk that might leak into the persistent partition:
```
Cache/
Code Cache/
GPUCache/
DawnCache/
ShaderCache/
GrShaderCache/
blob_storage/
BudgetDatabase/
*.tmp
*.log
Crashpad/
crash_reports/
Crowd Deny/
Network/
Safe Browsing*/
AutofillStrikeDatabase/
History
History-journal
Bookmarks
Favicons
Favicons-journal
Top Sites
Top Sites-journal
Visited Links
LOCK
TransportSecurity
DIPS
```
The mount script runs these checks at boot before the daemon starts:
- If `master-profile-new` exists and `master-profile` also exists: delete `master-profile-new` (the copy was interrupted; the master is intact).
- If `master-profile-old` exists but `master-profile` does not: rename `master-profile-old` to `master-profile` (the rename completed but cleanup didn't).
- If both `master-profile` and `master-profile-old` exist: delete `master-profile-old` (the save completed; only cleanup remains).
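In code form, the recovery pass is three ordered checks (a sketch; paths follow the layout above):

```python
import os
import shutil

MASTER = "/persist/master-profile"
NEW, OLD = f"{MASTER}-new", f"{MASTER}-old"

# Recover from a save interrupted by power loss, per the rules above.
if os.path.isdir(NEW) and os.path.isdir(MASTER):
    shutil.rmtree(NEW)                 # copy was interrupted; master is intact
if os.path.isdir(OLD) and not os.path.isdir(MASTER):
    os.rename(OLD, MASTER)             # rename finished, cleanup didn't
if os.path.isdir(MASTER) and os.path.isdir(OLD):
    shutil.rmtree(OLD)                 # save completed; only cleanup remained
```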