DP-1 Feed — Deployment & Migration Notes

Scope: quick reference describing the current Cloudflare-based deployment and a practical path to self‑hosting.

1) Current Deployment Structure (Cloudflare)

Runtime & App

  • Cloudflare Workers running the Hono-based API (TypeScript)
  • Middleware stack: error handling, request logging, CORS, content-type validation, bearer‑token auth
  • Routes: /playlists, /playlist-groups, /playlist-items, /health
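
A minimal sketch of how the middleware stack and routes above might be wired in Hono; middleware order, handler bodies, and the decision to leave /health unauthenticated are illustrative assumptions, not the project's actual code:

import { Hono } from 'hono';
import { cors } from 'hono/cors';
import { logger } from 'hono/logger';
import { bearerAuth } from 'hono/bearer-auth';

type Bindings = { API_SECRET: string }; // narrowed to what this sketch needs

const app = new Hono<{ Bindings: Bindings }>();

// Middleware stack: error handling, request logging, CORS, bearer-token auth
// (content-type validation omitted here for brevity).
app.onError((err, c) => c.json({ error: 'internal error' }, 500));
app.use('*', logger());
app.use('*', cors());

// /health registered before the auth middleware so it stays unauthenticated (an assumption).
app.get('/health', (c) => c.json({ status: 'ok' }));
app.use('*', (c, next) => bearerAuth({ token: c.env.API_SECRET })(c, next));

// Placeholder handlers; the real ones implement the /playlists, /playlist-groups, /playlist-items APIs.
app.get('/playlists', (c) => c.json({ items: [] }));
app.get('/playlist-groups', (c) => c.json({ items: [] }));
app.get('/playlist-items', (c) => c.json({ items: [] }));

export default app;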

Data & Async

  • KV namespaces (bindings):

    • DP1_PLAYLISTS → playlist:{id}
    • DP1_PLAYLIST_GROUPS → playlist-group:{id}
    • (Items are read through dedicated endpoints but stored within playlists)
  • Queue: DP1_WRITE_QUEUE handles async write operations (create/update); batch size and timeout are configurable, and the platform retries failed deliveries automatically
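
A hedged sketch of the respond-then-persist write path implied above (types from @cloudflare/workers-types; function and type names are illustrative):

import type { KVNamespace, Queue } from '@cloudflare/workers-types';

type WriteOp = { kind: 'playlist' | 'playlist-group'; id: string; body: string };

// Narrowed environment for this sketch; the real Worker env carries all bindings (see Secrets & Config below).
type Env = {
  DP1_PLAYLISTS: KVNamespace;
  DP1_PLAYLIST_GROUPS: KVNamespace;
  DP1_WRITE_QUEUE: Queue<WriteOp>;
};

// Producer side (inside a write handler): enqueue the operation, then return the signed resource immediately.
async function enqueueWrite(env: Env, op: WriteOp): Promise<void> {
  await env.DP1_WRITE_QUEUE.send(op);
}

// Consumer side: persist under the documented key schema; a redelivered message overwrites the same key.
async function persistWrite(env: Env, op: WriteOp): Promise<void> {
  const ns = op.kind === 'playlist' ? env.DP1_PLAYLISTS : env.DP1_PLAYLIST_GROUPS;
  await ns.put(`${op.kind}:${op.id}`, op.body);
}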

Secrets & Config

  • Secrets: API_SECRET (bearer token), ED25519_PRIVATE_KEY (hex)
  • Environment variables: ENVIRONMENT, IPFS_GATEWAY_URL, ARWEAVE_GATEWAY_URL
  • Deployment/config via wrangler.toml (KV & queue bindings, routes, envs)
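
Taken together, the bindings, secrets, and vars above imply a Worker environment type roughly like this (a sketch; the repo's actual type may differ):

import type { KVNamespace, Queue } from '@cloudflare/workers-types';

export interface Env {
  // KV namespace bindings ([[kv_namespaces]] in wrangler.toml)
  DP1_PLAYLISTS: KVNamespace;
  DP1_PLAYLIST_GROUPS: KVNamespace;
  // Queue producer binding ([[queues.producers]])
  DP1_WRITE_QUEUE: Queue;
  // Secrets (wrangler secret put ...)
  API_SECRET: string;
  ED25519_PRIVATE_KEY: string;
  // Plain environment variables ([vars])
  ENVIRONMENT: string;
  IPFS_GATEWAY_URL: string;
  ARWEAVE_GATEWAY_URL: string;
}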

CI/Dev Tooling

  • GitHub Actions for tests/lint/benchmarks
  • K6 scenarios (light/normal/stress/spike/soak) with P95 targets
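
For illustration, a k6 scenario with a P95 threshold might be declared like this (the actual scenarios and targets live in the repo's K6 scripts; the 300 ms figure and endpoint are assumptions):

import http from 'k6/http';

export const options = {
  scenarios: {
    normal: { executor: 'constant-vus', vus: 20, duration: '2m' }, // one of the light/normal/stress/spike/soak profiles
  },
  thresholds: {
    http_req_duration: ['p(95)<300'], // fail the run if P95 latency exceeds the target (example value)
  },
};

export default function () {
  http.get('http://localhost:8080/health'); // hypothetical local endpoint
}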

2) Constraints Tied to Cloudflare

  • Platform bindings: code depends on env bindings provided by Workers (KVNamespace, queues.producer/consumer).
  • Async semantics: write endpoints return the signed resource immediately while persistence happens in the background via Cloudflare Queues (at‑least‑once, automatic retries, batch handling).
  • Storage model: Cloudflare KV (eventual consistency, global replication) and associated key schema; Cloudflare-specific CLI/deploy (wrangler).
  • Secrets: managed via wrangler secret and injected at runtime.
  • Routing: production hostnames are configured as Worker routes in wrangler.toml.

These assumptions leak into the code via:

  • Direct usage of env.DP1_PLAYLISTS/… and env.DP1_WRITE_QUEUE
  • Queue processor located in queue/processor.ts with CF batch handler shape
  • CF‑specific deployment & local-dev commands
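
For context, the Cloudflare batch handler shape referenced above looks roughly like this (a sketch; the real logic lives in queue/processor.ts):

import type { MessageBatch } from '@cloudflare/workers-types';

export default {
  async queue(batch: MessageBatch<unknown>, env: unknown): Promise<void> {
    for (const msg of batch.messages) {
      try {
        // ...validate msg.body and persist it via the KV bindings on env...
        msg.ack();   // acknowledge so the platform does not redeliver
      } catch {
        msg.retry(); // let the platform's automatic retry pick it up again
      }
    }
  },
};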

3) Migration Path to Self‑Hosting

There are two viable tracks; you can adopt one or mix them during transition.

Track A — Stay on the Workers model using workerd (Cloudflare’s open-source runtime)

Goal: keep the Fetch API/Hono app the same, replace CF’s managed services with self‑hosted equivalents.

Plan

  1. Containerize runtime

    • Ship a Docker image running workerd with your service config (mount the built worker bundle and a workerd config file mapping service → module and environment).
  2. Abstract platform bindings

    • Introduce interfaces in code:

      • Storage (get/put/list/delete; atomic update helper)
      • WriteQueue (enqueue, ack, dead-letter)
    • Provide adapters:

      • CloudflareKVStorage & CloudflareQueue (current)
      • RedisStorage (or Valkey/Dragonfly), or DynamoDB/TiKV/etcd as alternatives (see the adapter sketch after this plan)
      • NATSJetStreamQueue (or RabbitMQ/Kafka) for writes
  3. Replace bindings in workerd

    • Map env.* to your own shim objects that call the adapters above.
  4. Background processing

    • Run a worker/processor service (Node.js or workerd service) that consumes from the queue and persists to the chosen KV/DB.
  5. Secrets/config

    • Use standard env vars or mounted secrets (Docker/Kubernetes), keep variable names intact to minimize code churn.
  6. Observability

    • Replace CF logs/metrics with your stack (e.g., OpenTelemetry → OTLP → Prometheus/Grafana; structured logs to stdout, log shipper).
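
A hedged sketch of the RedisStorage adapter from step 2, assuming ioredis as the client (not the project's actual code):

import Redis from 'ioredis';

export interface Storage {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
  list(prefix: string, cursor?: string, limit?: number): Promise<{ keys: string[]; cursor?: string }>;
  delete(key: string): Promise<void>;
}

export class RedisStorage implements Storage {
  constructor(private redis: Redis) {}

  async get(key: string): Promise<string | null> {
    return this.redis.get(key);
  }

  async put(key: string, value: string): Promise<void> {
    await this.redis.set(key, value);
  }

  async list(prefix: string, cursor = '0', limit = 100): Promise<{ keys: string[]; cursor?: string }> {
    // SCAN preserves the existing key schema (playlist:{id}, playlist-group:{id}).
    const [next, keys] = await this.redis.scan(cursor, 'MATCH', `${prefix}*`, 'COUNT', limit);
    return { keys, cursor: next === '0' ? undefined : next };
  }

  async delete(key: string): Promise<void> {
    await this.redis.del(key);
  }
}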

Deliverables

  • Dockerfile for API (workerd) and one for processor
  • workerd.capnp or JSON config defining services, bindings, secrets
  • Adapter implementations (src/adapters/*), plus light dependency injection in index.ts
  • docker-compose.yml for local self‑host (API + queue + KV)

Track B — Run as a standard Node server (Hono/Node+HTTP)

Goal: shed the Workers runtime entirely.

Plan

  • Swap Hono’s Cloudflare adapter for the Node runtime; expose the same routes (see the sketch after this list).
  • Reuse the adapters from Track A for storage/queue (Redis/NATS/etc.).
  • Package as a single API container (Node 22) + a separate queue consumer.
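
A minimal sketch of the Track B entry point, assuming @hono/node-server (the port and wiring are illustrative):

import { serve } from '@hono/node-server';
import { Hono } from 'hono';

const app = new Hono();
// ...register the same /playlists, /playlist-groups, /playlist-items, /health routes and middleware...

serve({ fetch: app.fetch, port: 8080 }, (info) => {
  console.log(`DP-1 Feed API listening on :${info.port}`);
});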

Storage & Queue Options (self-hosted)

  • KV / primary store: Redis/Valkey/Dragonfly (simple KV, fast), DynamoDB/TiKV/Badger (if you need persistence models beyond simple KV). Keep keys: playlist:{id}, playlist-group:{id}; consider secondary indices only if you need more complex queries.
  • Queue: NATS JetStream (light, durable, simple ops), RabbitMQ (routing/ack patterns), or Kafka (high‑throughput).
  • Semantics: retain at‑least‑once. Idempotent writes via deterministic keys and upserts.
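
A hedged sketch of a WriteQueue adapter on NATS JetStream (the nats npm client assumed; the subject name and stream setup are illustrative):

import { StringCodec, type NatsConnection } from 'nats';

type WriteOp = { kind: 'playlist' | 'playlist-group'; id: string; body: string };

export interface WriteQueue {
  enqueue(msg: WriteOp): Promise<void>;
}

const sc = StringCodec();

export class NatsJetStreamQueue implements WriteQueue {
  constructor(private nc: NatsConnection, private subject = 'dp1.writes') {}

  async enqueue(msg: WriteOp): Promise<void> {
    // Requires a JetStream stream configured to capture this subject (durable, at-least-once delivery).
    await this.nc.jetstream().publish(this.subject, sc.encode(JSON.stringify(msg)));
  }
}

// Usage at API startup (service name matches the compose example below):
// const queue = new NatsJetStreamQueue(await connect({ servers: 'nats://queue:4222' })); // connect() from 'nats'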

Code Changes Summary

  • Introduce src/ports (interfaces) and src/adapters (impls). Example:

    export interface Storage {
      get(key: string): Promise<string | null>;
      put(key: string, value: string): Promise<void>;
      list(prefix: string, cursor?: string, limit?: number): Promise<{ keys: string[]; cursor?: string }>;
    }

    export interface WriteQueue {
      enqueue(msg: WriteOp): Promise<void>;
    }
  • Replace direct env.DP1_* references with injected Storage/WriteQueue.

  • Keep the HTTP schema, Zod validation, signing, and “respond‑then‑persist” behavior unchanged.
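
One way the injection could look, using Hono context variables instead of direct env.DP1_* access (a sketch; the variable names are illustrative):

import { Hono } from 'hono';

interface Storage {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}
interface WriteQueue {
  enqueue(msg: unknown): Promise<void>;
}

type Variables = { storage: Storage; writeQueue: WriteQueue };

export function createApp(storage: Storage, writeQueue: WriteQueue) {
  const app = new Hono<{ Variables: Variables }>();

  // Inject the adapters once; handlers depend on the ports, not on Cloudflare bindings.
  app.use('*', async (c, next) => {
    c.set('storage', storage);
    c.set('writeQueue', writeQueue);
    await next();
  });

  app.get('/playlists/:id', async (c) => {
    const value = await c.get('storage').get(`playlist:${c.req.param('id')}`);
    return value ? c.json(JSON.parse(value)) : c.notFound();
  });

  return app;
}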

Deployment Blueprint (example, Docker Compose)

version: "3.9"
services:
  api:
    image: dp1-feed-api:latest # workerd-based or Node+Hono
    env_file: .env
    ports: ["8080:8080"]
    depends_on: [queue, kv]
  processor:
    image: dp1-feed-processor:latest
    env_file: .env
    depends_on: [queue, kv]
  queue:
    image: nats:latest # or rabbitmq/kafka
  kv:
    image: valkey/valkey:latest # or redis/dragonflydb/bitnami/redis

Cutover Strategy

  1. Dual-run: Keep CF as primary; mirror writes to self‑hosted storage via the queue.
  2. Read shadowing: Add a read‑path feature flag to compare responses (sketched after this list).
  3. Flip reads to self‑host; monitor latency and correctness.
  4. Flip writes to self‑host; keep CF queue as fallback for a window.
  5. Decommission CF resources once SLOs and error budgets are stable.
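
A hedged sketch of step 2 (read shadowing): serve from the current primary and compare the self‑hosted candidate's response in the background; both URLs and the flag name are placeholders:

async function shadowRead(path: string, flags: { shadowReads: boolean }): Promise<Response> {
  const primary = await fetch(`https://feed.example-cloudflare.dev${path}`); // current CF deployment (placeholder URL)
  if (flags.shadowReads) {
    const primaryBody = await primary.clone().text();
    // Fire-and-forget comparison against the self-hosted candidate; mismatches are only logged.
    fetch(`https://feed.selfhosted.example${path}`) // placeholder URL
      .then(async (candidate) => {
        if ((await candidate.text()) !== primaryBody) {
          console.warn(`shadow mismatch on ${path}`);
        }
      })
      .catch((err) => console.warn(`shadow read failed on ${path}: ${err}`));
  }
  return primary;
}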

Result: You preserve the API contract, cryptographic signing, and async UX while swapping Cloudflare‑managed pieces for portable, self‑hosted components with minimal code churn.
