
@harold
Created April 4, 2026 19:31
Prospect API reference — paste into your coding assistant for context

Prospect API Reference

Prospect is a hosted Bayesian optimization service. You define a parameter space, and Prospect suggests the next set of parameters to try based on the results you've reported so far. It works for any domain — hyperparameter tuning, A/B testing, manufacturing, simulation, formulation, etc.

Base URL: https://prospectopt.com

Authentication

All endpoints require a Bearer token:

Authorization: Bearer pro_YOUR_API_KEY
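In Python, for example, you can build this header once and attach it to every request (the key value below is a placeholder, not a real key):

```python
# Build the Authorization header once and reuse it on every request.
# Replace the placeholder with your actual key (keys start with "pro_").
API_KEY = "pro_YOUR_API_KEY"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
```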

Workflow

The optimization loop is:

  1. Create an experiment — define axes (parameters) and objective
  2. Get a suggestion — Prospect picks the most promising point to try
  3. Post an observation — report what happened when you tried it
  4. Repeat steps 2–3 until satisfied

You can also post observations without getting suggestions first (e.g. to warm-start with historical data).

Endpoints

POST /v1/experiments

Create a new experiment.

Request body:

{
  "name": "optional human-readable name",
  "objective": "minimize" or "maximize",
  "axes": [
    {"name": "param1", "type": "continuous", "min": 0.0, "max": 1.0},
    {"name": "param2", "type": "integer", "min": 1, "max": 100},
    {"name": "param3", "type": "categorical", "values": ["a", "b", "c"]}
  ]
}

Axis types:

  • continuous — float in [min, max], optional step
  • integer — int in [min, max], optional step
  • categorical — one of the listed values (at least 2)
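As a concrete sketch, axes for a hypothetical training-run tuning experiment might look like this (the parameter names are illustrative, not part of the API):

```python
# Hypothetical parameter space for tuning a training run.
# One axis of each type; the names are illustrative only.
axes = [
    {"name": "learning_rate", "type": "continuous", "min": 1e-5, "max": 1e-1},
    {"name": "batch_size", "type": "integer", "min": 8, "max": 256, "step": 8},
    {"name": "optimizer", "type": "categorical", "values": ["adam", "sgd"]},
]
```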

Response (201):

{
  "id": "experiment-uuid",
  "name": "...",
  "objective": "minimize",
  "axes": [...],
  "observation_count": 0
}

GET /v1/experiments

List all experiments for your API key.

Response (200):

{
  "experiments": [
    {"id": "...", "name": "...", "objective": "...", "axes": [...], "observation_count": 42}
  ]
}

GET /v1/experiments/:id

Get a single experiment by ID.

GET /v1/experiments/:id/suggestion

Get the next suggested parameters to try.

Response (200):

{
  "parameters": {
    "param1": 0.42,
    "param2": 17,
    "param3": "b"
  }
}

The first suggestion (before any observations) explores the space. After observations, suggestions are informed by a Bayesian model that balances exploration and exploitation.
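Suggestions should always respect the axis definitions; if you want a local sanity check before spending an expensive trial on one, a minimal validator (assuming the `axes` list from the create request) could look like:

```python
def in_bounds(params, axes):
    """Check that a suggested parameter dict respects each axis definition."""
    for ax in axes:
        v = params[ax["name"]]
        if ax["type"] == "categorical":
            if v not in ax["values"]:
                return False
        elif not (ax["min"] <= v <= ax["max"]):
            return False
    return True
```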

POST /v1/experiments/:id/observations

Record one or more observations.

Request body:

{
  "observations": [
    {
      "parameters": {"param1": 0.42, "param2": 17, "param3": "b"},
      "value": 3.7
    }
  ]
}

Each observation has:

  • parameters — must match the experiment's axis names exactly
  • value — the measured outcome (number). Omit for failures.
  • failed (optional, boolean) — mark a failed trial
  • reason (optional, string) — why it failed
  • metadata (optional, object) — arbitrary metadata

You can post multiple observations at once (e.g. to warm-start with historical data).
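For example, to warm-start from historical runs, you can assemble the batch body from (parameters, value) pairs before posting it (the helper name is ours, not part of the API):

```python
def warm_start_payload(history):
    """Build a batch observations body from (parameters, value) pairs."""
    return {
        "observations": [
            {"parameters": params, "value": value} for params, value in history
        ]
    }
```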

Response (201):

{
  "observation_count": 43
}

GET /v1/experiments/:id/observations

Get all observations for an experiment.

Response (200):

{
  "observations": [
    {"parameters": {...}, "value": 3.7},
    {"parameters": {...}, "value": 2.1}
  ]
}
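This reference doesn't list a "best so far" endpoint, so one way to recover the best result client-side from this response (skipping failed trials, which carry no value) is:

```python
def best_observation(observations, objective="minimize"):
    """Return the best successful observation for the given objective."""
    succeeded = [o for o in observations if not o.get("failed")]
    if not succeeded:
        return None
    pick = min if objective == "minimize" else max
    return pick(succeeded, key=lambda o: o["value"])
```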

Error responses

All errors return:

{
  "error": {
    "message": "description of what went wrong",
    "code": "error_code"
  }
}

Common codes: missing_auth, invalid_api_key, experiment_not_found, validation_error, sandbox_limit.
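Since every error shares this envelope, a small client-side check can surface the code and message uniformly (the helper and choice of exception are ours, not part of the API):

```python
def check_error(payload):
    """Raise if a Prospect response body carries the error envelope."""
    if "error" in payload:
        err = payload["error"]
        raise RuntimeError(f"{err['code']}: {err['message']}")
    return payload
```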

Integration pattern

A typical optimization script:

import requests

API_KEY = "pro_..."
BASE = "https://prospectopt.com"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
N_ITERATIONS = 20  # number of trials to run

# 1. Create experiment
resp = requests.post(f"{BASE}/v1/experiments", headers=HEADERS, json={
    "name": "my experiment",
    "objective": "minimize",
    "axes": [
        # ... define your parameter space here
    ],
})
resp.raise_for_status()
exp_id = resp.json()["id"]

# 2. Optimization loop
for i in range(N_ITERATIONS):
    # Get suggestion
    suggestion = requests.get(
        f"{BASE}/v1/experiments/{exp_id}/suggestion", headers=HEADERS
    ).json()["parameters"]

    # Evaluate (your code here)
    result = evaluate(suggestion)

    # Report observation
    requests.post(
        f"{BASE}/v1/experiments/{exp_id}/observations",
        headers=HEADERS,
        json={"observations": [{"parameters": suggestion, "value": result}]},
    )

Replace evaluate() with whatever runs your trial — a training run, a simulation, a physical experiment, an A/B test measurement, etc.
