Prospect is a hosted Bayesian optimization service. You define a parameter space, and Prospect suggests the next set of parameters to try based on the results you've reported so far. It works for any domain — hyperparameter tuning, A/B testing, manufacturing, simulation, formulation, etc.
Base URL: https://prospectopt.com
All endpoints require a Bearer token:

```
Authorization: Bearer pro_YOUR_API_KEY
```
The optimization loop is:
- Create an experiment — define axes (parameters) and objective
- Get a suggestion — Prospect picks the most promising point to try
- Post an observation — report what happened when you tried it
- Repeat steps 2–3 until satisfied
You can also post observations without getting suggestions first (e.g. to warm-start with historical data).
Create a new experiment (`POST /v1/experiments`).
Request body:

```json
{
  "name": "optional human-readable name",
  "objective": "minimize" or "maximize",
  "axes": [
    {"name": "param1", "type": "continuous", "min": 0.0, "max": 1.0},
    {"name": "param2", "type": "integer", "min": 1, "max": 100},
    {"name": "param3", "type": "categorical", "values": ["a", "b", "c"]}
  ]
}
```

Axis types:
- `continuous`: a float in [min, max]; optional `step`
- `integer`: an int in [min, max]; optional `step`
- `categorical`: one of the listed `values` (at least 2)
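To make the axis constraints concrete, here is a small client-side validator. This is a hypothetical helper, not part of the Prospect API; it just checks a parameter dict against axis definitions of the shape shown above.

```python
def validate_parameters(axes, params):
    """Check a parameter dict against a list of axis definitions.

    `axes` uses the same shape as the experiment's "axes" field.
    Raises ValueError on the first mismatch.
    """
    expected = {a["name"] for a in axes}
    if set(params) != expected:
        raise ValueError(f"parameter names must match axes exactly: {sorted(expected)}")
    for axis in axes:
        value = params[axis["name"]]
        if axis["type"] == "continuous":
            # Accept ints too, since 1 is a valid float value.
            if not isinstance(value, (int, float)) or not axis["min"] <= value <= axis["max"]:
                raise ValueError(f"{axis['name']} must be a number in [{axis['min']}, {axis['max']}]")
        elif axis["type"] == "integer":
            if not isinstance(value, int) or not axis["min"] <= value <= axis["max"]:
                raise ValueError(f"{axis['name']} must be an int in [{axis['min']}, {axis['max']}]")
        elif axis["type"] == "categorical":
            if value not in axis["values"]:
                raise ValueError(f"{axis['name']} must be one of {axis['values']}")
```

Validating locally before posting can turn a round-trip `validation_error` into an immediate, descriptive exception.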
Response (201):

```json
{
  "id": "experiment-uuid",
  "name": "...",
  "objective": "minimize",
  "axes": [...],
  "observation_count": 0
}
```

List all experiments for your API key.
Response (200):

```json
{
  "experiments": [
    {"id": "...", "name": "...", "objective": "...", "axes": [...], "observation_count": 42}
  ]
}
```

Get a single experiment by ID.
Get the next suggested parameters to try (`GET /v1/experiments/{id}/suggestion`).
Response (200):

```json
{
  "parameters": {
    "param1": 0.42,
    "param2": 17,
    "param3": "b"
  }
}
```

The first suggestion (before any observations) explores the space. After observations, suggestions are informed by a Bayesian model that balances exploration and exploitation.
Record one or more observations (`POST /v1/experiments/{id}/observations`).
Request body:

```json
{
  "observations": [
    {
      "parameters": {"param1": 0.42, "param2": 17, "param3": "b"},
      "value": 3.7
    }
  ]
}
```

Each observation has:
- `parameters`: must match the experiment's axis names exactly
- `value`: the measured outcome (a number); omit for failures
- `failed` (optional, boolean): mark a failed trial
- `reason` (optional, string): why it failed
- `metadata` (optional, object): arbitrary metadata
You can post multiple observations at once (e.g. to warm-start with historical data).
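As a sketch of warm-starting, the helper below (hypothetical, not part of any SDK) builds a batch request body from historical trials, using the `failed` and `reason` fields for runs that produced no value. The `None`-means-failure convention is this example's own assumption.

```python
def warm_start_payload(history):
    """Build an observations request body from historical trials.

    `history` is a list of (parameters, value) pairs; a value of None
    marks a failed trial, which is reported with `failed` and `reason`
    instead of `value`.
    """
    observations = []
    for params, value in history:
        if value is None:
            observations.append({
                "parameters": params,
                "failed": True,
                "reason": "historical failure",
            })
        else:
            observations.append({"parameters": params, "value": value})
    return {"observations": observations}
```

The returned dict can be passed directly as the JSON body of a single POST, so an entire history is imported in one request.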
Response (201):

```json
{
  "observation_count": 43
}
```

Get all observations for an experiment.
Response (200):

```json
{
  "observations": [
    {"parameters": {...}, "value": 3.7},
    {"parameters": {...}, "value": 2.1}
  ]
}
```

All errors return:
```json
{
  "error": {
    "message": "description of what went wrong",
    "code": "error_code"
  }
}
```

Common codes: `missing_auth`, `invalid_api_key`, `experiment_not_found`, `validation_error`, `sandbox_limit`.
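Since every error uses the same envelope, one small check can cover all endpoints. A minimal sketch (the exception class and helper name are this example's own, not part of any SDK):

```python
class ProspectError(Exception):
    """Raised when a response body carries the error envelope."""

    def __init__(self, code, message):
        super().__init__(f"{code}: {message}")
        self.code = code


def raise_for_error(payload):
    """Raise ProspectError if a decoded JSON body is an error; else return it."""
    if isinstance(payload, dict) and "error" in payload:
        err = payload["error"]
        raise ProspectError(err.get("code"), err.get("message"))
    return payload
```

Wrapping every `resp.json()` in `raise_for_error(...)` lets a script branch on `e.code` (for example, retrying only on `sandbox_limit`) instead of parsing messages.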
A typical optimization script:

```python
import requests

API_KEY = "pro_..."
BASE = "https://prospectopt.com"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
N_ITERATIONS = 20  # however many trials you can afford

# 1. Create experiment
resp = requests.post(f"{BASE}/v1/experiments", headers=HEADERS, json={
    "name": "my experiment",
    "objective": "minimize",
    "axes": [
        # ... define your parameter space here
    ],
})
exp_id = resp.json()["id"]

# 2. Optimization loop
for i in range(N_ITERATIONS):
    # Get suggestion
    suggestion = requests.get(
        f"{BASE}/v1/experiments/{exp_id}/suggestion", headers=HEADERS
    ).json()["parameters"]

    # Evaluate (your code here)
    result = evaluate(suggestion)

    # Report observation
    requests.post(
        f"{BASE}/v1/experiments/{exp_id}/observations",
        headers=HEADERS,
        json={"observations": [{"parameters": suggestion, "value": result}]},
    )
```

Replace `evaluate()` with whatever runs your trial: a training run, a simulation, a physical experiment, an A/B test measurement, etc.
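If your trials can crash, you can still keep the loop alive and tell the optimizer about the failure via the `failed` and `reason` fields. A sketch (the helper name is this example's own):

```python
def run_trial(evaluate, parameters):
    """Run one trial and return the observation dict to report.

    On success the observation carries `value`; if `evaluate` raises,
    the trial is reported as failed with the exception text as `reason`.
    """
    try:
        return {"parameters": parameters, "value": evaluate(parameters)}
    except Exception as exc:
        return {"parameters": parameters, "failed": True, "reason": str(exc)}
```

In the loop above, `run_trial(evaluate, suggestion)` would replace the bare `evaluate(suggestion)` call, and the returned dict would be posted as the observation either way, so failed regions of the parameter space are recorded rather than silently retried.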