
Base configuration
- Endpoint: https://api.modelriver.com/v1/ai
- Authentication: Authorization: Bearer mr_live_... (project API key)
- Content-Type: application/json
Direct provider request
curl -X POST https://api.modelriver.com/v1/ai \
  -H "Authorization: Bearer mr_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "openai",
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Write a cheerful product update."}
    ]
  }'
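The same call in TypeScript, as a minimal sketch using the built-in fetch of Node 18+. The callModelRiver helper and the MODELRIVER_API_KEY environment variable are illustrative names, not part of the API:

// Minimal ModelRiver client sketch (Node 18+, built-in fetch).
// callModelRiver and MODELRIVER_API_KEY are illustrative names.
const MODELRIVER_URL = "https://api.modelriver.com/v1/ai";

async function callModelRiver(body: Record<string, unknown>) {
  const res = await fetch(MODELRIVER_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MODELRIVER_API_KEY}`, // mr_live_...
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`ModelRiver HTTP ${res.status}`); // 401/403/429/5xx
  return res.json();
}

// Direct provider request: pick the provider and model explicitly.
const direct = await callModelRiver({
  provider: "openai",
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Write a cheerful product update." },
  ],
});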
Workflow request
curl -X POST https://api.modelriver.com/v1/ai \
  -H "Authorization: Bearer mr_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "workflow": "marketing-summary",
    "messages": [
      {"role": "user", "content": "Summarise this week'\''s launch."}
    ],
    "metadata": {
      "audience": "enterprise customers"
    }
  }'
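Reusing the callModelRiver helper from the sketch above, the workflow variant only changes the request body; the saved workflow supplies provider, model, and prompt:

// Workflow request: the saved workflow decides provider, model, and prompt.
const summary = await callModelRiver({
  workflow: "marketing-summary",
  messages: [{ role: "user", content: "Summarise this week's launch." }],
  metadata: { audience: "enterprise customers" },
});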
Optional fields
- workflow: name of a saved workflow. Overrides provider and model.
- provider, model: required when not using a workflow.
- messages: chat-style payload (the most common format across providers).
- response_format / structured_output_schema: pass a JSON schema directly if you do not want to create a workflow (see the sketch after this list).
- inputs, metadata, context: free-form fields your workflow can access. Add them to cache fields to surface them in responses and logs.
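For inline structured output, a hedged sketch of what a direct schema request might look like. The fields response_format / structured_output_schema are named above, but their exact shape is not shown, so the OpenAI-style json_schema envelope here is an assumption to verify against the schema docs:

// Assumed shape: an OpenAI-style json_schema envelope under response_format.
// Check ModelRiver's schema docs for the exact field names it expects.
const structured = await callModelRiver({
  provider: "openai",
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarise this week's launch." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "summary",
      schema: {
        type: "object",
        properties: { summary: { type: "string" } },
        required: ["summary"],
      },
    },
  },
});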
Response payload
{
  "data": { "summary": "..." },
  "cached_data": { "metadata.audience": "enterprise customers" },
  "meta": {
    "status": "success",
    "http_status": 200,
    "workflow": "marketing-summary",
    "requested_provider": "openai",
    "requested_model": "gpt-4o-mini",
    "used_provider": "openai",
    "used_model": "gpt-4o-mini",
    "duration_ms": 1420,
    "usage": {
      "prompt_tokens": 123,
      "completion_tokens": 45,
      "total_tokens": 168
    },
    "structured_output": true,
    "attempts": [
      {"provider": "openai", "model": "gpt-4o-mini", "status": "success"}
    ]
  },
  "backups": [
    {"position": 1, "provider": "anthropic", "model": "claude-3-5-sonnet"}
  ]
}
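For typed clients, the payload maps onto a few TypeScript interfaces. This is a sketch inferred from the examples in this section; which fields are optional is a best guess, not a published contract:

// Types inferred from the example payloads; optionality is a best guess.
interface Attempt {
  provider: string;
  model: string;
  status: "success" | "error";
  reason?: string; // present on failed attempts, e.g. "timeout"
}

interface Meta {
  status: "success" | "error";
  http_status: number;
  workflow?: string;
  requested_provider?: string;
  requested_model?: string;
  used_provider?: string;
  used_model?: string;
  duration_ms?: number;
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
  structured_output?: boolean;
  attempts: Attempt[];
}

interface ModelRiverResponse {
  data: Record<string, unknown> | null;  // null when every provider attempt failed
  cached_data: Record<string, unknown>;  // echoed cache fields
  error?: { message: string; details?: Record<string, unknown> };
  meta: Meta;
  backups?: { position: number; provider: string; model: string }[];
}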
Error payloads
- ModelRiver returns 200 with an error object when the request was valid but the provider failed.
- Transport/authentication problems return standard HTTP status codes (401, 403, 429, 5xx).
{
  "data": null,
  "cached_data": {},
  "error": {
    "message": "Provider request failed",
    "details": {"status": 504, "message": "Upstream timeout"}
  },
  "meta": {
    "status": "error",
    "http_status": 502,
    "workflow": "marketing-summary",
    "attempts": [
      {"provider": "openai", "model": "gpt-4o-mini", "status": "error", "reason": "timeout"}
    ]
  }
}
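Because provider failures arrive as HTTP 200 with an error object in the body, a client needs two separate checks: one on the transport status and one on meta.status. A sketch, reusing the ModelRiverResponse type from the previous section:

// Distinguish transport failures (non-2xx) from provider failures (error in body).
async function requestWithChecks(body: Record<string, unknown>) {
  const res = await fetch("https://api.modelriver.com/v1/ai", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MODELRIVER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });

  if (!res.ok) {
    // 401/403 -> bad credentials, 429 -> rate limited, 5xx -> gateway trouble.
    throw new Error(`Transport error: HTTP ${res.status}`);
  }

  const payload = (await res.json()) as ModelRiverResponse;
  if (payload.meta.status === "error") {
    // Valid request, but the provider attempts failed; details are in the body.
    const failed = payload.meta.attempts
      .map(a => `${a.provider}/${a.model}: ${a.reason ?? a.status}`)
      .join(", ");
    throw new Error(`${payload.error?.message ?? "Provider error"} (${failed})`);
  }
  return payload.data;
}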
Tips for production use
- Use workflows so you can change providers and prompts without redeploying applications.
- Leverage cache fields to echo request metadata in responses; this is especially helpful for tracing user IDs or experiment variants.
- Inspect backups (and meta.attempts) in the response if you need to know which fallback succeeded.
- Respect rate limits: if you see 429, implement exponential backoff (see the sketch after this list) or reach out to increase limits.
- Store responses if you need historical context. ModelRiver retains logs, but you can export them or stream them elsewhere.
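A sketch of backoff on 429, as suggested above; the retry cap and delays here are arbitrary choices, not ModelRiver recommendations:

// Retry on 429 with exponential backoff and jitter; limits here are arbitrary.
async function postWithBackoff(body: unknown, maxRetries = 5): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch("https://api.modelriver.com/v1/ai", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.MODELRIVER_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
    });
    if (res.status !== 429 || attempt >= maxRetries) return res;

    // 1s, 2s, 4s, ... plus jitter so concurrent clients do not retry in lockstep.
    const delayMs = 1000 * 2 ** attempt + Math.random() * 250;
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
}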