Quickstart

Use the OpenAI SDK with a ModelBridge key.

Register or log in from the dashboard, create a virtual API key, then point your SDK at the backend API URL configured for your deployment.

1. Set backend URL

Open the dashboard and save the API backend address for local or deployed testing.

2. Register or log in

After you register or log in, the backend returns a session token that lets you create virtual API keys.
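The request and response shapes for `/auth/login` are not documented in this guide; the sketch below is a minimal stdlib-only example, assuming the endpoint accepts a JSON body with `email` and `password` fields and returns JSON containing the session token (both field names are assumptions — check your deployment's schema):

```python
import json
import urllib.request

def build_login_payload(email: str, password: str) -> bytes:
    # Field names "email"/"password" are assumed; adjust to your backend's schema.
    return json.dumps({"email": email, "password": password}).encode("utf-8")

def login(backend: str, email: str, password: str) -> dict:
    # POST /auth/login and return the parsed JSON response,
    # which is expected to include a session token.
    req = urllib.request.Request(
        f"{backend}/auth/login",
        data=build_login_payload(email, password),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example call (not executed here):
# session = login("https://YOUR_API_BACKEND", "you@example.com", "your-password")
```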

3. Call chat completions

Use the OpenAI client with `deepseek-chat` and your virtual key.

Python

```shell
pip install openai
```

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-mb-your-virtual-api-key",
    base_url="https://YOUR_API_BACKEND/v1"
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Say hello"}]
)
print(response.choices[0].message.content)
```

REST

```shell
curl https://YOUR_API_BACKEND/v1/chat/completions \
  -H "Authorization: Bearer sk-mb-your-virtual-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role":"user","content":"Hello"}]
  }'
```
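Both calls return the OpenAI chat-completions response schema. A minimal sketch of extracting the reply text and token usage from a parsed response (the sample payload below is illustrative, not real output from the backend):

```python
def extract_reply(response: dict) -> tuple[str, int]:
    """Return the assistant message and total token count from a chat completion."""
    text = response["choices"][0]["message"]["content"]
    total_tokens = response.get("usage", {}).get("total_tokens", 0)
    return text, total_tokens

# Illustrative payload in the OpenAI response shape; real responses carry more fields.
sample = {
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12},
}
text, tokens = extract_reply(sample)
print(text, tokens)  # Hello! 12
```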
| Endpoint | Purpose |
| --- | --- |
| `POST /auth/register` | Create an account and initial virtual API key. |
| `POST /auth/login` | Create a dashboard session token. |
| `GET /v1/pricing` | Fetch model pricing. |
| `POST /v1/chat/completions` | OpenAI-compatible chat completions. |
| `GET /dashboard/usage` | Read usage and cost records for the account. |
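The `/v1/pricing` response schema is not documented here. As a hedged sketch, assuming pricing is quoted per million tokens with separate input and output rates (an assumption about the schema, and the example prices are placeholders), the cost of a request can be derived from its `usage` block:

```python
def request_cost(usage: dict, input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in dollars for one request, given per-million-token prices.

    The per-million-token pricing shape is an assumption about the
    /v1/pricing schema; check your deployment's actual response."""
    prompt = usage.get("prompt_tokens", 0)
    completion = usage.get("completion_tokens", 0)
    return (prompt / 1_000_000) * input_price_per_m + (completion / 1_000_000) * output_price_per_m

# e.g. 1,000 prompt tokens and 500 completion tokens at placeholder rates
cost = request_cost({"prompt_tokens": 1000, "completion_tokens": 500}, 0.27, 1.10)
print(f"${cost:.6f}")
```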