Overview
Codex is a terminal-first coding agent. With Nebula Api (an OpenAI-compatible gateway), you can use a single API key to route requests to mainstream models (GPT, Claude, Gemini, and others).
Prerequisites
- Create an API key in the Nebula Api console
- Install Codex on your machine (follow Codex installation instructions for your OS)
Codex needs an OpenAI-compatible API key and Base URL.
Use https://llm.ai-nebula.com/v1 as the Base URL (must end with /v1).
Option 1: Environment variables
Set environment variables before starting Codex:
- OPENAI_API_KEY: your Nebula Api key
- OPENAI_BASE_URL: https://llm.ai-nebula.com/v1
macOS / Linux:
```shell
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxx"
export OPENAI_BASE_URL="https://llm.ai-nebula.com/v1"
```
Windows (PowerShell):
```powershell
$env:OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxx"
$env:OPENAI_BASE_URL="https://llm.ai-nebula.com/v1"
```
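Before launching Codex, you can sanity-check the variables with a short script. This is a sketch, not part of Codex or Nebula: `check_base_url` is a hypothetical helper that only verifies the URL ends with /v1, as required above.

```shell
# Hypothetical pre-flight check before launching Codex.
# Verifies that the two variables are set and the Base URL ends with /v1.
check_base_url() {
  case "$1" in
    */v1) return 0 ;;  # matches the required /v1 suffix
    *)    return 1 ;;
  esac
}

if [ -z "${OPENAI_API_KEY:-}" ]; then
  echo "OPENAI_API_KEY is not set"
fi

if check_base_url "${OPENAI_BASE_URL:-}"; then
  echo "Base URL looks OK"
else
  echo "Base URL must end with /v1"
fi
```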
Option 2: TOML config (example)
If you’re using a TOML config with custom providers, define a Nebula provider and point it to the /v1 base URL.
```toml
model = "gpt-5.2"
model_provider = "openai-custom"
personality = "pragmatic"
model_reasoning_effort = "high"

[model_providers.openai-custom]
name = "Nebula"
base_url = "https://llm.ai-nebula.com/v1"
wire_api = "responses"

# If your Codex config does not automatically inject the API key for custom providers:
[model_providers.openai-custom.http_headers]
Authorization = "Bearer YOUR_API_KEY"
Content-Type = "application/json"
```
`wire_api = "responses"` is useful for models/providers that require the Responses API.
- Don’t commit real API keys into git; prefer environment variables or local-only config.
Choose a model
Pick any model ID from the Nebula model list.
Some clients require GPT-family model IDs to be prefixed with openai/ (for example openai/gpt-5.4).
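To confirm a model ID before pointing Codex at it, you can query the gateway directly. This is a sketch assuming Nebula Api implements the standard OpenAI-compatible GET /v1/models route (and that the environment variables from Option 1 are set; jq is optional):

```shell
# List available model IDs from an OpenAI-compatible /v1/models route.
# Assumes OPENAI_BASE_URL and OPENAI_API_KEY are set as described above.
curl -s "$OPENAI_BASE_URL/models" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  | jq -r '.data[].id'
```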
Troubleshooting
- 401 / invalid API key: re-check your key in the Nebula console and make sure you didn’t add extra spaces
- 404 / wrong endpoint: confirm the Base URL ends with /v1
- Model not found: verify the model ID exists in the Nebula model list
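When a request fails, the raw HTTP status code often tells you which of the cases above applies. As a rough check (again assuming the environment variables from Option 1 and the standard /v1/models route):

```shell
# Print only the HTTP status code of a test request.
# 401 usually points to a key problem; 404 usually means the
# Base URL or route is wrong.
curl -s -o /dev/null -w "%{http_code}\n" \
  "$OPENAI_BASE_URL/models" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```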