A universal text chat API for generating conversational responses with OpenAI-compatible large language models. Through a single unified interface, you can call multiple mainstream models, including OpenAI, Claude, DeepSeek, Grok, and Tongyi Qianwen.
For strict structured output, it is recommended to lower the temperature value (e.g., 0.1-0.3) and set an appropriate max_tokens to improve consistency.
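As a sketch of this advice, the snippet below builds an OpenAI-compatible request body tuned for strict structured output; the system prompt, temperature, and token limit are illustrative placeholders, not values mandated by the gateway:

```python
import json

# Hedged sketch: an OpenAI-compatible request body for structured output.
# The model name, prompt, and limits are placeholders.
def build_structured_request(model, prompt):
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Reply with valid JSON only."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.2,  # low temperature (0.1-0.3) for consistency
        "max_tokens": 512,   # cap the output length
    }

print(json.dumps(build_structured_request(
    "deepseek-v3-1-250821", "List three primes as JSON.")))
```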
Some models support thinking capability (Thinking/Reasoning), which can display the reasoning process when generating responses. Different models implement this differently:
DeepSeek
Tongyi Qianwen
Gemini
DeepSeek models support enabling thinking capability through the thinking field:
```bash
curl -X POST "https://llm.ai-nebula.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "model": "deepseek-v3-1-250821",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant"},
      {"role": "user", "content": "Give a medium-difficulty geometry problem and solve it step by step"}
    ],
    "thinking": {"type": "enabled"}
  }'
```
thinking.type defaults to "disabled"; you must explicitly set it to "enabled" to turn thinking on
The output format of the thinking content may vary across model versions
Use it together with stream: true for a better interactive experience
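When streaming, a client typically collects the thinking and the answer separately. The sketch below assumes OpenAI-style chunks whose delta carries either a reasoning_content field (thinking) or a content field (answer), as described above:

```python
# Sketch: split streamed chunks into reasoning text and answer text.
# Assumes OpenAI-style chunk dicts; real SSE parsing is omitted.
def split_stream(chunks):
    reasoning, answer = [], []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if delta.get("reasoning_content"):
            reasoning.append(delta["reasoning_content"])
        if delta.get("content"):
            answer.append(delta["content"])
    return "".join(reasoning), "".join(answer)

demo = [
    {"choices": [{"delta": {"reasoning_content": "Consider triangle ABC... "}}]},
    {"choices": [{"delta": {"content": "Answer: 42."}}]},
]
print(split_stream(demo))
```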
Tongyi Qianwen supports deep thinking, which requires streaming output:
```bash
curl -N -X POST "https://llm.ai-nebula.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "model": "qwen3-omni-flash",
    "stream": true,
    "enable_thinking": true,
    "parameters": { "incremental_output": true },
    "messages": [
      {"role": "system", "content": "You are an excellent mathematician"},
      {"role": "user", "content": "What is the formula for Tower of Hanoi"}
    ]
  }'
```
Inline the reasoning process into content: if the client does not display reasoning_content, set nebula_thinking_to_content: true to inline the reasoning content into content:
```bash
curl -N -X POST "https://llm.ai-nebula.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "model": "qwen3-omni-flash",
    "stream": true,
    "enable_thinking": true,
    "nebula_thinking_to_content": true,
    "parameters": { "incremental_output": true },
    "messages": [
      {"role": "user", "content": "What is the formula for Tower of Hanoi"}
    ]
  }'
```
Tongyi Qianwen’s deep thinking functionality must be used with stream: true. If enable_thinking: true is set but stream: false, the system will automatically disable deep thinking to avoid upstream errors.
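The server-side guard just described can also be applied client-side before sending. This is a hypothetical helper, not part of the gateway SDK:

```python
# Sketch: if enable_thinking is requested without stream, disable
# thinking client-side rather than relying on the server's fallback.
def normalize_qwen_request(body):
    body = dict(body)
    if body.get("enable_thinking") and not body.get("stream"):
        body["enable_thinking"] = False
    return body

print(normalize_qwen_request(
    {"model": "qwen3-omni-flash", "enable_thinking": True, "stream": False}))
```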
Refer to the Gemini thinking mode guide. Main ways to enable:
extra_body config: set extra_body.google.thinking_config.thinking_budget together with include_thoughts. Special budget values: -1 enables automatically, 0 disables, and any value > 0 sets a specific token budget. Requires stream: true
reasoning_effort: usable with -thinking model variants when max_tokens is not set (low/medium/high map to roughly 20%/50%/80% of the budget)
Gemini 3 Pro Preview: uses thinking_level (LOW/HIGH, default HIGH) and can be combined with search
Enable search: the recommended way is the OpenAI-compatible tool "tools":[{"type":"function","function":{"name":"googleSearch"}}]; alternatively, pass it through as extra_body.google.tools: [{"googleSearch": {}}]
Notes: the thinking adapter must be enabled server-side; the thinking budget counts toward output tokens; use stream: true to view reasoning_content
Example (with a specific thinking budget):
```bash
curl -X POST "https://llm.ai-nebula.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "model": "gemini-3-flash-preview",
    "messages": [
      {"role": "user", "content": "Give a medium-difficulty geometry problem and analyze it step by step."}
    ],
    "extra_body": {
      "google": {
        "thinking_config": {
          "thinking_budget": 6000,
          "include_thoughts": true
        }
      }
    },
    "stream": true
  }'
```
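Assembling these nested fields by hand is error-prone, so a small builder can help. This is a hypothetical helper that mirrors the payload shape above:

```python
# Sketch: build a Gemini request body with the thinking fields above.
# Budget semantics: -1 auto-enable, 0 disable, >0 explicit token budget.
def gemini_thinking_body(model, prompt, budget, include_thoughts=True):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "extra_body": {
            "google": {
                "thinking_config": {
                    "thinking_budget": budget,
                    "include_thoughts": include_thoughts,
                }
            }
        },
        "stream": True,  # required to view reasoning_content
    }

print(gemini_thinking_body("gemini-3-flash-preview", "hello", 6000))
```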
Tongyi Qianwen models support extended features such as search, speech recognition, etc. All extended parameters need to be placed in the parameters object.
Search Feature
Speech Recognition
```bash
curl -X POST "https://llm.ai-nebula.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "model": "qwen3-omni-flash",
    "messages": [
      {"role": "user", "content": "Please first search for recent common misconceptions about Fermat'\''s Last Theorem, then answer"}
    ],
    "stream": true,
    "enable_thinking": true,
    "parameters": {
      "enable_search": true,
      "search_options": {
        "region": "CN",
        "recency_days": 30
      },
      "incremental_output": true
    }
  }'
```
All extended parameters for Tongyi Qianwen (such as enable_search, search_options, asr_options, temperature, top_p, etc.) need to be placed in the parameters object, not at the top level of the request body.
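A client can enforce this nesting rule automatically. The helper below is a sketch (the set of extended parameter names is taken from the list above and may be incomplete):

```python
# Sketch: hoist Tongyi Qianwen extended parameters from the top level
# into the nested "parameters" object that the gateway expects.
QWEN_EXTENDED = {"enable_search", "search_options", "asr_options",
                 "incremental_output", "temperature", "top_p"}

def nest_qwen_parameters(body):
    out = {k: v for k, v in body.items()
           if k not in QWEN_EXTENDED and k != "parameters"}
    params = dict(body.get("parameters", {}))
    for key in QWEN_EXTENDED:
        if key in body:
            params.setdefault(key, body[key])
    if params:
        out["parameters"] = params
    return out

print(nest_qwen_parameters({"model": "qwen3-omni-flash", "enable_search": True}))
```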
Some models support real-time web search, allowing access to the latest information and including citation sources in responses.
Claude Web Search
Grok Live Search
Claude models do not support enabling web search through the web_search_options parameter, so it can only be implemented via tool calls, which may be unstable due to network conditions and prompt wording. For details, see Tool Calling (Functions / Tools) above.
Basic Example (showing tool call flow):
```bash
curl -X POST "https://llm.ai-nebula.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxxxxxxx" \
  -d '{
    "model": "glm-5",
    "messages": [
      {"role": "user", "content": "What are the latest news about artificial intelligence?"},
      {
        "role": "assistant",
        "content": "I'\''ll help you search for the latest news about artificial intelligence.",
        "tool_calls": [
          {
            "id": "toolu_xxx",
            "type": "function",
            "function": {
              "name": "WebSearch",
              "arguments": "{\"query\": \"artificial intelligence latest news 2025\"}"
            }
          }
        ]
      },
      {
        "role": "tool",
        "tool_call_id": "toolu_xxx",
        "name": "WebSearch",
        "content": "Web search results for query: \"artificial intelligence latest news 2025\"..."
      }
    ],
    "web_search_options": {
      "search_context_size": "medium"
    }
  }'
```
Example with Location Information (showing tool call flow):
Search functionality will increase response time and token consumption (including search result content)
Search results will automatically include citation sources in the response
Supported models include Claude Sonnet 4, Claude 3 Opus, etc.
In multi-turn conversations, tool calls and results will be visible in message history, and the model can continue the conversation based on previous search results
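The multi-turn tool-call flow above can be sketched as a small helper that appends one search round to the message history; the call id and result text are placeholders:

```python
import json

# Sketch: append a WebSearch tool call and its result to the message
# history so the model can continue from the search output.
def append_search_round(messages, query, result_text, call_id="toolu_1"):
    messages = list(messages)
    messages.append({
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": call_id,
            "type": "function",
            "function": {
                "name": "WebSearch",
                "arguments": json.dumps({"query": query}),
            },
        }],
    })
    messages.append({
        "role": "tool",
        "tool_call_id": call_id,
        "name": "WebSearch",
        "content": result_text,
    })
    return messages
```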
Stability Notice:
Web search functionality depends on upstream proxy services and external search providers, and may exhibit the following instabilities:
Network fluctuations: Network connection issues may cause search requests to timeout or fail
Service limitations: Search services may have rate limits, timeout limits, or temporary unavailability
Search result quality: Some queries may not find relevant information, or search results may be of poor quality
Model judgment: The model will automatically determine whether a search is needed based on the question, and in some cases may not trigger a search
This is an inherent characteristic of web search functionality. It is recommended to:
Implement retry mechanisms in critical scenarios
Handle search failures with graceful degradation (e.g., using the model’s knowledge base to answer)
Avoid relying entirely on web search in scenarios with extremely high real-time requirements
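The retry-then-degrade pattern recommended above can be sketched as follows; both callables are supplied by the caller and are hypothetical here:

```python
import time

# Sketch: retry a search-backed call a few times, then fall back to a
# plain request answered from the model's own knowledge.
def with_search_fallback(call_with_search, call_without_search,
                         retries=2, delay=1.0):
    for attempt in range(retries + 1):
        try:
            return call_with_search()
        except Exception:
            if attempt < retries:
                time.sleep(delay)  # brief pause before retrying
    return call_without_search()  # graceful degradation
```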
Grok models support real-time search through the search_parameters parameter.
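As a sketch, a Grok request enabling live search might look like the following; the "mode": "auto" value inside search_parameters is an assumption about the accepted shape, so consult the Grok documentation for the exact fields:

```python
# Sketch: a Grok request body enabling live search via search_parameters.
# The "mode" field is an assumption, not confirmed by this document.
def grok_search_body(prompt):
    return {
        "model": "grok-4-fast-reasoning",
        "messages": [{"role": "user", "content": prompt}],
        "search_parameters": {"mode": "auto"},
    }

print(grok_search_body("What happened in AI this week?"))
```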
Grok models (especially grok-4-fast-reasoning) support reasoning. The usage object in the response distinguishes between completion_tokens and reasoning_tokens: