POST https://llm.ai-nebula.com/v1/models/{model}:embedContent
Gemini Text Embedding (embedContent)
curl --request POST \
  --url https://llm.ai-nebula.com/v1/models/{model}:embedContent \
  --header 'Authorization: <authorization>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "content": {},
  "outputDimensionality": 123,
  "taskType": "<string>"
}
'
{
  "embedding": {
    "values": [0.0023064255, -0.009327292, 0.015797347, ...]
  },
  "metadata": {
    "usage": {
      "prompt_tokens": 6,
      "total_tokens": 6
    }
  }
}

Introduction

Use the Gemini native API to convert text into vector embeddings. The model is specified in the URL path (e.g. gemini-embedding-001). Use this endpoint when you need Google embedding models or alignment with the Gemini API.
This complements the OpenAI-style Embeddings endpoint: this doc describes the Gemini native path; the same capability is also available via POST /v1/embeddings.
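The two endpoints take different request shapes. As a rough sketch of the mapping between them (the OpenAI-style field names "model" and "input" follow the usual /v1/embeddings convention and are an assumption here, since this doc only documents the Gemini-native shape):

```python
def openai_to_gemini(openai_req: dict) -> tuple[str, dict]:
    """Translate an OpenAI-style embeddings request into the Gemini-native
    path and body. The OpenAI-style fields ("model", "input") are assumed
    to follow the standard /v1/embeddings convention."""
    model = openai_req["model"]
    texts = openai_req["input"]
    if isinstance(texts, str):
        texts = [texts]
    # Gemini-native: the model goes in the URL path, the text into content.parts
    path = f"/v1/models/{model}:embedContent"
    body = {"content": {"parts": [{"text": t} for t in texts]}}
    return path, body

path, body = openai_to_gemini({"model": "gemini-embedding-001",
                               "input": "Text to embed"})
```

The key difference to remember: the Gemini-native path never carries `model` in the body.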

Authentication

Authorization
string
required
Bearer token, e.g. Bearer sk-xxxxxxxxxx

Path Parameters

model
string
required
Embedding model name, e.g. gemini-embedding-001. Do not send model in the request body.

Request Parameters

content
object
required
Content to embed. Must include a parts array; each item is { "text": "your text" }.
outputDimensionality
integer
Output vector dimension (supported only by some models, e.g. gemini-embedding-001, text-embedding-004).
taskType
string
Task type, e.g. RETRIEVAL_DOCUMENT, RETRIEVAL_QUERY (optional).

cURL Example

curl -X POST "https://llm.ai-nebula.com/v1/models/gemini-embedding-001:embedContent" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-XyLy**************************mIqSt" \
  -d '{
    "content": {
      "parts": [
        { "text": "Text to embed" }
      ]
    }
  }'
With dimension:
curl -X POST "https://llm.ai-nebula.com/v1/models/gemini-embedding-001:embedContent" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-XyLy**************************mIqSt" \
  -d '{
    "content": {
      "parts": [
        { "text": "Text to embed" }
      ]
    },
    "outputDimensionality": 768
  }'

Python Example

import requests

url = "https://llm.ai-nebula.com/v1/models/gemini-embedding-001:embedContent"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-XyLy**************************mIqSt"
}
payload = {
    "content": {
        "parts": [
            { "text": "Text to embed" }
        ]
    }
}

response = requests.post(url, json=payload, headers=headers)
data = response.json()
embedding = data["embedding"]["values"]
print(f"Dimension: {len(embedding)}")
Example response:
{
  "embedding": {
    "values": [0.0023064255, -0.009327292, 0.015797347, ...]
  },
  "metadata": {
    "usage": {
      "prompt_tokens": 6,
      "total_tokens": 6
    }
  }
}
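A small helper for pulling the vector and token counts out of the response shape shown above (a sketch; it assumes the response matches that shape exactly):

```python
def parse_embed_response(data: dict) -> tuple[list, int]:
    """Extract the embedding vector and total token count from an
    embedContent response (shape as documented above)."""
    values = data["embedding"]["values"]
    total_tokens = data["metadata"]["usage"]["total_tokens"]
    return values, total_tokens

sample = {
    "embedding": {"values": [0.0023064255, -0.009327292, 0.015797347]},
    "metadata": {"usage": {"prompt_tokens": 6, "total_tokens": 6}},
}
vec, tokens = parse_embed_response(sample)
```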

Batch (batchEmbedContents)

For batch embedding use: POST /v1/models/{model}:batchEmbedContents with a requests array; each item has the same shape as a single request (including content.parts). Do not include model in each item.
curl -X POST "https://llm.ai-nebula.com/v1/models/gemini-embedding-001:batchEmbedContents" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-XyLy**************************mIqSt" \
  -d '{
    "requests": [
      { "content": { "parts": [{ "text": "First text" }] } },
      { "content": { "parts": [{ "text": "Second text" }] } }
    ]
  }'
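In Python, the requests array can be built from a list of texts (a sketch covering request building only, since the batch response shape is not documented above):

```python
def build_batch_body(texts: list[str]) -> dict:
    """Build a batchEmbedContents body: one item per text, each with the
    same shape as a single embedContent request (content.parts), and no
    "model" field in the items -- the model stays in the URL path."""
    return {"requests": [{"content": {"parts": [{"text": t}]}}
                         for t in texts]}

body = build_batch_body(["First text", "Second text"])
```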

Supported Models

Model                 Description
gemini-embedding-001  General-purpose embedding model; supports outputDimensionality
text-embedding-004    High-accuracy embedding model

Notes

  • The model is specified in the URL path; do not include model in the request body
  • content.parts is required with at least one non-empty text
  • Usage is returned in metadata.usage (prompt_tokens, total_tokens)
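The notes above can be enforced as a pre-flight check before sending a request (a sketch):

```python
def validate_embed_body(body: dict) -> None:
    """Pre-flight checks matching the notes above: no "model" key in the
    body, and content.parts must contain at least one non-empty text."""
    if "model" in body:
        raise ValueError("model belongs in the URL path, not the request body")
    parts = body.get("content", {}).get("parts", [])
    if not any(p.get("text", "").strip() for p in parts):
        raise ValueError("content.parts requires at least one non-empty text")

validate_embed_body({"content": {"parts": [{"text": "Text to embed"}]}})
```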