The completions API is the legacy text generation interface — you provide a raw prompt string and the model continues it. For most use cases, the Chat Completions API is simpler and recommended instead. The endpoint is:
POST https://api.deepinfra.com/v1/openai/completions
This is an advanced API: you need to know your model's exact prompt format, which differs from model to model. Check the API section on the model's page for the expected format.
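The endpoint is an ordinary HTTP POST, so you can also call it without the OpenAI client. A minimal sketch, assuming the requests library is installed and your token is stored in a DEEPINFRA_TOKEN environment variable:
import os
import requests

# Direct POST to the completions endpoint; the JSON body uses the same
# fields (model, prompt, stop) as the client example below.
resp = requests.post(
    "https://api.deepinfra.com/v1/openai/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPINFRA_TOKEN']}"},
    json={
        "model": "deepseek-ai/DeepSeek-V3",
        "prompt": "<|begin▁of▁sentence|><|User|>Hello!<|Assistant|>",
        "stop": ["<|end▁of▁sentence|>"],
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])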

Example

The example below uses deepseek-ai/DeepSeek-V3 with its prompt format:
from openai import OpenAI

# Point the OpenAI client at DeepInfra's OpenAI-compatible base URL
openai = OpenAI(
    api_key="$DEEPINFRA_TOKEN",  # your DeepInfra API token
    base_url="https://api.deepinfra.com/v1/openai",
)

stream = True  # or False

completion = openai.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    # Raw prompt in DeepSeek-V3's expected format, including its special tokens
    prompt="<|begin▁of▁sentence|><|User|>Hello!<|Assistant|>",
    stop=["<|end▁of▁sentence|>"],
    stream=stream,
)

if stream:
    for event in completion:
        if event.choices[0].finish_reason:
            # The final chunk carries the finish reason and token usage
            print(event.choices[0].finish_reason,
                  event.usage.prompt_tokens,
                  event.usage.completion_tokens)
        else:
            # Intermediate chunks carry incremental text
            print(event.choices[0].text, end="", flush=True)
else:
    print(completion.choices[0].text)
    print(completion.usage.prompt_tokens, completion.usage.completion_tokens)

Supported parameters

Parameter     Notes
model         Model name, or MODEL_NAME:VERSION to pin a specific version
prompt        Raw prompt string in the model's expected format
max_tokens    Maximum number of tokens to generate. Defaults to the model's maximum context length minus the input length
stream        Stream output via server-sent events (SSE) instead of returning the full response at once. Default: false
temperature   Sampling temperature between 0 and 2. Higher values produce more random output; lower values are more deterministic. Default: 1.0
top_p         Nucleus sampling threshold: only tokens comprising the top top_p probability mass are considered. Default: 1.0
stop          Up to 4 sequences at which the API stops generating further tokens
n             Number of completion sequences to return. Default: 1
echo          If true, the prompt is included at the start of the returned text
logprobs      Return log probabilities for the generated tokens
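As a sketch of how these parameters combine, reusing the client from the example above (the values are illustrative, and logprobs is assumed here to take an integer count of top alternatives, as in the legacy OpenAI API):
completion = openai.completions.create(
    model="deepseek-ai/DeepSeek-V3",
    prompt="<|begin▁of▁sentence|><|User|>Write a haiku.<|Assistant|>",
    max_tokens=64,    # cap the generated length
    temperature=0.7,  # less random than the default 1.0
    top_p=0.9,        # sample from the top 90% probability mass
    n=2,              # return two independent completions
    logprobs=1,       # assumed integer form, per the legacy OpenAI API
    stop=["<|end▁of▁sentence|>"],
)

for choice in completion.choices:
    print(choice.text)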
For every model, you can check its prompt format in the API section on its page. For the complete parameter reference, see the API reference.