ChatRequest - TypeScript SDK

ChatRequest type definition

The TypeScript SDK and docs are currently in beta. Report issues on GitHub.

Chat completion request parameters

Example Usage

```typescript
import { ChatRequest } from "@openrouter/sdk/models";

let value: ChatRequest = {
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant.",
    },
    {
      role: "user",
      content: "What is the capital of France?",
    },
  ],
};
```

Fields

| Field | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `provider` | `models.ChatRequestProvider` | | When multiple model providers are available, optionally indicate your routing preference. | |
| `plugins` | `models.ChatRequestPluginUnion[]` | | Plugins you want to enable for this request, including their settings. | |
| `user` | `string` | | Unique user identifier | `user-123` |
| `sessionId` | `string` | | A unique identifier for grouping related requests (e.g., a conversation or agent workflow) for observability. If provided in both the request body and the `x-session-id` header, the body value takes precedence. Maximum of 256 characters. | |
| `trace` | `models.ChatRequestTrace` | | Metadata for observability and tracing. Known keys (`trace_id`, `trace_name`, `span_name`, `generation_name`, `parent_span_id`) have special handling. Additional keys are passed through as custom metadata to configured broadcast destinations. | |
| `messages` | `models.ChatMessages[]` | ✔️ | List of messages for the conversation | `[{"role": "user", "content": "Hello!"}]` |
| `model` | `string` | | Model to use for completion | `openai/gpt-4` |
| `models` | `string[]` | | Models to use for completion | `["openai/gpt-4", "openai/gpt-4o"]` |
| `frequencyPenalty` | `number` | | Frequency penalty (-2.0 to 2.0) | `0` |
| `logitBias` | `Record<string, number>` | | Token logit bias adjustments | `{"50256": -100}` |
| `logprobs` | `boolean` | | Return log probabilities | `false` |
| `topLogprobs` | `number` | | Number of top log probabilities to return (0-20) | `5` |
| `maxCompletionTokens` | `number` | | Maximum tokens in completion | `100` |
| `maxTokens` | `number` | | Maximum tokens (deprecated, use `max_completion_tokens`). Note: some providers enforce a minimum of 16. | `100` |
| `metadata` | `Record<string, string>` | | Key-value pairs for additional object information (max 16 pairs, 64-char keys, 512-char values) | `{"user_id": "user-123", "session_id": "session-456"}` |
| `presencePenalty` | `number` | | Presence penalty (-2.0 to 2.0) | `0` |
| `reasoning` | `models.ChatRequestReasoning` | | Configuration options for reasoning models | `{"effort": "medium", "summary": "concise"}` |
| `responseFormat` | `models.ResponseFormat` | | Response format configuration | `{"type": "json_object"}` |
| `seed` | `number` | | Random seed for deterministic outputs | `42` |
| `stop` | `models.Stop` | | Stop sequences (up to 4) | `[""]` |
| `stream` | `boolean` | | Enable streaming response | `false` |
| `streamOptions` | `models.ChatStreamOptions` | | Streaming configuration options | `{"include_usage": true}` |
| `temperature` | `number` | | Sampling temperature (0-2) | `0.7` |
| `parallelToolCalls` | `boolean` | | Whether to enable parallel function calling during tool use. When true, the model may generate multiple tool calls in a single response. | `true` |
| `toolChoice` | `models.ChatToolChoice` | | Tool choice configuration | `auto` |
| `tools` | `models.ChatFunctionTool[]` | | Available tools for function calling | `[{"type": "function", "function": {"name": "get_weather", "description": "Get weather"}}]` |
| `topP` | `number` | | Nucleus sampling parameter (0-1) | `1` |
| `debug` | `models.ChatDebugOptions` | | Debug options for inspecting request transformations (streaming only) | `{"echo_upstream_body": true}` |
| `imageConfig` | `Record<string, models.ChatRequestImageConfig>` | | Provider-specific image configuration options. Keys and values vary by model/provider. See https://openrouter.ai/docs/guides/overview/multimodal/image-generation for more details. | `{"aspect_ratio": "16:9"}` |
| `modalities` | `models.Modality[]` | | Output modalities for the response. Supported values are `"text"`, `"image"`, and `"audio"`. | `["text", "image"]` |
| `cacheControl` | `models.AnthropicCacheControlDirective` | | N/A | |
| `serviceTier` | `models.ChatRequestServiceTier` | | The service tier to use for processing this request. | `auto` |
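To show how several of the fields above combine in one request, here is a hedged sketch that sets the model, sampling parameters, and function-calling tools together. So the snippet compiles on its own, it mirrors a small subset of the documented shape with a local illustrative interface (`ChatRequestSketch`); in application code you would import the real `ChatRequest` type from `@openrouter/sdk/models` as in the Example Usage above.

```typescript
// Illustrative sketch only: a minimal local interface mirroring a subset of
// the documented ChatRequest fields, so this snippet stands alone.
// In real code, import ChatRequest from "@openrouter/sdk/models" instead.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

type FunctionTool = {
  type: "function";
  function: { name: string; description?: string };
};

interface ChatRequestSketch {
  messages: ChatMessage[];      // the only required field
  model?: string;
  temperature?: number;         // sampling temperature (0-2)
  maxCompletionTokens?: number; // cap on completion length
  stop?: string[];              // up to 4 stop sequences
  stream?: boolean;
  tools?: FunctionTool[];       // available tools for function calling
  parallelToolCalls?: boolean;  // allow multiple tool calls per response
}

const request: ChatRequestSketch = {
  model: "openai/gpt-4",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is the weather in Paris?" },
  ],
  temperature: 0.7,
  maxCompletionTokens: 100,
  tools: [
    {
      type: "function",
      function: { name: "get_weather", description: "Get weather" },
    },
  ],
  parallelToolCalls: true,
};
```

Only `messages` is mandatory; every other field narrows routing, sampling, or tool behavior and can be omitted to accept the defaults.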