ChatRequest - Go SDK

ChatRequest type definition

The Go SDK and docs are currently in beta. Report issues on GitHub.

Chat completion request parameters

Fields

| Field | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `Provider` | `optionalnullable.OptionalNullable[components.ProviderPreferences]` | | When multiple model providers are available, optionally indicate your routing preference. | `{"allow_fallbacks": true}` |
| `Plugins` | `[]components.ChatRequestPlugin` | | Plugins you want to enable for this request, including their settings. | |
| `User` | `*string` | | Unique user identifier. | `user-123` |
| `SessionID` | `*string` | | A unique identifier for grouping related requests (e.g., a conversation or agent workflow) for observability. If provided in both the request body and the `x-session-id` header, the body value takes precedence. Maximum of 256 characters. | |
| `Trace` | `*components.TraceConfig` | | Metadata for observability and tracing. Known keys (`trace_id`, `trace_name`, `span_name`, `generation_name`, `parent_span_id`) have special handling. Additional keys are passed through as custom metadata to configured broadcast destinations. | `{"trace_id": "trace-abc123", "trace_name": "my-app-trace"}` |
| `Messages` | `[]components.ChatMessages` | ✔️ | List of messages for the conversation. | `[{"role": "user", "content": "Hello!"}]` |
| `Model` | `*string` | | Model to use for completion. | `openai/gpt-4` |
| `Models` | `[]string` | | Models to use for completion. | `["openai/gpt-4", "openai/gpt-4o"]` |
| `FrequencyPenalty` | `*float64` | | Frequency penalty (-2.0 to 2.0). | `0` |
| `LogitBias` | `optionalnullable.OptionalNullable[map[string]float64]` | | Token logit bias adjustments. | `{"50256": -100}` |
| `Logprobs` | `optionalnullable.OptionalNullable[bool]` | | Return log probabilities. | `false` |
| `TopLogprobs` | `*int64` | | Number of top log probabilities to return (0-20). | `5` |
| `MaxCompletionTokens` | `*int64` | | Maximum tokens in completion. | `100` |
| `MaxTokens` | `*int64` | | Maximum tokens (deprecated; use `max_completion_tokens`). Note: some providers enforce a minimum of 16. | `100` |
| `Metadata` | `map[string]string` | | Key-value pairs for additional object information (max 16 pairs, 64-character keys, 512-character values). | `{"user_id": "user-123", "session_id": "session-456"}` |
| `PresencePenalty` | `*float64` | | Presence penalty (-2.0 to 2.0). | `0` |
| `Reasoning` | `*components.Reasoning` | | Configuration options for reasoning models. | `{"effort": "medium", "summary": "concise"}` |
| `ResponseFormat` | `*components.ResponseFormat` | | Response format configuration. | `{"type": "json_object"}` |
| `Seed` | `*int64` | | Random seed for deterministic outputs. | `42` |
| `Stop` | `optionalnullable.OptionalNullable[components.Stop]` | | Stop sequences (up to 4). | `[""]` |
| `Stream` | `*bool` | | Enable streaming response. | `false` |
| `StreamOptions` | `optionalnullable.OptionalNullable[components.ChatStreamOptions]` | | Streaming configuration options. | `{"include_usage": true}` |
| `Temperature` | `*float64` | | Sampling temperature (0-2). | `0.7` |
| `ParallelToolCalls` | `optionalnullable.OptionalNullable[bool]` | | Whether to enable parallel function calling during tool use. When true, the model may generate multiple tool calls in a single response. | `true` |
| `ToolChoice` | `*components.ChatToolChoice` | | Tool choice configuration. | `auto` |
| `Tools` | `[]components.ChatFunctionTool` | | Available tools for function calling. | `[{"type": "function", "function": {"name": "get_weather", "description": "Get weather"}}]` |
| `TopP` | `*float64` | | Nucleus sampling parameter (0-1). | `1` |
| `Debug` | `*components.ChatDebugOptions` | | Debug options for inspecting request transformations (streaming only). | `{"echo_upstream_body": true}` |
| `ImageConfig` | `map[string]components.ChatRequestImageConfig` | | Provider-specific image configuration options. Keys and values vary by model/provider. See https://openrouter.ai/docs/guides/overview/multimodal/image-generation for more details. | `{"aspect_ratio": "16:9"}` |
| `Modalities` | `[]components.Modality` | | Output modalities for the response. Supported values are "text", "image", and "audio". | `["text", "image"]` |
| `CacheControl` | `*components.AnthropicCacheControlDirective` | | N/A | `{"type": "ephemeral"}` |
| `ServiceTier` | `optionalnullable.OptionalNullable[components.ChatRequestServiceTier]` | | The service tier to use for processing this request. | `auto` |