# PublicEndpoint - TypeScript SDK

PublicEndpoint model reference
The TypeScript SDK and docs are currently in beta. Report issues on GitHub.
Information about a specific model endpoint.
## Example Usage
```typescript
import { PublicEndpoint } from "@openrouter/sdk/models";

let value: PublicEndpoint = {
  name: "OpenAI: GPT-4",
  modelId: "openai/gpt-4",
  modelName: "GPT-4",
  contextLength: 8192,
  pricing: {
    prompt: "0.00003",
    completion: "0.00006",
  },
  providerName: "OpenAI",
  tag: "openai",
  quantization: "fp16",
  maxCompletionTokens: 4096,
  maxPromptTokens: 8192,
  supportedParameters: [
    "temperature",
    "top_p",
    "max_tokens",
  ],
  uptimeLast30m: 99.5,
  supportsImplicitCaching: true,
  latencyLast30m: {
    p50: 0.25,
    p75: 0.35,
    p90: 0.48,
    p99: 0.85,
  },
  throughputLast30m: {
    p50: 45.2,
    p75: 38.5,
    p90: 28.3,
    p99: 15.1,
  },
};
```
## Fields
| Field | Type | Required | Description | Example |
|---|---|---|---|---|
| name | string | ✔️ | N/A | |
| modelId | string | ✔️ | The unique identifier for the model (permaslug) | openai/gpt-4 |
| modelName | string | ✔️ | N/A | |
| contextLength | number | ✔️ | N/A | |
| pricing | models.Pricing | ✔️ | N/A | |
| providerName | models.ProviderName | ✔️ | N/A | OpenAI |
| tag | string | ✔️ | N/A | |
| quantization | models.PublicEndpointQuantization | ✔️ | N/A | fp16 |
| maxCompletionTokens | number | ✔️ | N/A | |
| maxPromptTokens | number | ✔️ | N/A | |
| supportedParameters | models.Parameter[] | ✔️ | N/A | |
| status | models.EndpointStatus | ➖ | N/A | 0 |
| uptimeLast30m | number | ✔️ | N/A | |
| supportsImplicitCaching | boolean | ✔️ | N/A | |
| latencyLast30m | models.PercentileStats | ✔️ | Latency percentiles in milliseconds over the last 30 minutes. Latency measures time to first token. Only visible when authenticated with an API key or cookie; returns null for unauthenticated requests. | |
| throughputLast30m | models.PercentileStats | ✔️ | N/A | |
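The `pricing` values in the example above are decimal strings rather than numbers. As a minimal sketch of how they might be consumed, the helper below estimates a request's cost from token counts; it assumes the strings denote USD per token (an assumption to confirm against the API documentation), and uses a local `PricingLike` shape rather than importing `models.Pricing` so it runs without the SDK installed:

```typescript
// Local stand-in for models.Pricing; the real SDK type may have more fields.
interface PricingLike {
  prompt: string;     // price per prompt token, as a decimal string (assumed USD)
  completion: string; // price per completion token, as a decimal string (assumed USD)
}

// Hypothetical helper (not part of the SDK): multiply per-token prices
// by token counts to estimate the total cost of one request.
function estimateCostUsd(
  pricing: PricingLike,
  promptTokens: number,
  completionTokens: number,
): number {
  return (
    Number(pricing.prompt) * promptTokens +
    Number(pricing.completion) * completionTokens
  );
}

// With the example pricing, 1000 prompt + 500 completion tokens:
const cost = estimateCostUsd(
  { prompt: "0.00003", completion: "0.00006" },
  1000,
  500,
);
```

Parsing with `Number()` is adequate for display-level estimates; for billing-grade arithmetic a decimal library would avoid floating-point drift.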