The preview GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Dec 2023.
Note: heavily rate limited by OpenAI while in preview.
Uptime stats for GPT-4 Turbo Preview across all providers
Sample code and API for GPT-4 Turbo Preview
OpenRouter normalizes requests and responses across providers for you.
To get started, you can call GPT-4 Turbo Preview via the API like this:
fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${OPENROUTER_API_KEY}`,
    "HTTP-Referer": `${YOUR_SITE_URL}`, // Optional, for including your app on openrouter.ai rankings.
    "X-Title": `${YOUR_SITE_NAME}`, // Optional. Shows in rankings on openrouter.ai.
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    "model": "openai/gpt-4-turbo-preview",
    "messages": [
      { "role": "user", "content": "What is the meaning of life?" }
    ]
  })
});
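Because OpenRouter normalizes responses to OpenAI's chat completion schema, you read the reply the same way regardless of provider. A minimal sketch, using an illustrative hand-written payload rather than a real API response:

```javascript
// Hypothetical response object, shaped like the normalized
// chat completion schema (values here are made up for illustration).
const response = {
  id: "gen-abc123",
  model: "openai/gpt-4-turbo-preview",
  choices: [
    {
      message: { role: "assistant", content: "42." },
      finish_reason: "stop",
    },
  ],
};

// The assistant's reply lives on the first choice's message.
const reply = response.choices[0].message.content;
console.log(reply); // "42."
```

In a real call you would obtain `response` from `await res.json()` on the fetch result above.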
You can also use OpenRouter with OpenAI's client API:
import OpenAI from "openai"

const openai = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: $OPENROUTER_API_KEY,
  defaultHeaders: {
    "HTTP-Referer": $YOUR_SITE_URL, // Optional, for including your app on openrouter.ai rankings.
    "X-Title": $YOUR_SITE_NAME, // Optional. Shows in rankings on openrouter.ai.
  },
})

async function main() {
  const completion = await openai.chat.completions.create({
    model: "openai/gpt-4-turbo-preview",
    messages: [
      { "role": "user", "content": "What is the meaning of life?" }
    ],
  })
  console.log(completion.choices[0].message)
}

main()
See the Request docs for all possible parameters, and Parameters for recommended values.