Speech-to-Text
OpenRouter supports speech-to-text (STT) via a dedicated /api/v1/audio/transcriptions endpoint. Send base64-encoded audio and receive a JSON response with the transcribed text and usage statistics.
Model Discovery
You can find STT models in several ways:
Via the API
Use the output_modalities query parameter on the Models API to discover STT models:
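A minimal sketch of building the discovery query. The base URL and the exact query-string shape are assumptions; the output_modalities=transcription filter itself is the one documented on this page:

```python
from urllib.parse import urlencode

BASE = "https://openrouter.ai/api/v1/models"

def models_url(**filters):
    """Build a Models API URL with query filters such as output_modalities."""
    return f"{BASE}?{urlencode(filters)}" if filters else BASE

url = models_url(output_modalities="transcription")
# GET this URL with an "Authorization: Bearer <your API key>" header
# to list models that can produce transcriptions.
```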
On the Models Page
Visit the Models page and filter by output modalities to find models capable of audio transcription. You can also browse the Speech-to-Text collection for a curated list.
API Usage
Send a POST request to /api/v1/audio/transcriptions with a JSON body containing base64-encoded audio. The response is JSON with the transcribed text and optional usage statistics.
Basic Example
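A sketch of assembling the request body. The model field and the format field are referenced elsewhere on this page; the name of the base64 audio field ("audio" below) is an assumption:

```python
import base64

def build_transcription_request(model: str, audio_bytes: bytes, audio_format: str) -> dict:
    """Assemble a JSON body for POST /api/v1/audio/transcriptions."""
    return {
        "model": model,
        # Base64 of the raw file bytes -- not a data URI.
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
        "format": audio_format,
    }

body = build_transcription_request("openai/whisper-1", b"...", "wav")
# POST this body as JSON to https://openrouter.ai/api/v1/audio/transcriptions
# with your OpenRouter API key in the Authorization header.
```

Here b"..." stands in for the bytes of a real audio file, e.g. open("clip.wav", "rb").read().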
Request Parameters
Provider-Specific Options
You can pass provider-specific options using the provider parameter. Options are keyed by provider slug, and only the options for the matched provider are forwarded:
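For example, a body with provider options might look like the following sketch. The structure (options keyed by provider slug) is from this page; the "language" option itself is hypothetical:

```python
request_body = {
    "model": "openai/whisper-1",
    "audio": "<base64-encoded audio>",
    "format": "wav",
    # Keyed by provider slug; only the options for the provider that
    # actually serves the request are forwarded to it.
    "provider": {
        "openai": {"language": "en"},  # hypothetical provider option
    },
}
```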
Response Format
The STT endpoint returns a JSON response with the transcribed text:
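An illustrative parse of such a response. The text field and usage.cost are both mentioned on this page; any other fields a real response carries are not shown here:

```python
import json

# Example response body (shape illustrative, values made up).
sample = '{"text": "Hello, world.", "usage": {"cost": 0.0001}}'

response = json.loads(sample)
transcript = response["text"]            # the transcribed text
cost = response["usage"]["cost"]         # actual cost for this request
```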
Response Fields
Response Headers
Supported Audio Formats
Supported audio formats vary by provider. Common formats include WAV, MP3, and AAC.
Pricing
STT models use different pricing strategies depending on the provider:
- Duration-based (e.g., OpenAI Whisper): Priced per second of audio input
- Token-based (e.g., newer OpenAI models): Priced per input/output token, similar to text models
You can check the cost for each model on the Models page or via the Models API. The usage.cost field in the response shows the actual cost for each request.
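To make the two strategies concrete, here is a back-of-the-envelope comparison. The rates below are purely hypothetical; check the Models page or the usage.cost field for real numbers:

```python
# Hypothetical rates, for illustration only.
PER_SECOND_RATE = 0.0001   # duration-based pricing (per second of audio)
PER_TOKEN_RATE = 0.000002  # token-based pricing (per input/output token)

def duration_cost(seconds: float) -> float:
    """Cost of a clip under duration-based pricing."""
    return seconds * PER_SECOND_RATE

def token_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a request under token-based pricing."""
    return (input_tokens + output_tokens) * PER_TOKEN_RATE

print(duration_cost(90))        # a 90-second clip
print(token_cost(1200, 300))    # a request using 1500 tokens total
```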
BYOK (Bring Your Own Key)
STT supports BYOK, allowing you to use your own provider API keys. When configured, requests are routed directly to the provider using your key, and OpenRouter charges only its platform fee rather than the per-usage model cost.
Playground
You can test STT models directly in the browser using the OpenRouter Playground. Navigate to any STT model’s page and use the playground tab to upload an audio file and see the transcription result.
Differences from Audio Input
OpenRouter supports two ways to process audio:
- Speech-to-Text (this page): A dedicated /api/v1/audio/transcriptions endpoint optimized for transcription. Returns structured JSON with the transcribed text and usage data. Best for converting audio to text.
- Audio input via Chat Completions (Audio docs): Send audio as part of a /api/v1/chat/completions request using the input_audio content type. The model processes the audio alongside text and responds conversationally. Best for audio analysis, question answering about audio content, or combining audio with other modalities.
Best Practices
- Choose the right format: WAV provides the best quality for transcription. MP3 and other compressed formats work well but may slightly reduce accuracy for borderline audio
- File size: For very long audio files, consider splitting them into smaller segments. The upstream provider timeout is 60 seconds, so very large files may time out
- Base64 encoding: Audio must be sent as base64-encoded data (raw bytes, not a data URI). Most programming languages have built-in base64 encoding utilities
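The base64 step can be sketched in a couple of lines with the standard library. Note the value is the bare encoding of the raw bytes, not a data URI:

```python
import base64

# Raw file bytes -- in practice, open("clip.wav", "rb").read()
audio_bytes = b"RIFF....WAVE"

# Bare base64 string: send this as-is, NOT "data:audio/wav;base64,..."
encoded = base64.b64encode(audio_bytes).decode("ascii")
```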
Troubleshooting
Empty or incorrect transcription?
- Verify the audio format matches the format field in your request
- Ensure the audio quality is sufficient for transcription
Request timing out?
- Large audio files may exceed the 60-second timeout. Split long recordings into smaller segments
- Compressed formats (MP3, AAC) produce smaller payloads and transfer faster
Model not found?
- Use the Models page or the Models API with output_modalities=transcription to find available STT models
- Verify the model slug is correct (e.g., openai/whisper-1, not whisper-1)
Authentication error?
- Ensure you’re using a valid API key from your OpenRouter dashboard
- The STT endpoint uses the same authentication as the Chat Completions API