Speech-to-Text

How to transcribe audio into text with OpenRouter models

OpenRouter supports speech-to-text (STT) via a dedicated /api/v1/audio/transcriptions endpoint. Send base64-encoded audio and receive a JSON response with the transcribed text and usage statistics.

Model Discovery

You can find STT models in several ways:

Via the API

Use the output_modalities query parameter on the Models API to discover STT models:

# List only STT models
curl "https://openrouter.ai/api/v1/models?output_modalities=transcription"

On the Models Page

Visit the Models page and filter by output modalities to find models capable of audio transcription. You can also browse the Speech-to-Text collection for a curated list.

API Usage

Send a POST request to /api/v1/audio/transcriptions with a JSON body containing base64-encoded audio. The response is JSON with the transcribed text and optional usage statistics.

Basic Example

import { OpenRouter } from '@openrouter/sdk';
import fs from 'fs';

const openRouter = new OpenRouter({
  apiKey: '{{API_KEY_REF}}',
});

const audioBuffer = await fs.promises.readFile('audio.wav');
const base64Audio = audioBuffer.toString('base64');

const result = await openRouter.stt.createTranscription({
  sttRequest: {
    model: '{{MODEL}}',
    inputAudio: {
      data: base64Audio,
      format: 'wav',
    },
  },
});

console.log(result.text);

Request Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | The STT model to use (e.g., openai/whisper-1) |
| input_audio | object | Yes | Audio data to transcribe |
| input_audio.data | string | Yes | Base64-encoded audio data (raw bytes, not a data URI) |
| input_audio.format | string | Yes | Audio format (e.g., wav, mp3, flac, m4a, ogg, webm, aac) |
| language | string | No | ISO-639-1 language code (e.g., "en", "ja"). Auto-detected if omitted |
| temperature | number | No | Sampling temperature between 0 and 1. Lower values produce more deterministic results |
| provider | object | No | Provider-specific passthrough configuration |
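As a sketch, the parameters above can be assembled into a request body like the following. The SttRequestBody type and buildBody helper are illustrative, not part of the SDK:

```typescript
// Illustrative request-body shape mirroring the parameter table.
interface SttRequestBody {
  model: string;
  input_audio: { data: string; format: string };
  language?: string; // optional ISO-639-1 hint
  temperature?: number; // optional, between 0 and 1
}

// Hypothetical helper: fills the required fields and merges optional ones.
function buildBody(
  base64Audio: string,
  opts: Partial<Pick<SttRequestBody, 'language' | 'temperature'>> = {},
): SttRequestBody {
  return {
    model: 'openai/whisper-1',
    input_audio: { data: base64Audio, format: 'wav' },
    ...opts,
  };
}

const body = buildBody('UklGRiQA', { language: 'en', temperature: 0 });
console.log(JSON.stringify(body));
```

The resulting object is what you would send as the JSON body of the POST request.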

Provider-Specific Options

You can pass provider-specific options using the provider parameter. Options are keyed by provider slug, and only the options for the matched provider are forwarded:

{
  "model": "openai/whisper-large-v3",
  "input_audio": {
    "data": "UklGRiQA...",
    "format": "wav"
  },
  "provider": {
    "options": {
      "groq": {
        "prompt": "Expected vocabulary: OpenRouter, API, transcription"
      }
    }
  }
}

Response Format

The STT endpoint returns a JSON response with the transcribed text:

{
  "text": "Hello, this is a test of speech-to-text transcription.",
  "usage": {
    "seconds": 9.2,
    "total_tokens": 113,
    "input_tokens": 83,
    "output_tokens": 30,
    "cost": 0.000508
  }
}

Response Fields

| Field | Type | Description |
|---|---|---|
| text | string | The transcribed text |
| usage.seconds | number | Duration of the input audio in seconds |
| usage.total_tokens | number | Total number of tokens used (input + output) |
| usage.input_tokens | number | Number of input tokens billed |
| usage.output_tokens | number | Number of output tokens generated |
| usage.cost | number | Total cost of the request in USD |
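When the usage block is present, total_tokens is the sum of input and output tokens. A small sketch for typing the response and sanity-checking that relationship (the interfaces here are illustrative):

```typescript
// Illustrative types mirroring the response fields above.
interface SttUsage {
  seconds: number;
  total_tokens: number;
  input_tokens: number;
  output_tokens: number;
  cost: number;
}
interface SttResponse {
  text: string;
  usage?: SttUsage; // usage statistics are optional
}

// Check that the reported token totals are internally consistent.
function usageIsConsistent(r: SttResponse): boolean {
  if (!r.usage) return true; // nothing to check
  return r.usage.total_tokens === r.usage.input_tokens + r.usage.output_tokens;
}

const sample: SttResponse = {
  text: 'Hello, this is a test of speech-to-text transcription.',
  usage: { seconds: 9.2, total_tokens: 113, input_tokens: 83, output_tokens: 30, cost: 0.000508 },
};
console.log(usageIsConsistent(sample)); // true (83 + 30 = 113)
```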

Response Headers

| Header | Description |
|---|---|
| X-Generation-Id | Unique generation ID for the request, useful for tracking and debugging |

Supported Audio Formats

Supported audio formats vary by provider. Common formats include:

| Format | MIME Type | Description |
|---|---|---|
| wav | audio/wav | Uncompressed audio, highest quality |
| mp3 | audio/mpeg | Compressed audio, widely compatible |
| flac | audio/flac | Lossless compressed audio |
| m4a | audio/mp4 | MPEG-4 audio |
| ogg | audio/ogg | Ogg Vorbis audio |
| webm | audio/webm | WebM audio, common in browser recordings |
| aac | audio/aac | Advanced Audio Coding |
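For client-side validation or setting a recorder's output type, the table above can be encoded as a lookup. A sketch; formats outside this list may still be accepted by some providers:

```typescript
// Format-to-MIME lookup mirroring the table of common formats.
const STT_MIME_TYPES: Record<string, string> = {
  wav: 'audio/wav',
  mp3: 'audio/mpeg',
  flac: 'audio/flac',
  m4a: 'audio/mp4',
  ogg: 'audio/ogg',
  webm: 'audio/webm',
  aac: 'audio/aac',
};

// Derive the input_audio.format value from a filename, if recognized.
function formatFromFilename(name: string): string | undefined {
  const ext = name.split('.').pop()?.toLowerCase();
  return ext && ext in STT_MIME_TYPES ? ext : undefined;
}

console.log(formatFromFilename('meeting.MP3')); // 'mp3'
console.log(formatFromFilename('notes.txt')); // undefined
```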

Pricing

STT models use different pricing strategies depending on the provider:

  • Duration-based (e.g., OpenAI Whisper): Priced per second of audio input
  • Token-based (e.g., newer OpenAI models): Priced per input/output token, similar to text models

You can check the cost for each model on the Models page or via the Models API. The usage.cost field in the response shows the actual cost for each request.
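The two strategies reduce to simple arithmetic. In this sketch the rates are placeholders, not real OpenRouter prices; read actual per-model rates from the Models API:

```typescript
// Duration-based pricing: cost scales with seconds of input audio.
function durationCost(seconds: number, pricePerSecond: number): number {
  return seconds * pricePerSecond;
}

// Token-based pricing: input and output tokens billed at separate rates.
function tokenCost(
  inputTokens: number,
  outputTokens: number,
  inputRate: number,
  outputRate: number,
): number {
  return inputTokens * inputRate + outputTokens * outputRate;
}

// e.g. 9.2 s of audio at a hypothetical $0.0001/s ≈ $0.00092
console.log(durationCost(9.2, 0.0001));
// e.g. 83 input + 30 output tokens at hypothetical per-token rates
console.log(tokenCost(83, 30, 0.000004, 0.000006));
```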

BYOK (Bring Your Own Key)

STT supports BYOK, allowing you to use your own provider API keys. When configured, requests are routed directly to the provider using your key, and OpenRouter charges only its platform fee rather than the per-usage model cost.

Playground

You can test STT models directly in the browser using the OpenRouter Playground. Navigate to any STT model’s page and use the playground tab to upload an audio file and see the transcription result.

Differences from Audio Input

OpenRouter supports two ways to process audio:

  1. Speech-to-Text (this page): A dedicated /api/v1/audio/transcriptions endpoint optimized for transcription. Returns structured JSON with the transcribed text and usage data. Best for converting audio to text.

  2. Audio input via Chat Completions (Audio docs): Send audio as part of a /api/v1/chat/completions request using the input_audio content type. The model processes the audio alongside text and responds conversationally. Best for audio analysis, question answering about audio content, or combining audio with other modalities.
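For comparison, an audio-input chat request body might look like this sketch. The input_audio content-part shape follows the OpenAI-compatible convention and the model slug is a placeholder, so check the Audio docs for the exact fields:

```typescript
// Sketch of audio inside a chat completion request (field names assumed
// from the OpenAI-compatible input_audio content part; verify in the Audio docs).
const chatBody = {
  model: 'some-provider/audio-capable-model', // placeholder slug
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is said in this recording?' },
        { type: 'input_audio', input_audio: { data: 'UklGRiQA', format: 'wav' } },
      ],
    },
  ],
};
console.log(chatBody.messages[0].content.length); // 2
```

Unlike the transcription endpoint, the model here responds conversationally rather than returning a bare transcript.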

Best Practices

  • Choose the right format: WAV provides the best quality for transcription. MP3 and other compressed formats work well but may slightly reduce accuracy on low-quality or noisy recordings
  • File size: For very long audio files, consider splitting them into smaller segments. The upstream provider timeout is 60 seconds, so very large files may time out
  • Base64 encoding: Audio must be sent as base64-encoded data (raw bytes, not a data URI). Most programming languages have built-in base64 encoding utilities
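Since the endpoint expects raw base64 rather than a data URI, a defensive sketch that strips a data-URI prefix if one sneaks in (the toRawBase64 helper is illustrative):

```typescript
import { Buffer } from 'node:buffer';

// Strip a data-URI prefix, if present, leaving only the raw base64 payload.
function toRawBase64(input: string): string {
  const match = input.match(/^data:audio\/[\w.+-]+;base64,(.*)$/s);
  return match ? match[1] : input;
}

const encoded = Buffer.from('RIFF').toString('base64'); // 'UklGRg=='
console.log(toRawBase64(`data:audio/wav;base64,${encoded}`) === encoded); // true
console.log(toRawBase64(encoded) === encoded); // already raw: unchanged
```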

Troubleshooting

Empty or incorrect transcription?

  • Verify the audio format matches the format field in your request
  • Ensure the audio quality is sufficient for transcription

Request timing out?

  • Large audio files may exceed the 60-second timeout. Split long recordings into smaller segments
  • Compressed formats (MP3, AAC) produce smaller payloads and transfer faster

Model not found?

  • Use the Models page or the Models API with output_modalities=transcription to find available STT models
  • Verify the model slug is correct (e.g., openai/whisper-1, not whisper-1)
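A quick shape check catches the second mistake before the request is sent. A sketch; the regex only validates the vendor/model pattern, not that the slug actually exists:

```typescript
// Rough check that a slug has the vendor/model shape, e.g. 'openai/whisper-1'.
function looksLikeModelSlug(s: string): boolean {
  return /^[\w.-]+\/[\w.:-]+$/.test(s);
}

console.log(looksLikeModelSlug('openai/whisper-1')); // true
console.log(looksLikeModelSlug('whisper-1')); // false: missing vendor prefix
```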

Authentication error?

  • Ensure you’re using a valid API key from your OpenRouter dashboard
  • The STT endpoint uses the same authentication as the Chat Completions API