Liquid: LFM 7B

liquid/lfm-7b

Created Jan 25, 2025 · 32,768 context
$0.01/M input tokens · $0.01/M output tokens

Providers for LFM 7B

OpenRouter routes requests to the best providers able to handle your prompt size and parameters, with fallbacks to maximize uptime.

Context: 33K tokens
Max Output: 33K tokens
Input: $0.01/M tokens
Output: $0.01/M tokens
Latency: 0.55s
Throughput: 35.19 t/s


Recent activity on LFM 7B

Tokens processed per day

(Chart: daily tokens processed, Jan 25 – Mar 11; y-axis from 0 to 800M.)

Uptime stats for LFM 7B

Uptime is tracked for the single provider currently serving this model.

Sample code and API for LFM 7B

OpenRouter normalizes requests and responses across providers for you.

OpenRouter provides an OpenAI-compatible completion API for 300+ models and providers, which you can call directly or via the OpenAI SDK. Some third-party SDKs are also available.

In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.

from openai import OpenAI

client = OpenAI(
  base_url="https://openrouter.ai/api/v1",
  api_key="<OPENROUTER_API_KEY>",
)

completion = client.chat.completions.create(
  extra_headers={
    "HTTP-Referer": "<YOUR_SITE_URL>", # Optional. Site URL for rankings on openrouter.ai.
    "X-Title": "<YOUR_SITE_NAME>", # Optional. Site title for rankings on openrouter.ai.
  },
  extra_body={},
  model="liquid/lfm-7b",
  messages=[
    {
      "role": "user",
      "content": "What is the meaning of life?"
    }
  ]
)
print(completion.choices[0].message.content)
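Because the API is OpenAI-compatible HTTP, the same request can also be made without any SDK. A minimal sketch using only the Python standard library — the API key and the optional leaderboard headers are placeholders, and the actual send is left commented out:

```python
import json
import urllib.request

# Request body for the chat completions endpoint.
payload = {
    "model": "liquid/lfm-7b",
    "messages": [{"role": "user", "content": "What is the meaning of life?"}],
}

headers = {
    "Authorization": "Bearer <OPENROUTER_API_KEY>",
    "Content-Type": "application/json",
    # Optional headers for the OpenRouter leaderboards:
    "HTTP-Referer": "<YOUR_SITE_URL>",
    "X-Title": "<YOUR_SITE_NAME>",
}

request = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers=headers,
)

# Uncomment to send the request with a real API key:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response)["choices"][0]["message"]["content"])
```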

Using third-party SDKs

For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.

See the Request docs for all possible fields, and the Parameters docs for explanations of specific sampling parameters.
