Hermes Agent

Use Hermes Agent by Nous Research with OpenRouter

What is Hermes Agent?

Hermes Agent is an open-source, terminal-native autonomous coding and task agent built by Nous Research. It features persistent memory, agent-created skills, and a messaging gateway that supports 21+ platforms including Telegram, Discord, Slack, WhatsApp, Signal, SMS, Matrix, and more.

Hermes runs on local, Docker, SSH, Daytona, Modal, Vercel Sandbox, or Singularity backends and works with multiple LLM providers — including OpenRouter for multi-model access through a single API key.

Setup

The easiest way to configure Hermes with OpenRouter:

$ hermes model

Select OpenRouter from the provider list, enter your API key, and choose your preferred model. This is the recommended approach for new users.

Quick Start (Environment Variable)

If you already have your OpenRouter API key:

$ hermes config set OPENROUTER_API_KEY sk-or-...

Then start a chat:

$ hermes chat --provider openrouter --model anthropic/claude-sonnet-4

Manual Configuration

Advanced users only: the following steps edit the config files directly. Most users should use the hermes model command described above.

Step 1: Get Your OpenRouter API Key

  1. Sign up or log in at OpenRouter
  2. Navigate to your API Keys page
  3. Create a new API key
  4. Copy your key (starts with sk-or-...)

Step 2: Set Your API Key

Add your OpenRouter API key to ~/.hermes/.env:

OPENROUTER_API_KEY=sk-or-...

Hermes separates secrets from non-secret settings. API keys go in ~/.hermes/.env, while model and provider configuration goes in ~/.hermes/config.yaml.

Step 3: Configure Your Model

Edit ~/.hermes/config.yaml:

model:
  provider: openrouter
  default: anthropic/claude-sonnet-4

Browse all available models at openrouter.ai/models.

Step 4: Start Hermes

$ hermes        # classic CLI
$ hermes --tui  # modern TUI

Your agent will now route all requests through OpenRouter to your chosen model.

Model Format

When using OpenRouter as a provider, Hermes uses the standard OpenRouter model format <author>/<slug>:

  • anthropic/claude-sonnet-4
  • google/gemini-3-flash-preview
  • deepseek/deepseek-chat
  • openrouter/auto (auto-routes to an optimal/best-fit model for your prompt)

You can find the exact model ID for each model on the OpenRouter models page.
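As a quick illustration — not Hermes's actual validation logic — the <author>/<slug> shape, with an optional :variant suffix such as :nitro or :floor, can be checked with a small regex:

```python
# Sketch: validate OpenRouter-style model IDs of the form <author>/<slug>,
# with an optional :variant suffix (e.g. :nitro, :floor).
import re

MODEL_ID = re.compile(r"[\w.\-]+/[\w.\-]+(?::[\w\-]+)?")

def is_valid_model_id(model_id: str) -> bool:
    """Return True if model_id matches the <author>/<slug>[:variant] shape."""
    return MODEL_ID.fullmatch(model_id) is not None
```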

Provider Routing

OpenRouter routes your requests across multiple infrastructure providers for each model. You can control this routing behavior in ~/.hermes/config.yaml:

provider_routing:
  sort: "throughput"                 # "price" (default), "throughput", or "latency"
  # only: ["anthropic"]              # Only use these providers
  # ignore: ["deepinfra"]            # Skip these providers
  # order: ["anthropic", "google"]   # Try providers in this order
  # data_collection: "deny"          # Exclude providers that may store/train on data

Shortcuts: Append :nitro to any model name for throughput sorting (e.g., anthropic/claude-sonnet-4:nitro), or :floor for price sorting.

For a full breakdown of routing options, see the Provider Routing docs.
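These keys mirror fields on the provider object of an OpenRouter chat-completions request. The sketch below is an assumption about how such settings could be serialized into a request body — it is not Hermes's actual code:

```python
# Sketch (assumption): mapping Hermes's provider_routing keys onto the
# `provider` object in an OpenRouter /chat/completions request body.
def build_request_body(model: str, prompt: str, routing: dict) -> dict:
    """Assemble a chat-completions payload with provider routing preferences."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Pass routing preferences straight through; OpenRouter's API accepts
        # sort / only / ignore / order / data_collection on `provider`.
        "provider": {k: v for k, v in routing.items() if v is not None},
    }
    return body

body = build_request_body(
    "anthropic/claude-sonnet-4",
    "Hello",
    {"sort": "throughput", "ignore": ["deepinfra"], "data_collection": "deny"},
)
```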

Fallback Providers

Configure a chain of backup providers Hermes tries when the primary model fails:

fallback_providers:
  - provider: openrouter
    model: anthropic/claude-sonnet-4
  - provider: openrouter
    model: google/gemini-2.5-flash

This provides an additional layer of reliability. When activated, the fallback swaps the model mid-session without losing your conversation.
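The retry behavior can be pictured as a simple loop over the chain. This is an illustrative sketch only — call_model is a hypothetical stand-in for whatever function actually sends the request:

```python
# Minimal sketch of fallback-chain logic: try each (provider, model) pair in
# order, returning the first successful result; re-raise the last error if
# every entry fails. Not Hermes's actual implementation.
def with_fallback(chain, call_model):
    last_error = None
    for entry in chain:
        try:
            return call_model(entry["provider"], entry["model"])
        except Exception as exc:  # in practice, catch provider-specific errors
            last_error = exc
    raise last_error
```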

Auxiliary Models

Hermes uses “auxiliary models” for side tasks like context compression, vision analysis, session titles, and web summarization. By default these use your main model, but you can route them to cheaper models via OpenRouter:

auxiliary:
  title:
    provider: openrouter
    model: google/gemini-2.5-flash
  vision:
    provider: openrouter
    model: google/gemini-2.5-flash
  compression:
    provider: openrouter
    model: google/gemini-2.5-flash

This keeps your main model focused on complex reasoning while cheaper models handle lightweight tasks.
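The resolution rule — use the task-specific entry when configured, otherwise fall back to the main model — can be sketched as follows (illustrative only, not Hermes's actual code):

```python
# Sketch: resolve which (provider, model) pair to use for an auxiliary task
# such as "title", "vision", or "compression". If the task has no entry under
# auxiliary:, fall back to the main model: block.
def resolve_model(config: dict, task: str) -> tuple:
    aux = config.get("auxiliary", {}).get(task)
    if aux:
        return aux["provider"], aux["model"]
    return config["model"]["provider"], config["model"]["default"]
```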

Pareto Code Router

OpenRouter’s experimental coding-model router auto-routes requests to the cheapest model meeting a coding-quality threshold. Configure it in ~/.hermes/config.yaml:

model:
  provider: openrouter
  model: openrouter/pareto-code

openrouter:
  min_coding_score: 0.65  # 0.0–1.0; higher = stronger (more expensive) coders

This is useful for cost optimization on coding tasks — the router picks the cheapest model that meets your quality bar.

Hermes uses its own openrouter: config key to set min_coding_score. This maps to the plugins array in the OpenRouter API — you don’t need to construct the plugins payload yourself.
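As a hypothetical illustration of that mapping — the actual plugin id and field names are assumptions here, since the exact payload Hermes emits is not documented on this page:

```python
# Hypothetical sketch: translating the openrouter.min_coding_score config
# value into a `plugins` array entry for the request body. The "pareto-code"
# id and "min_coding_score" field name are assumed, not confirmed.
def build_pareto_plugins(min_coding_score: float) -> list:
    if not 0.0 <= min_coding_score <= 1.0:
        raise ValueError("min_coding_score must be between 0.0 and 1.0")
    return [{"id": "pareto-code", "min_coding_score": min_coding_score}]
```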

Monitoring Usage

Track your Hermes usage in real-time:

  1. Visit the OpenRouter Activity Dashboard
  2. See requests, costs, and token usage across all your Hermes sessions
  3. Filter by model, time range, or other criteria

Common Errors

“No API key” or provider not found

Hermes can’t find your OpenRouter API key.

Fix:

  1. Verify the key is set: cat ~/.hermes/.env | grep OPENROUTER
  2. Or re-run: hermes config set OPENROUTER_API_KEY sk-or-...
  3. Or use the interactive setup: hermes model

Authentication errors (401/403)

Fix:

  1. Verify your API key is valid at openrouter.ai/keys
  2. Check that you have sufficient credits in your account
  3. Ensure your key hasn’t expired or been revoked

Model not working

Fix:

  1. Verify the model ID on the OpenRouter models page
  2. Use the format <author>/<slug> (e.g., anthropic/claude-sonnet-4)
  3. Ensure the model is available and not deprecated

Context length errors

Hermes requires a model with at least 64K context tokens. Models with smaller context windows will be rejected at startup, since the system prompt and tool schemas can fill smaller windows and leave no room for conversation. If you see context-related errors, switch to a model with a larger context window.
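OpenRouter's GET /api/v1/models endpoint reports a context_length for each model, so you can pre-filter candidates before pointing Hermes at one. A minimal sketch (the sample entries below are illustrative):

```python
# Sketch: filter a model list (shaped like entries from OpenRouter's
# GET /api/v1/models response) down to those meeting a 64K-token minimum.
MIN_CONTEXT = 64_000  # Hermes's stated minimum context window

def usable_models(models: list, min_context: int = MIN_CONTEXT) -> list:
    """Return IDs of models whose context window meets the minimum."""
    return [m["id"] for m in models if m.get("context_length", 0) >= min_context]

# Illustrative sample data, not live API output:
sample = [
    {"id": "anthropic/claude-sonnet-4", "context_length": 200_000},
    {"id": "example/small-model", "context_length": 8_192},
]
```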

Resources