
NVIDIA: Llama 3.1 Nemotron Ultra 253B v1

nvidia/llama-3.1-nemotron-ultra-253b-v1

Created Apr 8, 2025 · 131,072-token context

Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta’s Llama-3.1-405B-Instruct, it has been significantly customized using Neural Architecture Search (NAS), resulting in enhanced efficiency, reduced memory usage, and improved inference latency. The model supports a context length of up to 128K tokens and can operate efficiently on an 8x NVIDIA H100 node.

Note: you must include the string "detailed thinking on" in the system prompt to enable reasoning. See Usage Recommendations for details.
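
As a concrete illustration, the sketch below sends "detailed thinking on" as the system prompt through OpenRouter's OpenAI-compatible chat completions endpoint. The model slug is the one listed above; the API key location, the `requests` dependency, and the example question are assumptions, so treat this as a minimal sketch rather than the official usage recipe.

```python
# Minimal sketch: enabling reasoning by sending "detailed thinking on"
# as the system prompt. Assumes an API key in OPENROUTER_API_KEY.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "nvidia/llama-3.1-nemotron-ultra-253b-v1",
        "messages": [
            # The system prompt toggles the model's reasoning mode.
            {"role": "system", "content": "detailed thinking on"},
            {"role": "user", "content": "How many primes are below 100?"},
        ],
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
```

Omitting that system line leaves reasoning disabled, per the note above.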


Recent activity on Llama 3.1 Nemotron Ultra 253B v1

Total usage per day on OpenRouter

  • Prompt: 97.9M
  • Completion: 1.43M
  • Reasoning: 0

Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.
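
To make those three categories concrete, the snippet below reads the usage object that an OpenAI-compatible chat completion response returns. The prompt_tokens and completion_tokens fields follow the standard schema; the reasoning-token field name varies by provider and is an assumption here, and the values are illustrative rather than real measurements.

```python
# Minimal sketch: mapping an OpenAI-compatible "usage" object onto the
# three token categories above. Values are illustrative only.
usage = {
    "prompt_tokens": 412,      # input size
    "completion_tokens": 96,   # total output length
    # "reasoning_tokens" is provider-specific and may be absent.
}
print("prompt:    ", usage["prompt_tokens"])
print("completion:", usage["completion_tokens"])
print("reasoning: ", usage.get("reasoning_tokens", 0))
```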