Llama-3.1-Nemotron-Nano-8B-v1 is a compact large language model (LLM) derived from Meta's Llama-3.1-8B-Instruct, specifically optimized for reasoning tasks, conversational interactions, retrieval-augmented generation (RAG), and tool-calling applications. It balances accuracy and efficiency, fitting comfortably onto a single consumer-grade RTX GPU for local deployment. The model supports an extended context length of up to 128K tokens.
Note: to enable reasoning, the system prompt must include the phrase "detailed thinking on". See Usage Recommendations for details; a minimal example follows.
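Below is a minimal sketch of toggling reasoning via the system prompt. It assumes the Hugging Face `transformers` library and the `nvidia/Llama-3.1-Nemotron-Nano-8B-v1` checkpoint; the user prompt and generation parameters are illustrative only.

```python
# Sketch: enabling reasoning with the "detailed thinking on" system prompt.
# Assumes transformers with chat-template support and a GPU with enough memory.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    # Reasoning is toggled here: use "detailed thinking on" or "detailed thinking off".
    {"role": "system", "content": "detailed thinking on"},
    {"role": "user", "content": "Solve x*(sin(x)+2)=0 over the reals."},
]

outputs = generator(messages, max_new_tokens=1024)
# The pipeline returns the conversation with the assistant reply appended last.
print(outputs[0]["generated_text"][-1]["content"])
```

Switching the system prompt to "detailed thinking off" yields direct answers without the intermediate reasoning trace.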