    Meta: Llama 3.2 11B Vision Instruct

    meta-llama/llama-3.2-11b-vision-instruct

    Created: Sep 25, 2024 | Context: 131,072 tokens
    Pricing: $0.049/M input tokens | $0.049/M output tokens | $0.079/K input images

    Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed for tasks that combine visual and textual data. It excels at image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a large dataset of image-text pairs, it performs well on complex, high-accuracy image analysis.

    Its ability to integrate visual understanding with language processing makes it an ideal solution for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research.

    See Meta's original model card for full details.

    Usage of this model is subject to Meta's Acceptable Use Policy.

    Sample code and API for Llama 3.2 11B Vision Instruct

    OpenRouter normalizes requests and responses across providers for you.

    OpenRouter provides an OpenAI-compatible completion API for 400+ models and providers, which you can call directly or through the OpenAI SDK. Some third-party SDKs are also available. A direct call is sketched below.
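    The following is a minimal sketch of a direct API call using Python's requests library. The endpoint and model slug come from this page; the prompt, and the assumption that your key is stored in an OPENROUTER_API_KEY environment variable, are illustrative.

```python
import os
import requests

# Direct call to OpenRouter's OpenAI-compatible chat completions endpoint.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        # Assumes the key is stored in the OPENROUTER_API_KEY env variable.
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
    },
    json={
        "model": "meta-llama/llama-3.2-11b-vision-instruct",
        "messages": [
            {"role": "user", "content": "What can this model do with images?"}
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```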

    In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.
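    Below is a sketch using the OpenAI SDK pointed at OpenRouter's base URL, including an image input since this is a vision model. The HTTP-Referer and X-Title headers are the optional OpenRouter-specific headers mentioned above; the site URL, app name, and image URL are placeholders.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<OPENROUTER_API_KEY>",
)

completion = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct",
    # Optional headers: setting these lets your app appear on the leaderboards.
    extra_headers={
        "HTTP-Referer": "https://your-app.example.com",  # placeholder site URL
        "X-Title": "Your App Name",                      # placeholder app name
    },
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    # Placeholder image URL.
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)
print(completion.choices[0].message.content)
```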

    Using third-party SDKs

    For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.

    See the Request docs for all possible fields, and Parameters for explanations of specific sampling parameters.
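    As a sketch of the latter, common sampling parameters such as temperature, top_p, and max_tokens can be passed on the same request (assuming the OpenAI-compatible field names; the values here are illustrative):

```python
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="<OPENROUTER_API_KEY>")

completion = client.chat.completions.create(
    model="meta-llama/llama-3.2-11b-vision-instruct",
    messages=[{"role": "user", "content": "Summarize this model's capabilities."}],
    temperature=0.7,  # sampling randomness; lower values are more deterministic
    top_p=0.9,        # nucleus sampling: keep only the top 90% probability mass
    max_tokens=512,   # upper bound on the number of generated tokens
)
print(completion.choices[0].message.content)
```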