
    Cognitive Computations

    Browse models from Cognitive Computations

    6 models


  1. Venice: Uncensored (free variant)

      103M tokens processed on OpenRouter

    Venice Uncensored Dolphin Mistral 24B Venice Edition is a fine-tuned variant of Mistral-Small-24B-Instruct-2501, developed by dphn.ai in collaboration with Venice.ai. This model is designed as an “uncensored” instruct-tuned LLM, preserving user control over alignment, system prompts, and behavior. Intended for advanced and unrestricted use cases, Venice Uncensored emphasizes steerability and transparent behavior, removing the default safety and alignment layers typically found in mainstream assistant models. A minimal request sketch for this free variant appears after this list.

    by cognitivecomputations · 33K context · $0/M input tokens · $0/M output tokens
  2. Dolphin3.0 R1 Mistral 24B

    Dolphin 3.0 R1 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic, function-calling, and general use cases. The R1 version has been trained for 3 epochs to reason using 800k reasoning traces from the Dolphin-R1 dataset. Dolphin aims to be a general-purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, and Gemini. Part of the Dolphin 3.0 Collection. Curated and trained by Eric Hartford, Ben Gitter, BlouseJury, and DphnAI.

    by cognitivecomputations · 33K context
  3. Dolphin3.0 Mistral 24B

    Dolphin 3.0 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic, function-calling, and general use cases. Dolphin aims to be a general-purpose instruct model, similar to the models behind ChatGPT, Claude, and Gemini. Part of the Dolphin 3.0 Collection. Curated and trained by Eric Hartford, Ben Gitter, BlouseJury, and DphnAI.

    by cognitivecomputations · 33K context
  4. Dolphin Llama 3 70B 🐬

    Dolphin 2.9 is designed for instruction following, conversation, and coding. This model is a fine-tune of Llama 3 70B, and it demonstrates improvements in instruction following, conversation, coding, and function-calling abilities compared to the original. It is uncensored and stripped of alignment and bias, so it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models. Usage of this model is subject to Meta's Acceptable Use Policy.

    by cognitivecomputations · 8K context
  5. Dolphin 2.9.2 Mixtral 8x22B 🐬

    Dolphin 2.9 is designed for instruction following, conversation, and coding. This model is a fine-tune of Mixtral 8x22B Instruct and a successor to Dolphin Mixtral 8x7B. It features a 64k context length and was fine-tuned with a 16k sequence length using ChatML templates (a formatting sketch appears after this list). The model is uncensored and stripped of alignment and bias, so it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models. #moe #uncensored

    by cognitivecomputations · 66K context
  6. Dolphin 2.6 Mixtral 8x7B 🐬

    This is a 16k-context fine-tune of Mixtral 8x7B. It excels at coding tasks thanks to extensive training on coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored and stripped of alignment and bias, so it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at erichartford.com/uncensored-models. #moe #uncensored

    by cognitivecomputations · 33K context
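
Since the Venice: Uncensored variant above is served at $0/M input and output tokens, it can be tried directly against OpenRouter's standard chat-completions endpoint. The sketch below is a minimal, unofficial example: the model slug and the OPENROUTER_API_KEY environment variable are assumptions inferred from OpenRouter's usual conventions, so verify the exact identifier on the model page.

```python
import os

import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"
# Assumed slug for the free Venice: Uncensored variant; confirm on the model page.
MODEL = "cognitivecomputations/dolphin-mistral-24b-venice-edition:free"

response = requests.post(
    API_URL,
    headers={
        # API key read from the environment; never hard-code secrets.
        "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "model": MODEL,
        "messages": [
            # Per the description above, the model is steerable via the system prompt.
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "In one sentence, what is a fine-tuned model?"},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```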
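
The Dolphin 2.9.2 Mixtral 8x22B entry notes that the model was fine-tuned using ChatML templates. When running it locally without a runtime that applies the chat template automatically, prompts need to be wrapped in ChatML markers by hand. The following is a minimal sketch of that format; the to_chatml helper is hypothetical, and the markers follow the standard ChatML convention.

```python
def to_chatml(messages: list[dict[str, str]]) -> str:
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # A trailing open assistant tag cues the model to generate its reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(to_chatml([
    {"role": "system", "content": "You are Dolphin, a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]))
```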