
    NeverSleep

    Browse models from NeverSleep

    6 models

    [Chart: tokens processed on OpenRouter]

  • NeverSleep: Lumimaid v0.2 70B

    Lumimaid v0.2 70B is a finetune of Llama 3.1 70B with a "HUGE step up dataset wise" compared to Lumimaid v0.1. Sloppy chat outputs were purged. Usage of this model is subject to Meta's Acceptable Use Policy.

    by neversleep · 131K context
  • NeverSleep: Lumimaid v0.2 8B
    15.8M tokens

    Lumimaid v0.2 8B is a finetune of Llama 3.1 8B with a "HUGE step up dataset wise" compared to Lumimaid v0.1. Sloppy chat outputs were purged. Usage of this model is subject to Meta's Acceptable Use Policy.

    by neversleep · 33K context · $0.09/M input tokens · $0.60/M output tokens
  • NeverSleep: Llama 3 Lumimaid 70B

    The NeverSleep team is back, with a Llama 3 70B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary. To enhance its overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength. Usage of this model is subject to Meta's Acceptable Use Policy.

    by neversleep · 8K context
  • NeverSleep: Llama 3 Lumimaid 8B

    The NeverSleep team is back, with a Llama 3 8B finetune trained on their curated roleplay data. Striking a balance between eRP and RP, Lumimaid was designed to be serious, yet uncensored when necessary. To enhance its overall intelligence and chat capability, roughly 40% of the training data was not roleplay. This provides a breadth of knowledge to access, while still keeping roleplay as the primary strength. Usage of this model is subject to Meta's Acceptable Use Policy.

    by neversleep · 25K context
  • Noromaid Mixtral 8x7B Instruct

    This model was trained for 8h (v1) + 8h (v2) + 12h (v3) on customized, modified datasets, focusing on RP, uncensoring, and a modified version of the Alpaca prompting format (already used in LimaRP), which should put it at the same conversational level as ChatML or Llama2-Chat without adding any additional special tokens.

    by neversleep · 8K context
  • Noromaid 20B
    4.46M tokens

    A collaboration between IkariDev and Undi. This merge is suitable for RP, ERP, and general knowledge. #merge #uncensored

    by neversleep · 4K context · $1/M input tokens · $1.75/M output tokens (example request and cost estimate after this list)
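
The listed rates are per million tokens. As a rough sketch of how one of these models might be queried and billed, the Python example below posts a request to OpenRouter's OpenAI-compatible chat completions endpoint and estimates the cost from the Noromaid 20B prices shown above. The model slug "neversleep/noromaid-20b" and the OpenAI-style "usage" object in the response are assumptions made for illustration; verify both against the model page and the API reference.

    # Minimal sketch: query a NeverSleep model via OpenRouter and estimate cost.
    # Assumptions: the slug "neversleep/noromaid-20b" and an OpenAI-style "usage"
    # object in the response; check the model page and API docs before relying on either.
    import os
    import requests

    OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
    INPUT_PRICE_PER_M = 1.00   # $ per 1M input tokens (Noromaid 20B, listed above)
    OUTPUT_PRICE_PER_M = 1.75  # $ per 1M output tokens (Noromaid 20B, listed above)

    resp = requests.post(
        OPENROUTER_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
        json={
            "model": "neversleep/noromaid-20b",  # assumed slug
            "messages": [{"role": "user", "content": "Introduce yourself in one sentence."}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()

    # Estimate spend from the token counts reported for this request.
    usage = data.get("usage", {})
    cost = (
        usage.get("prompt_tokens", 0) / 1_000_000 * INPUT_PRICE_PER_M
        + usage.get("completion_tokens", 0) / 1_000_000 * OUTPUT_PRICE_PER_M
    )
    print(data["choices"][0]["message"]["content"])
    print(f"Approximate cost: ${cost:.6f}")

The same request shape works for the other models on this page; only the slug and the per-million-token prices change, and models without listed prices here should be checked on their individual pages.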