- MythoMist 7B
From the creator of MythoMax, this model merges a suite of models to reduce word anticipation, ministrations, and other undesirable words in ChatGPT roleplaying data.
It combines Neural Chat 7B, Airoboros 7B, Toppy M 7B, Zephyr 7B beta, Nous Capybara 34B, OpenHermes 2.5, and many others.
#merge
by gryphe · 33k context · $0.00/M input tkns · $0.00/M output tkns · 73.7M tokens this week
- Nous: Capybara 34B
This model is a fine-tune of the Yi-34B base model, trained for 3 epochs on the Capybara dataset. It's the first 34B Nous model and the first 200K-context-length Nous model.
Note: This endpoint currently supports 32k context.
by nousresearch · 32k context · $2.00/M input tkns · $2.00/M output tkns · 3.6M tokens this week
- Toppy M 7B
A wild 7B-parameter model that merges several models using the new task_arithmetic merge method from mergekit. List of merged models:
- NousResearch/Nous-Capybara-7B-V1.9
- HuggingFaceH4/zephyr-7b-beta
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora
#merge
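The task_arithmetic method referenced above merges fine-tunes of a shared base model by adding a weighted sum of their "task vectors" (each fine-tune's weights minus the base weights) back onto the base. A minimal NumPy sketch of that idea, not mergekit's actual implementation; the parameter names and weights here are illustrative:

```python
import numpy as np

def task_arithmetic_merge(base, finetuned_models, weights):
    """Merge fine-tunes of a shared base model.

    Each fine-tune contributes a task vector (its weights minus the
    base weights); the merge adds a weighted sum of these vectors
    back onto the base parameters.
    """
    merged = {}
    for name, base_param in base.items():
        delta = sum(
            w * (ft[name] - base_param)
            for ft, w in zip(finetuned_models, weights)
        )
        merged[name] = base_param + delta
    return merged

# Toy example: one "layer" with two fine-tunes merged at equal weight.
base = {"layer.weight": np.zeros(3)}
ft_a = {"layer.weight": np.array([1.0, 0.0, 0.0])}
ft_b = {"layer.weight": np.array([0.0, 2.0, 0.0])}
merged = task_arithmetic_merge(base, [ft_a, ft_b], weights=[0.5, 0.5])
```

In a real merge the dictionaries would hold full model checkpoints, and mergekit exposes per-model weight settings rather than a single list.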
by undi95 · 33k context · $0.00/M input tkns · $0.00/M output tkns · 1.6B tokens this week
- Nous: Hermes 70B
A state-of-the-art language model fine-tuned on over 300k instructions by Nous Research, with Teknium and Emozilla leading the fine-tuning process.
by nousresearch · 4k context · $0.90/M input tkns · $0.90/M output tkns · 10.7M tokens this week
- Nous: Hermes 13B
A state-of-the-art language model fine-tuned on over 300k instructions by Nous Research, with Teknium and Emozilla leading the fine-tuning process.
by nousresearch · 4k context · $0.15/M input tkns · $0.15/M output tkns · 118.3M tokens this week