Models
- MythoMist 7B
From the creator of MythoMax, merges a suite of models to reduce overuse of words like "anticipation", "ministrations", and other undesirable words common in ChatGPT roleplaying data.
It combines Neural Chat 7B, Airoboros 7B, Toppy M 7B, Zephyr 7B beta, Nous Capybara 34B, OpenHermes 2.5, and many others.
#merge
by gryphe | 33k context | $0.00/M input tkns | $0.00/M output tkns | 73.2M tokens this week
- Toppy M 7B
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit. List of merged models:
- NousResearch/Nous-Capybara-7B-V1.9
- HuggingFaceH4/zephyr-7b-beta
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora
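For context, mergekit's task_arithmetic method builds on task vectors: each fine-tuned checkpoint is reduced to its delta from a shared base model, the deltas are scaled and summed, and the result is added back onto the base. A minimal sketch of that idea, assuming plain PyTorch state dicts; the function and the scaling factors are illustrative and not Toppy M's actual merge recipe:

```python
# Conceptual sketch of a task_arithmetic merge (illustrative, not Toppy M's recipe).
import torch

def task_arithmetic_merge(base_sd, finetuned_sds, lambdas):
    """Merge fine-tuned checkpoints into a base model via task vectors.

    base_sd:       dict[str, Tensor] -- base model state dict (e.g. a Mistral 7B base)
    finetuned_sds: list of state dicts fine-tuned from the same base
    lambdas:       per-model scaling factors for each task vector
    """
    merged = {}
    for name, base_w in base_sd.items():
        # Task vector = fine-tuned weights minus base weights, scaled and summed.
        delta = sum(lam * (sd[name] - base_w)
                    for sd, lam in zip(finetuned_sds, lambdas))
        merged[name] = base_w + delta
    return merged
```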
#merge
by undi95 | 33k context | $0.00/M input tkns | $0.00/M output tkns | 1.6B tokens this week
- ReMM SLERP 13B
A recreation trial of the original MythoMax-L2-13B but with updated models. #merge
by undi95 | 6k context | $1.12/M input tkns | $1.12/M output tkns | 511.1M tokens this week
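The SLERP in the name refers to spherical linear interpolation, a common mergekit method that blends two checkpoints along the arc between their weight vectors rather than the straight line between them. A minimal sketch of the formula, assuming two compatible weight tensors; this is illustrative only, not ReMM's actual merge configuration:

```python
# Conceptual sketch of SLERP weight merging (illustrative, not ReMM's config).
import torch

def slerp(w0, w1, t, eps=1e-8):
    """Spherically interpolate between two weight tensors at fraction t in [0, 1]."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two weight vectors.
    cos_omega = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
    omega = torch.acos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * w0 + t * w1
    sin_omega = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / sin_omega) * v0 \
        + (torch.sin(t * omega) / sin_omega) * v1
    return out.reshape(w0.shape).to(w0.dtype)
```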