- Toppy M 7B (nitro)
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.

List of merged models:
- NousResearch/Nous-Capybara-7B-V1.9
- HuggingFaceH4/zephyr-7b-beta
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora

#merge #uncensored

Note: this is a higher-throughput version of this model, and may have higher prices and slightly different outputs.
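The task_arithmetic method treats each fine-tune as a "task vector" (its delta from a shared base model) and adds a weighted sum of those deltas back onto the base. A minimal numpy sketch of the idea — the tensors and weights here are illustrative toys, not Toppy's actual recipe:

```python
import numpy as np

def task_arithmetic_merge(base, finetuned, weights):
    """Merge fine-tuned models by summing weighted task vectors
    (finetuned - base) onto the base weights."""
    merged = base.copy()
    for ft, w in zip(finetuned, weights):
        merged += w * (ft - base)
    return merged

# Toy example: one 4-element "weight tensor" per model.
base = np.zeros(4)
model_a = np.array([1.0, 0.0, 0.0, 0.0])  # hypothetical fine-tune A
model_b = np.array([0.0, 2.0, 0.0, 0.0])  # hypothetical fine-tune B
merged = task_arithmetic_merge(base, [model_a, model_b], [0.5, 0.5])
```

In a real mergekit run the same arithmetic is applied per parameter tensor across every layer of the checkpoints.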
by undi95 · 4K context · $0.07/M input tokens · $0.07/M output tokens · 23K tokens this week
- ReMM SLERP 13B (extended)
A recreation trial of the original MythoMax-L2-13B, but with updated models. #merge Note: this is an extended-context version of this model. It may have higher prices and different outputs.
by undi95 · 6K context · $1.125/M input tokens · $1.125/M output tokens · 21.1M tokens this week
- MythoMist 7B (free)
From the creator of MythoMax, merges a suite of models to reduce word anticipation, ministrations, and other undesirable words in ChatGPT roleplaying data. It combines Neural Chat 7B, Airoboros 7B, Toppy M 7B, Zephyr 7B beta, Nous Capybara 34B, OpenHermes 2.5, and many others. #merge Note: this is a free, rate-limited version of this model. Outputs may be cached. Read about rate limits here.
by gryphe · 33K context · $0/M input tokens · $0/M output tokens · 1.02M tokens this week
- MythoMist 7B
From the creator of MythoMax, merges a suite of models to reduce word anticipation, ministrations, and other undesirable words in ChatGPT roleplaying data. It combines Neural Chat 7B, Airoboros 7B, Toppy M 7B, Zephyr 7B beta, Nous Capybara 34B, OpenHermes 2.5, and many others. #merge
by gryphe · 33K context · $0.375/M input tokens · $0.375/M output tokens · 349K tokens this week
- Toppy M 7B (free)
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.

List of merged models:
- NousResearch/Nous-Capybara-7B-V1.9
- HuggingFaceH4/zephyr-7b-beta
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora

#merge #uncensored

Note: this is a free, rate-limited version of this model. Outputs may be cached. Read about rate limits here.
by undi95 · 4K context · $0/M input tokens · $0/M output tokens · 4.79M tokens this week
- Goliath 120B
A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. Combines Xwin and Euryale.

Credits to:
- @chargoddard for developing the framework used to merge the model, mergekit
- @Undi95 for helping with the merge ratios

#merge
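Combining two 70B models into a deeper 120B model works by stacking layer ranges taken alternately from the two donors (a "passthrough"-style frankenmerge in mergekit terms). A toy sketch of the layer-interleaving step — the slice boundaries below are made up for illustration and are not Goliath's actual recipe:

```python
def interleave_layers(model_a, model_b, slices):
    """Build a deeper model by concatenating layer slices taken
    alternately from two donor models of the same architecture.
    `slices` is a list of (source, start, end) tuples."""
    sources = {"a": model_a, "b": model_b}
    return [layer for src, start, end in slices
            for layer in sources[src][start:end]]

# Toy donors with 4 "layers" each (layers here are just labels).
a = [f"a{i}" for i in range(4)]
b = [f"b{i}" for i in range(4)]
stacked = interleave_layers(a, b, [("a", 0, 2), ("b", 1, 3), ("a", 2, 4)])
```

The resulting model has more layers (hence more parameters) than either donor, which is how two 70B checkpoints yield a single 120B one.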
by alpindale · 6K context · $9.375/M input tokens · $9.375/M output tokens · 5.01M tokens this week
- Toppy M 7B
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit.

List of merged models:
- NousResearch/Nous-Capybara-7B-V1.9
- HuggingFaceH4/zephyr-7b-beta
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora

#merge #uncensored
by undi95 · 4K context · $0.07/M input tokens · $0.07/M output tokens · 2.9M tokens this week
- ReMM SLERP 13B
A recreation trial of the original MythoMax-L2-13B, but with updated models. #merge
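The SLERP in the model's name refers to spherical linear interpolation: instead of averaging two models' weights along a straight line, the merge interpolates along the arc between them, preserving the magnitude of the blended tensors. A minimal numpy sketch of the interpolation itself (applied per weight tensor in practice; this is not ReMM's actual mergekit config):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # near-parallel vectors: fall back to plain lerp
        return (1 - t) * v0 + t * v1
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * v0 + (np.sin(t * theta) / s) * v1

# Midpoint of two orthogonal unit vectors stays on the unit sphere,
# unlike a plain average (whose norm would shrink to ~0.707).
mid = slerp(0.5, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

The norm-preserving property is the usual argument for SLERP over linear averaging when merging checkpoints.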
by undi95 · 4K context · $0.27/M input tokens · $0.27/M output tokens · 544K tokens this week