Dolphin 2.6 Mixtral 8x7B 🐬

cognitivecomputations/dolphin-mixtral-8x7b

Created Dec 21, 2023 · 32,768 context

This is a 16k-context fine-tune of Mixtral 8x7B. It excels at coding tasks thanks to extensive training on coding data and is known for its obedience, although it has not been DPO-tuned.

The model is uncensored: alignment and bias have been stripped out, so it requires an external alignment layer for ethical use. Users are cautioned to deploy this highly compliant model responsibly, as detailed in the blog post on uncensored models at erichartford.com/uncensored-models.
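As a minimal sketch of what an "external alignment layer" might look like in practice, the snippet below calls the model through OpenRouter's OpenAI-compatible chat completions endpoint and supplies a system prompt as the guardrail. The system prompt text, the environment-variable name, and the example user message are illustrative assumptions, not part of the model card.

```python
import os
import requests

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]  # assumed to be set in the environment

# An uncensored model relies on the caller to impose behavioural constraints,
# e.g. via a system prompt acting as the external alignment layer.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests that are illegal or harmful."
)

payload = {
    "model": "cognitivecomputations/dolphin-mixtral-8x7b",
    "messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the model itself is highly compliant, whatever policy you encode in the system prompt (or in a separate moderation step) is effectively the only safeguard in the loop.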

#moe #uncensored

Recent activity on Dolphin 2.6 Mixtral 8x7B 🐬

Tokens processed per day

(Chart: daily token volume from Feb 3 to Apr 28, y-axis 0–14M tokens)