Dolphin 2.6 Mixtral 8x7B 🐬

cognitivecomputations/dolphin-mixtral-8x7b

Created Dec 21, 2023 · 32,768 context

This is a 16k-context fine-tune of Mixtral-8x7b. Trained extensively on coding data, it excels at coding tasks and is known for its obedience, although it lacks DPO tuning.

The model is uncensored, with alignment and bias stripped out, so it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in the blog post on uncensored models at erichartford.com/uncensored-models.
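Since the model ships without built-in alignment, a system prompt typically serves as that external layer. Below is a minimal sketch of calling the model through OpenRouter's OpenAI-compatible chat completions endpoint; the `OPENROUTER_API_KEY` environment variable name and the system prompt text are illustrative assumptions, not part of the model card.

```python
# Minimal sketch: query Dolphin 2.6 Mixtral via OpenRouter's
# OpenAI-compatible /chat/completions endpoint using `requests`.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    # Assumes your key is exported as OPENROUTER_API_KEY (illustrative name).
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "cognitivecomputations/dolphin-mixtral-8x7b",
        "messages": [
            # The model has no built-in alignment; supply your own
            # system prompt as the external alignment layer (example text).
            {
                "role": "system",
                "content": "You are a careful assistant. Decline harmful requests.",
            },
            {
                "role": "user",
                "content": "Write a Python function that reverses a linked list.",
            },
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The same request also works with any OpenAI-compatible client library by pointing its base URL at https://openrouter.ai/api/v1.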

#moe #uncensored

Recent activity on Dolphin 2.6 Mixtral 8x7B 🐬

Tokens processed per day

[Chart omitted: daily token volume from Feb 8 to Apr 27; y-axis 0 to 14M]