OpenChat 3.5 7B
openchat/openchat-7b
Created Nov 28 · 8,192 context
$0.055/M input tokens · $0.055/M output tokens
OpenChat 7B belongs to a library of open-source language models fine-tuned with C-RLFT (Conditioned Reinforcement Learning Fine-Tuning), a strategy inspired by offline reinforcement learning. It was trained on mixed-quality data without preference labels.
- For OpenChat fine-tuned on Mistral 7B, check out OpenChat 7B.
- For OpenChat fine-tuned on Llama 3 8B, check out OpenChat 8B.
#open-source
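
A minimal sketch of calling this model through an OpenAI-compatible chat completions endpoint. The base URL, environment variable name, and prompt are illustrative assumptions, not part of the listing; only the model slug `openchat/openchat-7b` comes from above.

```python
import os

from openai import OpenAI

# Assumed OpenAI-compatible endpoint and a hypothetical API key env var;
# substitute your provider's actual values.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="openchat/openchat-7b",
    messages=[
        {"role": "user", "content": "Summarize C-RLFT in one sentence."},
    ],
    max_tokens=256,  # well within the model's 8,192-token context window
)

print(completion.choices[0].message.content)
```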