DeepSeek V4 Flash is an efficiency-optimized Mixture-of-Experts (MoE) model from DeepSeek with 284B total parameters, of which 13B are activated per token, and a 1M-token context window. It is designed for fast inference and high-throughput workloads while maintaining strong reasoning and coding performance.
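The gap between total and activated parameters comes from sparse expert routing: a gating function selects only a few experts per token, so most weights sit idle on any given forward pass. The sketch below is a minimal illustration of top-k routing, not DeepSeek's implementation; the expert count and k value are made-up examples, and only the 13B/284B figures come from the description above.

```python
# Illustrative top-k expert routing (NOT DeepSeek's actual code).
# An MoE layer scores all experts, keeps the top k, and renormalizes
# their gate weights -- so only a fraction of parameters run per token.

def top_k_routing(gate_scores, k):
    """Pick the k highest-scoring experts and normalize their weights."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    total = sum(gate_scores[i] for i in chosen)
    return [(i, gate_scores[i] / total) for i in chosen]

# Hypothetical gate scores for 8 experts; only 2 are activated.
scores = [0.05, 0.30, 0.10, 0.02, 0.25, 0.08, 0.15, 0.05]
print(top_k_routing(scores, k=2))  # experts 1 and 4 carry this token

# Activated fraction quoted for the model: 13B of 284B parameters.
print(f"{13 / 284:.1%}")  # roughly 4.6% of weights active per token
```

This is why an MoE model can match the quality of a much larger dense model at a fraction of the per-token compute: capacity scales with total parameters, cost with activated ones.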
The model uses hybrid attention for efficient long-context processing and supports configurable reasoning modes. It is well suited to applications such as coding assistants, chat systems, and agent workflows where responsiveness and cost efficiency matter.
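One common form of hybrid attention mixes layers with full causal attention and layers restricted to a local sliding window, which keeps most of the long-context cost linear in sequence length. The sketch below illustrates that masking pattern only; the window size and layer mix are illustrative assumptions, not DeepSeek V4 Flash's actual architecture.

```python
# Minimal sketch of a hybrid attention masking scheme (assumed pattern,
# not the model's real design): "global" layers see all past tokens,
# "local" layers see only a recent sliding window.

def causal_mask(seq_len, window=None):
    """Rows = query positions; True = that key position is visible."""
    mask = []
    for q in range(seq_len):
        row = []
        for kpos in range(seq_len):
            visible = kpos <= q  # causal: never look ahead
            if window is not None:
                visible = visible and (q - kpos < window)  # local window
            row.append(visible)
        mask.append(row)
    return mask

full = causal_mask(6)             # global layer: every past token visible
local = causal_mask(6, window=3)  # local layer: only the last 3 tokens

print(sum(map(sum, full)))   # 21 visible query-key pairs (6*7/2)
print(sum(map(sum, local)))  # 15 -- and the gap widens as sequences grow
```

At a 1M-token context, full attention grows quadratically with length while windowed layers grow linearly, which is the efficiency argument for interleaving the two.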