DeepSeek has introduced a streamlined variant of its latest R1 reasoning AI, aimed at users who want advanced math reasoning but lack powerful hardware. The compact model, named DeepSeek-R1-0528-Qwen3-8B, is said to outperform similarly sized competitors on select benchmarks.
The model is built on Alibaba’s Qwen3 8B, which launched in May. On AIME 2025, a rigorous set of math challenges, it delivers results that outpace Google’s Gemini 2.5 Flash.
Notably, the smaller DeepSeek model nearly matches Microsoft’s Phi 4 reasoning plus on HMMT, a well-known math assessment. Distilled systems like this one generally trail their full-sized counterparts in overall capability, but their reduced hardware needs are a clear benefit.
Hardware Requirements and Access
Running the Qwen3 8B version requires a GPU with 40GB to 80GB of memory, such as Nvidia’s H100. By contrast, operating the full R1 model demands about a dozen GPUs with 80GB each, a setup that remains out of reach for many organizations.
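To see why an 8B-parameter model needs tens of gigabytes, a back-of-envelope calculation helps. The sketch below estimates memory for the weights alone at common precisions; it is a rough lower bound, since real usage also depends on activations, the KV cache (which grows with long reasoning chains), and framework overhead, which together can push an 8B model toward the 40GB–80GB figure cited above.

```python
# Back-of-envelope VRAM estimate for an 8B-parameter model.
# This covers weights only; activations, KV cache, and framework
# overhead add substantially on top, so treat it as a lower bound.

def weight_memory_gb(num_params_billion: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the weights, in gigabytes."""
    return num_params_billion * 1e9 * bytes_per_param / 1e9

# 8B parameters at common precisions:
for precision, nbytes in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    print(f"{precision}: ~{weight_memory_gb(8, nbytes):.0f} GB for weights alone")
```

At 16-bit precision the weights alone take roughly 16GB, which is why a single high-memory GPU suffices here, while the far larger full R1 model needs a dozen of them.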
According to DeepSeek, its approach was to fine-tune Qwen3 8B on output generated by the updated R1 model. The resulting system retains strong reasoning capability while being efficient enough for a range of industry and research applications.
On Hugging Face, a popular platform for AI development, DeepSeek states that DeepSeek-R1-0528-Qwen3-8B is aimed at both academic work and companies that need smaller, nimbler models.
The model is released under the permissive MIT license, removing obstacles for commercial use. LM Studio and others already make the model available through easy-to-integrate APIs.
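Tools like LM Studio typically expose locally hosted models through an OpenAI-compatible chat completions endpoint. The sketch below builds such a request payload; the endpoint URL and model identifier are assumptions for illustration, not confirmed values from the article, and would need to match your local setup.

```python
import json

# Hypothetical request to an OpenAI-compatible chat completions endpoint,
# such as the one LM Studio can serve locally. Both the URL and the model
# identifier below are assumptions; adjust them to your own setup.
BASE_URL = "http://localhost:1234/v1/chat/completions"  # assumed local server

payload = {
    "model": "deepseek-r1-0528-qwen3-8b",  # assumed model identifier
    "messages": [
        {"role": "user",
         "content": "What is the sum of the first 100 positive integers?"}
    ],
    "temperature": 0.6,
}

# Serialize the request body; it could then be sent with any HTTP client,
# e.g. requests.post(BASE_URL, data=body,
#                    headers={"Content-Type": "application/json"})
body = json.dumps(payload)
print(body)
```

Because the interface follows the OpenAI schema, existing client code can usually be pointed at the local server by changing only the base URL and model name.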