A wave of companies is rushing to adopt the latest artificial intelligence model from Chinese startup DeepSeek, drawn by its promise of advanced reasoning and lower operational costs. The model, known as DeepSeek-R1, is already being integrated by businesses spanning industries from automotive marketplaces to holographic technology, signaling that AI development is no longer dominated by Western firms like OpenAI.

Among the early adopters is automotive marketplace TuanChe, which announced on January 28 that it will integrate DeepSeek-R1 into its data analytics and marketing recommendation platform to enhance consumer targeting and responsiveness. Meanwhile, in the growing holographic industry, Nasdaq-listed WiMi Hologram Cloud and MicroCloud Hologram plan to embed R1 into their content generation pipelines, citing its ability to enable more lifelike 3D imagery through complex reasoning and rapid self-learning.

Elsewhere, AI-driven search engine Perplexity AI has incorporated R1 to enhance deep web research, according to Forbes. Underscoring the model’s growing acceptance, Microsoft—an OpenAI backer—has also added R1 to its model catalog on Azure AI Foundry and GitHub.

This surge of adoption speaks to a key challenge in AI: scaling inference. While companies invest heavily in training large models, they may struggle to serve them efficiently at scale afterward, leading to high costs and latency. R1 addresses this issue with a focus on efficient inference.

A key factor in the model's efficiency is its mixture-of-experts (MoE) architecture, which activates only a fraction of its parameters for each input, reducing power consumption and latency. DeepSeek also employs a multi-stage training loop that combines self-play and iterative refinement, a form of reinforcement learning that minimizes the need for massive labeled datasets.
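To make the MoE idea concrete, the sketch below shows the general pattern of top-k expert routing: a router scores a set of expert layers and only the highest-scoring few actually run for a given input, so most parameters stay idle. This is a minimal illustration of the technique in general, not DeepSeek-R1's implementation; every name and size here (NUM_EXPERTS, TOP_K, moe_forward, the toy weight matrices) is an illustrative assumption.

```python
# Minimal sketch of mixture-of-experts (MoE) routing. Illustrative only;
# not DeepSeek-R1's actual architecture or parameter counts.
import numpy as np

NUM_EXPERTS = 8   # total experts available (assumed toy value)
TOP_K = 2         # experts actually activated per input (assumed toy value)
DIM = 16          # toy hidden dimension

rng = np.random.default_rng(0)
# Toy "experts": each is just a small dense weight matrix.
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.1  # router weights

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and combine their outputs."""
    logits = token @ router                   # score every expert
    top_idx = np.argsort(logits)[-TOP_K:]     # keep only the k best-scoring experts
    gates = np.exp(logits[top_idx])
    gates /= gates.sum()                      # normalize gate weights over chosen experts
    # Only TOP_K of NUM_EXPERTS experts execute, so most parameters are untouched.
    return sum(g * (token @ experts[i]) for g, i in zip(gates, top_idx))

output = moe_forward(rng.standard_normal(DIM))
print(output.shape)  # (16,)
```

The efficiency gain comes from the routing step: the model can hold many experts' worth of parameters, but the per-input compute scales only with the few experts that are activated.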

Another major draw is R1's open-source licensing, which allows companies to inspect and customize the model to suit specialized applications. This flexibility stands in contrast to proprietary AI systems, which have historically been more expensive and less transparent.

R1’s efficiency raises questions about the long-term demand for high-end AI hardware. Some analysts suggest that more efficient architectures could dampen spending on GPUs from dominant chipmakers like Nvidia. Others argue that lower inference costs will expand AI adoption overall, ultimately benefiting the hardware market.

DeepSeek’s Chinese origins add another layer of complexity. US lawmakers are already grappling with the influence of fast-growing Chinese digital platforms, most notably TikTok, which has faced ongoing scrutiny over national security concerns. For businesses integrating R1, such as Perplexity AI, storing data on US- or EU-based servers is, for now, a way to comply with regulations and address potential security risks.

Still, some observers wonder whether DeepSeek could face US restrictions similar to those imposed on TikTok, given Washington's increasing focus on AI as a strategic asset.

Despite these uncertainties, R1's rapid adoption suggests that the AI race is still in its early stages. As newer players enter the field, even established leaders like OpenAI may find themselves challenged in ways that were unthinkable just a few years ago.