Telecom operators are shifting their capital expenditure priorities from connectivity to computing power. By late March, China’s three major telecom operators had released their 2026 capex plans. Total industry capex is expected to reach RMB 262.6 billion (USD 38.4 billion) in 2026, down 8% from a year earlier.
With user growth and traffic gains nearing their limits, and base station coverage approaching saturation, information and communications technology vendors that depend on operator capex cycles can no longer rely on their traditional role as pipeline builders to drive growth. They need a new engine, and artificial intelligence may be it.
All three operators have made computing infrastructure their top investment priority for 2026, with combined spending expected to exceed RMB 80.8 billion (USD 11.8 billion). Each is projected to post double-digit growth in that category.
Global technology companies are also continuing to spend heavily. Amazon, Google, Meta, and Microsoft invested more than USD 383 billion in 2025, and S&P Global projects that figure will rise to USD 635 billion in 2026. IDC has forecast that global AI investment could exceed USD 1.2 trillion by 2029.
Demand is rising just as quickly. Over the past two years, China’s daily token calls have reportedly increased from 100 billion to 140 trillion, a roughly 1,400-fold jump. Backed by state policy, pilot programs, and the push to develop what Beijing calls “new quality productive forces,” computing power is increasingly being positioned as the next generation of infrastructure after communications networks.
Xie Junshi, executive vice president of ZTE, told 36Kr:
“Like telecommunications, AI is fundamentally a complex engineering science that spans multiple disciplines. Both involve the efficient coordination of huge, complex systems, and both require overall optimization across chips, hardware, software, resource scheduling, and applications. That demands full-stack technological depth, engineering experience, and system optimization capabilities.”
For ZTE, the rationale for making AI its main strategic focus rests on three criteria: market size, growth potential, and technical barriers to entry. At the same time, it has been deliberate in how it approaches the field. It is shifting from a connectivity-only model to one built on both connectivity and computing power, aiming to become a leader in network connectivity and intelligent computing.
The heterogeneity and distributed nature of AI computing have turned the connectivity once provided by telecom networks into part of computing infrastructure itself. AI is no longer just a contest over single-chip performance. It has become a systems engineering challenge shaped by the coordination of computing, networking, and scheduling.
That aligns with Xie’s view of the market. In his assessment, competition in AI is shifting toward total cost of ownership, or TCO, across chips, hardware and software, systems, full-stack capabilities, and ecosystems.
If networks remain ZTE’s core business and foundation, computing power is emerging as its next growth engine. In 2025, ZTE returned to growth. Its revenue mix across networks, computing power, and home and personal terminals became more balanced, reducing its dependence on operators’ 5G spending and broadening its growth base.
“The more complex something is, the more it plays to ZTE’s strengths,” Xie said. He argued that the company’s more than 40 years of technical expertise and innovation in information and communications technology can form a durable moat.
One easily overlooked detail is that, at this year’s Nvidia GTC, Jensen Huang focused less on single-GPU performance. As early as January, he had proposed AI’s “five-layer cake,” spanning energy, chips, infrastructure, models, and applications. Each layer is interconnected, and none operates in isolation.
In such a value chain, however, occupying only one layer can leave a company exposed to pressure from both upstream and downstream players. Companies focused on computing power can be constrained by chipmakers. Those focused on applications may be displaced by advances in model capabilities. Platform-layer companies also face direct competition from major cloud vendors.
That helps explain why ZTE sees its five-layer capability framework for end-to-end AI solutions as central to its strategy.
The first layer is core capabilities. Cui Li, ZTE’s chief development officer, said the company aims to build an open ecosystem for domestic AI. As performance in the AI era is no longer determined by a single chip, system-level coordination across chips, foundational software, and architecture has become a key competitive factor.
To that end, ZTE has developed high-speed interconnect chips for intra- and inter-machine connectivity, domain-specific processors, and data and network acceleration chips. It has also adapted its systems to mainstream GPUs and carried out coordinated optimization across them, forming what it describes as whole-domain chip coordination capabilities. At the software level, it has built an independently controllable foundational platform around the NewStart operating system and the GoldenDB database.
ZTE has also developed an orthogonal interconnect architecture. Through modular design, it aims to deliver high-density integration, reliability, simplified operations and maintenance, and open interconnection, supporting the construction of supernodes and large-scale, energy-efficient AI factories.
The second layer is infrastructure. Here, ZTE emphasizes solutions built around optimal TCO. Computing power, networks, and data centers are treated not as standalone modules, but as integrated systems designed for efficiency. Across computing, networking, and energy, ZTE combines full-stack intelligent computing products, open-standard interconnection, and modular energy solutions to improve coordination, scalability, and efficiency.
Between the lower and upper layers sits the capability platform. “ZTE hopes to become an enabler of model engineering,” Cui said. In her view, this layer must improve the utilization of underlying computing resources while lowering barriers to application development. To that end, ZTE has introduced an intelligent computing resource management platform, a training and inference acceleration platform, and enterprise-grade agents such as Co-Sight and Co-Claw, aimed at improving AI production efficiency and ease of use.
Above that is the application layer. ZTE is building what it calls a vertical AI operating system that extends a foundational large model across multiple industry scenarios, alongside other domain-specific models. The company said it has focused on deployments in telecommunications, R&D, industry, and electric power, and has delivered more than 1,000 benchmark projects across more than 18 industries.
The final layer is terminals. These are not just hardware formats, but also access points and distribution hubs for AI capabilities. In personal devices, ZTE has launched what it describes as the industry’s first AI-native smartphone, the Nubia M153, including a version equipped with the Doubao assistant. In the home, it connects service entry points through smart center screens, cloud PCs, and freestanding smart displays designed for different use cases. Across elderly care, education, entertainment, and security, it is positioning these products as part of a broader smart home hub that integrates networks, computing, and screens.
Taken at face value, ZTE’s five-layer framework may appear to be a technical architecture or product matrix. At a deeper level, it reflects an attempt to answer a broader question: what is ZTE’s competitive advantage in the AI era?
Its answer is clear. It no longer wants to be seen only as a telecom equipment vendor or a provider of computing infrastructure. Instead, it aims to position itself as a systems orchestrator that links capabilities across the stack.
Xie said ZTE has four key points of differentiation: its decades of full-stack technology accumulation, its flexible architectural design, its foundation in home devices and on-device AI, and its global delivery and service capabilities.
Rather than relying on single-point strengths, ZTE is trying to build a system-level moat for the AI era, positioning itself as a contributor of value across the industry chain while expanding access to AI.
That matters because AI is not evolving as a set of isolated products. It is developing as an interdependent systems project. Chipmakers define the upper limits of computing performance, model companies push the boundaries of intelligence, and systems orchestrators are tasked with integrating those capabilities into deployable solutions.
Such coordination is difficult for any single company to lead alone. It requires collaboration across chipmakers, model developers, and a wide range of industries. At ZTE’s 2026 China ecosystem partner conference, the company addressed this challenge by proposing what it calls a collaborative ecosystem.
Drawing on the idea of harmony without uniformity, ZTE launched an initiative centered on the full AI industry chain, calling for upgrades in core capabilities, scenario-based solutions, business models, and enablement systems. It also outlined four priorities: opening capabilities, strengthening cooperation, sharing benefits, and improving enablement.
From opening core capabilities to building customer-centered partnerships, integrating ecosystem resources, establishing clearer benefit-sharing mechanisms, and providing end-to-end support, ZTE is working to shape what it describes as a value community for the AI era.
Since making the ecosystem a priority in 2025, ZTE has partnered with accelerator card vendors, large model companies, and industry players. It said it now has more than 30,000 ecosystem partners across sectors including internet services, finance, electric power, government, transportation, large enterprises, education, and healthcare.
ZTE said the ecosystem is beginning to produce results. In healthcare, it said a large model for medical examination scenarios has been deployed at multiple hospitals. In steelmaking, it said it has worked with HBIS Group to launch an integrated intelligent computing appliance and a large model covering the full production process, improving professional knowledge retrieval efficiency by 60%.
ZTE’s concept of a collaborative ecosystem moves beyond the traditional client-vendor relationship. It is closer to a coordination model aimed at making AI more broadly deployable.
As capabilities across chips, models, applications, and industry scenarios continue to expand while becoming more distributed, it is increasingly difficult for any single vendor to complete the full loop. By opening capabilities, coordinating partners, and sharing benefits, ZTE is attempting to build a new kind of industrial force for the AI era.
Its strategic direction is becoming clearer. ZTE is neither competing directly at the chip layer nor limiting itself to applications or terminals. Instead, it is anchoring itself in the computing foundation and aiming to turn AI into deployable productivity through open capabilities and ecosystem coordination.
“AI development is by no means a solo effort. It must be built on open architecture and centered on ecosystem coordination,” Xie said. “Only by pooling strength and moving forward together can we achieve steady and lasting progress.”
KrASIA features translated and adapted content that was originally published by 36Kr. This article was written by 36Kr Brand.
Note: RMB figures are converted to USD at an estimated rate of RMB 6.84 to USD 1 as of April 13, 2026, unless otherwise stated. USD conversions are presented for ease of reference and may not fully match prevailing exchange rates.