Today’s AI Computing Power: Liquid Cooling + AIDC + Cloud Providers + Computing Power Leasing continue to explode! [Taoguba]
Once again confirming our perfect strategy of shifting from commercial aerospace to computing power at the end of January!
As the Spring Festival approaches, domestic AI giants are launching major initiatives, further increasing computing power consumption!
Therefore, the simultaneous rise in Token volume and price is inevitable, and computing power inflation will officially propagate to (domestic) cloud services.
Current market hype around AI follows this path: GPU → Storage → Advanced Process → CPU → Cloud Services → IDC.
Severe shortage of computing power
Dream 2.0: an 8-hour queue to generate a 15-second video
Behind the 8-hour queue for a 15-second video is ByteDance’s AI video model Seedance 2.0, whose technological breakthrough has sparked a nationwide creative frenzy. Its cinematic-grade generation capability has sharply raised user expectations, but it has also exposed the current bottleneck in computing power supply.
Extreme queuing: popularity meets system pressure, with peak waits over 8 hours. On February 11, 2026, real-world testing showed that after a user submitted a video generation task, more than 1,000 people were queued ahead; even with a paid “accelerated channel,” generating a 15-second video still took 8 hours. During peak hours, ordinary users wait 20-30 minutes on average (including security review), while premium members (annual fee 5,199 RMB) wait only 3-5 minutes.
Core cause of congestion: insufficient computing resources!
Chinese AI giants compete for computing resources during Spring Festival
During the Spring Festival, Chinese AI labs are fiercely competing, releasing new models and launching aggressive “red envelope” activities. But behind the noise lies a more urgent story: the sharp surge in computational demand is putting enormous pressure on the entire ecosystem.
Industry surveys confirm resource shortages. A major Chinese internet company has had to suspend most new projects, reallocating limited computing resources to support sudden demand spikes from New Year marketing campaigns.
In recent weeks, there has been a surge in purchasing computing resources, especially NVIDIA chips, to cope with the AI usage boom. Giants are renting computing power at twice the market price. Whenever possible, they even bypass leasing and directly buy chips, regardless of cost.
This competition indicates that demand for computing power will further surge this year. Take the Seedance 2.0 video generation wave as an example—the required computing power, memory, and infrastructure demands are staggering.
The simple logic now is: AI becomes more useful → people use it crazily → Token volume and price soar → demand for computing power/cloud chips explodes!
Token volume and price rise together
Tokens are the “billing and consumption units” for large model AI.
Think of it this way: AI work consumes “traffic / electricity / words,” and the more it is used, the more it costs.
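As a minimal sketch of how token-based billing works (the per-million-token prices below are hypothetical placeholders, not any provider’s actual rates), a request is billed roughly as input tokens plus output tokens, each multiplied by its unit price:

```python
# Minimal sketch of token-based billing for a large-model API call.
# The per-million-token prices are hypothetical placeholders, not any provider's real rates.
PRICE_PER_M_INPUT = 2.0    # USD per 1M input tokens (assumed)
PRICE_PER_M_OUTPUT = 8.0   # USD per 1M output tokens (assumed)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: token counts times the per-million unit prices."""
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

# One chat turn: a 2,000-token prompt and an 800-token reply.
per_request = request_cost(2_000, 800)
print(f"${per_request:.4f} per request")                        # $0.0104
print(f"${per_request * 10_000_000:,.0f}/day at 10M requests")  # $104,000/day
```

At consumer scale, even fractions of a cent per request add up to meaningful daily compute bills, which is why usage surges translate directly into computing power demand.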
During the Spring Festival, the number of people generating videos with Seedance will skyrocket, leading to explosive growth in Token volume. Seedance will continue to iterate towards higher-definition versions. ByteDance’s token consumption in 2026 is huge; a tenfold increase is inevitable.
JPMorgan predicts: Between 2025 and 2030, China’s AI inference token consumption will achieve an approximately 330% compound annual growth rate, surging from 10 trillion tokens in 2025 to 3.9 quadrillion tokens in 2030, a 370-fold increase.
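As a rough arithmetic check on the scale of that forecast (using the rounded endpoints quoted above, not JPMorgan’s exact figures), the implied growth multiple and compound annual growth rate can be computed directly:

```python
# Sanity check of the quoted 2025-2030 token forecast, using the rounded endpoints above.
start = 10e12   # ~10 trillion tokens in 2025 (rounded figure from the text)
end = 3.9e15    # ~3.9 quadrillion tokens in 2030 (rounded figure from the text)
years = 5

multiple = end / start              # total growth multiple over the period
cagr = multiple ** (1 / years) - 1  # compound annual growth rate

print(f"Total multiple: ~{multiple:.0f}x")      # ~390x on these rounded endpoints
print(f"Implied CAGR:   ~{cagr:.0%} per year")  # ~230% per year, i.e. roughly 3.3x annually
```

On these rounded endpoints, token consumption would need to multiply by roughly 3.3x every year for five years running.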
With the current explosion of Agents, multimodal models, and long-context capabilities, the rise in Token volume and price is inevitable (Google Gemini’s token consumption growth is a case in point). Computing power inflation continues to propagate downstream and has now officially reached domestic cloud services.
Token volume increase
Agent scenarios are seeing exponential growth in token demand: a simple single-Agent task consumes an order of magnitude more tokens than a chatbot, and multi-Agent collaboration consumes two orders of magnitude more. High-value Agents (such as Claude Code, Cowork, Openclaw) will see accelerated deployment in 2026, and open-source models (GLM-5, Kimi K2.5) are crossing the usability threshold, enabling large-scale application of Agents.
The context windows of new large models worldwide are generally expanding from tens of thousands of tokens to the 100,000-token level, allowing an entire book or codebase to be processed in a single request and greatly increasing per-request token consumption.
Multimodal Agents must encode images, audio, and other non-text information into tokens; a single high-resolution image can be encoded into thousands of visual tokens, further boosting total token consumption, as the rough estimate below illustrates.
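The order-of-magnitude claims above can be turned into a back-of-the-envelope comparison; every per-task token count here is an illustrative assumption, not a measured figure:

```python
# Back-of-the-envelope token consumption by scenario; every figure is an illustrative assumption.
scenarios = {
    "chatbot turn":           3_000,   # short prompt + short reply
    "single-Agent task":     30_000,   # tool calls + intermediate reasoning: ~10x a chat turn
    "multi-Agent task":     300_000,   # several cooperating agents: ~100x a chat turn
    "long-context request": 100_000,   # an entire book or codebase in one prompt
    "high-res image input":   2_000,   # one image encoded into visual tokens
}

DAILY_TASKS = 1_000_000  # assumed daily task volume, for comparison only
for name, tokens in scenarios.items():
    daily_billions = tokens * DAILY_TASKS / 1e9
    print(f"{name:<22} {tokens:>8,} tokens/task -> {daily_billions:7.1f}B tokens/day at 1M tasks")
```

Even at a modest one million tasks per day, shifting traffic from chatbot turns to multi-Agent workloads moves daily consumption from billions of tokens toward hundreds of billions.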
Token price increase:
Upstream computing costs keep rising while downstream demand is rigid, leading top model providers (OpenAI +40%, Anthropic and Zhipu +30%) to raise token prices; the era of cheap tokens is ending.
The commercialization of high-value Agents is moving token pricing from “software usage fees” toward “digital employee salaries.” High-value Agent services (such as the enterprise version of the Harvey AI legal Agent), with per-user monthly token consumption of around $500 and pricing of $1,000-2,000 per month, are emerging, greatly increasing the value captured per token.
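A quick illustration of why that pricing shift matters, using only the figures quoted above (this ignores compute, support, and other serving costs, so it is a deliberate simplification):

```python
# Rough unit economics for a high-value Agent seat, using only the figures quoted in the text.
# Ignores compute, support, and other serving costs, so this is a simplification.
TOKEN_COST_PER_USER = 500     # USD/month of raw token consumption per user (from the text)
SEAT_PRICES = (1_000, 2_000)  # USD/month pricing range quoted in the text

for price in SEAT_PRICES:
    gross_margin = (price - TOKEN_COST_PER_USER) / price
    print(f"Seat at ${price:,}/month -> margin over token cost: {gross_margin:.0%}")
# -> 50% at $1,000 and 75% at $2,000: the seat is priced like labor, not like raw tokens.
```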
Continued propagation of computing power inflation downstream: GPU → Storage → Advanced Process → CPU → Cloud Services → IDC
Capacity bottlenecks in advanced chip manufacturing, shortages of data center hardware (large transformers, cooling systems), and energy and infrastructure constraints are pushing up upstream hardware costs.
Overseas, AWS and Google Cloud raised prices in January 2026, with AI- and GPU-related services increasing by 15%-100%, ending the long-running cycle of falling cloud prices.
Inflation has now officially propagated to domestic cloud services. On February 5, Wangsu raised CDN and storage prices by 30%-40%; on February 11, UCloud increased prices across all products. More cloud providers may follow.
Cloud service price hikes
Cloudflare reported earnings and rose 15% after hours, forecasting that its 2026 full-year and Q1 revenue will beat Wall Street expectations. The rapid development of AI applications is increasing demand for cloud computing, and computing power inflation is now officially propagating to (domestic) cloud services!
Recent positive catalysts for cloud computing include:
February 2026: Several companies announced price increases for cloud products and services;
State-owned Assets Supervision and Administration Commission: Central enterprises should actively expand effective computing investments;
State-owned Assets Supervision and Administration Commission: Focus on breaking through key core technologies and continuously tackling “large model” tech;
Li Qiang: Fully promote AI technological innovation, industrial development, and empowering applications.
2026 cloud computing services may enter a period of widespread price increases:
On January 24, Amazon AWS announced a roughly 15% price increase for the EC2 Capacity Blocks used for large model training, breaking its nearly 20-year tradition of “only cutting prices.”
On January 27, Google Cloud announced that starting May 2026, it will officially raise global data transfer prices, with North America rates roughly doubling.
UCloud’s decision to follow overseas cloud providers in raising prices reflects:
The company’s judgment that the cloud industry has reached an inflection point;
Rigid increases in hardware, energy, and other costs, which the company is passing through; this in turn points to sustained high growth in AI computing demand.
Key beneficiary companies
XXX:
The company has completed its transition from traditional manufacturing to a full-stack AI computing service provider. Its core assets:
IDC Operations: The Qinhuai Data asset transfer has been completed, further streamlining governance. As a core IDC supplier to ByteDance (with a high share of revenue), Qinhuai Data’s resources around Beijing and at other key nodes will benefit directly from ByteDance’s increased capital expenditure on computing power, with significant earnings elasticity expected.
Liquid Cooling Technology: The company is making substantial progress in liquid cooling; through upstream materials R&D and M&A, it is accelerating its shift from materials supplier to full-solution provider, aiming for a higher share among top global manufacturers.
High-end Components: Addressing AI power supply miniaturization pain points, the company has built a new supercapacitor R&D and manufacturing base. This marks the transition of multilayer foil technology from lab to large-scale industrialization, with customized solutions deeply embedded in data center power systems.
XXX:
The company has collaborated deeply with Tencent Cloud on its computing network for years, leveraging more than 2,800 edge nodes worldwide that have completed computing upgrades to build a distributed computing network. Combined with Tencent Cloud’s full-stack AI capabilities, the two are building a cloud-edge collaborative AI inference architecture. Tencent’s “Yuanbao” billion red envelope campaign draws massive concurrent users into real-time AI Q&A and intelligent distribution scenarios, creating short-term computing peaks that demand high elasticity and low latency. Wanguo Technology schedules computing resources on nearby edge nodes, keeping model inference latency within 10 ms and ensuring smooth interaction and system stability under high concurrency, supporting the elastic inference needs of such events. As AI interactive applications grow, demand for edge computing will keep rising, and the cooperation is expected to extend to more high-concurrency, real-time scenarios, broadening commercial opportunities.
XXX:
The company’s performance inflection point has been confirmed: AI computing power plus the Tencent ecosystem is driving a new phase of explosive growth.
According to its latest earnings forecast, issued January 27, 2026, the company expects 2025 net profit attributable to the parent of 270-330 million RMB, a nearly tenfold increase year-on-year. The core driver is large-scale computing power deployment: AI computing resources soared from 13,000P at the start of the year to 33,000P, fully leased, giving high certainty of performance realization.
Thanks to deep ties with Tencent, the company is expected to keep doubling its computing scale and to benefit from continued growth in enterprise applications. On the manufacturing side, the release of overseas capacity will further improve gross margins and support overall growth. Valuation re-rating and market cap growth are anticipated: the current forecast for 2025 net profit is about 300 million RMB, and the P/E ratio is significantly below the AI computing sector average, leaving substantial room for revaluation. Looking ahead to 2026, as computing profits materialize, prices rise, and Tencent ecosystem synergy strengthens, the company’s performance is expected to reach a new level. Based on a conservative 2026 net profit of 500 million RMB and reasonable valuation multiples, the stock appears undervalued at current levels, with broad growth potential and a historic opportunity for value reappraisal.
Are you convinced of the continuous explosion of the AI computing power industry? Where is the value? Where are the opportunities?
Like + Share + Comment: As AI computing power prices continue to rise, which giants have the most explosive potential?