In the era of the great agent explosion, how should we respond to AI anxiety?


Written by: XinGPT

AI Is Yet Another Movement Toward Technological Equality

Recently, an article titled “The Internet Is Dead, Agent Eternal” went viral on social media, and I agree with some of its judgments. For example, it points out that in the AI era, using DAU (Daily Active Users) to measure value is no longer appropriate because the internet is a networked structure with decreasing marginal costs—the more people use it, the stronger the network effects. In contrast, large models are star-shaped structures, with marginal costs increasing linearly with token usage. Therefore, compared to DAU, a more important metric is token consumption.

However, I believe the conclusions the article goes on to draw are clearly biased. It describes tokens as the privilege of a new era, asserting that whoever holds more computing power holds more power, and that the speed at which tokens are burned determines the speed of human evolution. Hence one must keep accelerating consumption, or risk falling behind one’s AI-era competitors.

Similar views also appear in another popular article titled “From DAU to Token Consumption: Power Shift in the AI Era,” which even suggests that the average person should consume at least 100 million tokens daily, ideally reaching 1 billion tokens; otherwise, “those who consume 1 billion tokens will become gods, and we are still humans.”

But few have seriously done the math. At GPT-4o pricing, 1 billion tokens per day costs about $6,800, roughly 50,000 RMB. What kind of high-value work could justify running an agent at that cost over the long term?
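
As a rough sanity check on that figure, here is a back-of-the-envelope sketch; the blended per-million-token rate and the exchange rate below are illustrative assumptions, not official prices:

```python
# Back-of-the-envelope cost of burning 1 billion tokens per day.
# The blended rate is an assumption: GPT-4o list prices have been on the
# order of a few dollars per million input tokens and several times that
# for output tokens, so a mixed workload lands somewhere near $5-10/M.
TOKENS_PER_DAY = 1_000_000_000
BLENDED_USD_PER_MILLION = 6.8   # assumed blended input/output rate
USD_TO_RMB = 7.3                # approximate exchange rate

daily_usd = TOKENS_PER_DAY / 1_000_000 * BLENDED_USD_PER_MILLION
daily_rmb = daily_usd * USD_TO_RMB
print(f"~${daily_usd:,.0f} per day (~{daily_rmb:,.0f} RMB)")
# -> ~$6,800 per day (~49,640 RMB), i.e. roughly 50,000 RMB
```

Even halving or doubling the assumed rate keeps the daily bill in the thousands of dollars, which is exactly the point.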

I do not deny that anxiety spreads efficiently in AI discourse, nor that this industry produces “explosive” news almost daily. But the future of agents should not be reduced to a contest of token consumption.

To get rich, you do need to build roads first, but overbuilding only leads to waste. Stadiums erected in remote mountain regions often end up overgrown with weeds, more a debt burden than a venue for international events.

AI ultimately points toward technological equality, not privilege concentration. Nearly all technologies that truly change human history go through phases of myth-making, monopoly, and finally, widespread adoption. The steam engine was not exclusive to the aristocracy; electricity was not only for palaces; the internet is not only for a few companies.

The iPhone changed communication, but it did not create a “communication aristocracy.” For the same price, ordinary people’s devices are no different from those used by Taylor Swift or LeBron James. That is technological equality.

AI is following the same path. What ChatGPT brings is fundamentally the equality of knowledge and ability. The model does not know who you are, nor does it care; it responds based on the same set of parameters.

Therefore, whether an agent burns 100 million or 1 billion tokens, it does not inherently signify superiority or inferiority. The real difference lies in whether the goals are clear, whether the structure is rational, and whether the questions are properly posed.

More valuable skills are those that produce greater results with fewer tokens. The upper limit of using an agent depends on human judgment and design, not on how long your bank card can sustain burning tokens. In reality, AI’s rewards for creativity, insight, and structure far surpass those for mere consumption.

This is the tool-level equality, and it is where humans still hold the initiative.

How Should We Face AI Anxiety?

Friends studying broadcasting and television were deeply shaken after watching the Seedance 2.0 demo video: “If this is what it can do, then the roles we are training for, directing, editing, cinematography, will all be replaced by AI.”

AI is developing so rapidly that humanity seems to be losing ground; many jobs are destined to be replaced, and the trend looks unstoppable. When the steam engine arrived, coachmen found there was no place left for them.

Many people are starting to worry whether they can adapt to a future society once AI has displaced them. Rationally, we know that as AI replaces old jobs, new opportunities will also emerge.

But the speed of this replacement is faster than we imagine.

If your data work, your skills, even your humor and emotional value can all be delivered better by AI, why would employers still choose humans? And what if the boss itself is an AI? Some lament, “Don’t ask what AI can do for you; ask what you can do for AI,” a clear sign of the arrival of the “Adventists” (降临派).

Living through the Second Industrial Revolution of the late nineteenth century, the sociologist and philosopher Max Weber proposed the concept of instrumental rationality, which focuses on “using the means that achieve a given goal at the lowest cost and in the most calculable way.”

Instrumental rationality rests on a premise: it does not ask whether a goal “should” be pursued, only how best to achieve it.

This way of thinking is precisely the first principle of AI.

AI agents are concerned with how to better accomplish the given task—how to write code more efficiently, generate videos better, compose articles more effectively. In this tool-oriented dimension, AI’s progress is exponential.

Ever since Lee Sedol lost that first game to AlphaGo, humans have been permanently outmatched by AI at Go.

Max Weber warned of the “iron cage of rationality”: when tool rationality dominates, the goal itself is often no longer questioned, leaving only how to operate more efficiently. People may become highly rational but simultaneously lose their sense of value and meaning.

But AI needs neither value judgments nor a sense of meaning. It simply optimizes the functions it is handed, production efficiency and economic benefit, and converges on their global maximum.

Therefore, under the current system dominated by instrumental rationality, AI is inherently better suited to that system than we are. The moment ChatGPT was born, just as when Lee Sedol lost that game, our defeat to AI agents was preordained: written into the code of fate, the button already pressed. The only question is when the wheel of history will roll over us.

So what should humans do?

Humans should pursue meaning.

In Go, the dispiriting fact is that the probability of a top professional nine-dan player beating, or even drawing with, the strongest AI is in theory approaching zero.

Yet the game of Go still exists. Its meaning is no longer just winning or losing; it has become a form of aesthetics and expression. Professional players pursue not only victory but also the structures on the board, the strategic choices, the thrill of turning a losing position around, and the tension of complex fights.

Humans pursue beauty, value, and happiness.

Usain Bolt’s 100-meter world record is 9.58 seconds, and a Ferrari can cover the same distance in a fraction of that time, yet that does not diminish Bolt’s greatness, because Bolt stands for the human spirit of challenging limits and pursuing excellence.

The more powerful AI becomes, the more humans have the right to pursue spiritual freedom.

Max Weber contrasted instrumental rationality with value rationality. In a value-rational worldview, whether to do something is not decided solely by economic interest or productivity; what matters more is whether the act itself is worth doing, whether it accords with one’s sense of meaning, belief, or responsibility.

I asked ChatGPT: If the Louvre catches fire and there’s a cute kitten inside, and you can only save one—would you save the cat or the masterpiece?

It answered: save the cat, providing a long list of reasons.

But I also asked: why not save the masterpiece? And it immediately replied that saving the masterpiece is also an option.

Clearly, for ChatGPT, saving the cat or saving the masterpiece makes no difference; it simply recognizes the context, runs the large model’s underlying computation, and burns a few tokens to fulfill a human command.

As for whether to save the cat or the masterpiece, or why to consider such questions, ChatGPT does not care.

Therefore, what truly matters is not whether we will be replaced by AI, but whether, as AI makes the world more efficient, we still want to leave space for happiness, meaning, and value.

Becoming someone who is better at using AI is important, but perhaps even more important before that is not to forget how to be human.
