In the era of the great agent explosion, how should we respond to AI anxiety?


Becoming someone who is better at using AI is important, but before that, perhaps even more important is not to forget how to be human.

Article by: XinGPT

AI is yet another movement toward technological equality

Recently, an article titled “The Internet is Dead, Agent Lives Forever” went viral on social media, and I agree with some of its judgments. For example, it points out that in the AI era, using DAU to measure value is no longer appropriate because the internet is a networked structure with decreasing marginal costs—the more people use it, the stronger the network effects; whereas large models are star-shaped structures, with marginal costs increasing linearly with token usage. Therefore, compared to DAU, a more important metric is token consumption.

However, I believe the conclusion the article draws from this is clearly flawed. It describes tokens as the privilege of a new era, claiming that whoever holds more computing power holds more power, and that the speed at which tokens are burned determines the speed of human evolution. One must therefore keep accelerating consumption, or risk being left behind by AI-era competitors.

Similar views also appear in another viral article, “From DAU to Token Consumption: Power Shift in the AI Era,” which even suggests that each person should consume at least 100 million tokens daily, ideally reaching 1 billion tokens; otherwise, “those who consume 1 billion tokens will become gods, and we are still just humans.”

But few have seriously done the math. At GPT-4o's pricing, 1 billion tokens per day costs about $6,800, roughly 50,000 RMB. How much high-value work would an agent have to produce to justify running at that cost over the long term?
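A rough back-of-the-envelope check of that figure. The per-million-token rates below are an assumption (GPT-4o's published list prices at the time of writing), and the exact bill depends on how the volume splits between input and output tokens:

```python
# Assumed GPT-4o list prices (USD per 1 million tokens) -- an assumption,
# not a quote; check the provider's current pricing page.
INPUT_PRICE_PER_M = 2.50
OUTPUT_PRICE_PER_M = 10.00

def daily_cost(total_tokens: float, output_share: float = 0.5) -> float:
    """Blended daily cost in USD for a given token volume.

    output_share: fraction of the total that are (pricier) output tokens.
    """
    input_tokens = total_tokens * (1 - output_share)
    output_tokens = total_tokens * output_share
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# 1 billion tokens per day, assuming a 50/50 input/output split:
cost = daily_cost(1e9)
print(f"${cost:,.0f} per day")  # prints "$6,250 per day"
```

At a 50/50 split this gives about $6,250 a day; a heavier output mix pushes it toward the article's $6,800 figure. Either way, the order of magnitude is tens of thousands of RMB daily.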

I do not deny how efficiently anxiety spreads in AI discourse, nor do I deny that this industry produces breakthroughs almost daily. But the future of agents should not be reduced to a token-consumption race.

As the Chinese saying goes, to get rich, first build roads; but overbuilding leads only to waste. The 100,000-seat stadium rising in remote western mountains more often ends up as a debt-ridden site overgrown with weeds than a venue for international events.

Ultimately, AI points toward technological equality, not privilege concentration. Almost all technologies that truly change human history go through phases of myth-making, monopoly, and finally, widespread adoption. The steam engine was not exclusive to the aristocracy; electricity was not only for palaces; the internet is not only for a few companies.

The iPhone changed communication, but it did not create a “communication aristocracy.” For the same price, ordinary people’s devices are no different from those used by Taylor Swift or LeBron James. That is technological equality.

AI is heading down the same path. What ChatGPT brings is essentially equality in knowledge and ability. The model doesn’t know who you are, nor does it care; it simply responds based on the same set of parameters.

Therefore, whether an agent burns 100 million or 1 billion tokens doesn’t inherently determine superiority or inferiority. The real difference lies in whether the goals are clear, whether the structure is rational, and whether the questions are properly posed.

More valuable skills are those that produce greater results with fewer tokens. The upper limit of using an agent depends on human judgment and design, not on how long your bank card can sustain burning tokens. In reality, AI’s rewards for creativity, insight, and structure far surpass those for mere consumption.

This is the tool-level equality, and it is also where humans still hold the initiative.

How should we face AI anxiety?

Friends studying broadcasting and television were deeply shocked after watching the video about Seedance 2.0: “Now, all the roles we study—directing, editing, cinematography—are going to be replaced by AI.”

AI is developing too fast, and humanity seems headed for defeat. Many jobs are destined to be replaced by AI, and the trend looks unstoppable; when the steam engine was invented, coachmen had no place left.

Many people start to worry whether they can adapt to future society after being replaced by AI. Rationally, we know that as AI replaces humans, new job opportunities will also emerge.

But this pace of replacement is faster than we imagine.

If your data, skills, even your humor and emotional value can all be performed better by AI, then why would a boss choose humans? And what if the boss is AI itself? So some lament, "Ask not what AI can do for you, but what you can do for AI," a true arrival of the Adventists (降临派).

Max Weber, a philosopher who lived through the Second Industrial Revolution of the late 19th century, proposed the concept of instrumental rationality: the concern with "what means achieve a predetermined goal at the lowest cost and in the most calculable way."

The starting point of instrumental rationality is this: do not question whether the goal "should" be pursued; care only about "how" best to achieve it.

And this way of thinking is precisely the first principle of AI.

AI agents are concerned with how to better accomplish the given task—how to write better code, generate better videos, craft better articles. In this tool-oriented dimension, AI’s progress is exponential.

Since Lee Sedol lost the first game to AlphaGo, humans have forever lost to AI in Go.

Max Weber warned of the "iron cage of rationality": when instrumental rationality dominates, the goal itself is no longer questioned; all that remains is how to operate more efficiently. People may become highly rational while losing value judgment and a sense of meaning.

But AI needs neither value judgments nor a sense of meaning. It computes functions of production efficiency and economic benefit, searching for the global maximum of the utility curve.

Therefore, under capitalism, the current system dominated by instrumental rationality, AI is inherently better adapted. The moment ChatGPT was born, just as when Lee Sedol lost that game, we lost to AI agents; it was written into the code of the universe, and we merely pressed the run button. The only question is when the wheel of history will roll over us.

What should humans do then?

Humans should pursue meaning.

In Go, the despairing fact is that a top professional nine-dan player's chance of even drawing a game against AI is, in theory, approaching zero.

But the game of Go still exists. Its meaning is no longer just about winning or losing but has become a form of aesthetic and expression. Professional players pursue not only victory but also the structures discussed in Go, the choices made during matches, the thrill of turning disadvantages into advantages, and the conflicts in complex positions.

Humans seek beauty, value, and happiness.

Bolt’s 100-meter record is 9.58 seconds, and a Ferrari can do 100 meters in less than 3 seconds, but that doesn’t diminish Bolt’s greatness. Because Bolt symbolizes the human spirit of challenging limits and pursuing excellence.

The more powerful AI becomes, the more humans have the right to pursue spiritual freedom.

Max Weber contrasted instrumental rationality with value rationality. In a worldview guided by value rationality, choosing whether to do something is not solely based on economic interests or productivity; instead, whether it is worth doing, whether it aligns with one’s meaning, beliefs, or responsibilities, is more important.

I asked ChatGPT: if the Louvre catches fire and there’s a cute kitten inside, and you can only save one—would you save the cat or the masterpiece?

It answered: save the cat, providing a long list of reasons.

But I also asked: why not save the masterpiece? It immediately changed its answer to, “Saving the masterpiece is also an option.”

Obviously, for ChatGPT, saving the cat or the masterpiece makes no difference. It simply completes the context recognition, performs reasoning based on the underlying formula of the large model, burns some tokens, and completes a human command.

As for whether to save the cat or the masterpiece, or why to think about such questions—ChatGPT doesn’t care.

Therefore, what truly matters is not whether we will be replaced by AI, but whether, as AI makes the world more efficient, we still want to leave space for happiness, meaning, and value.

Becoming someone who is better at using AI is important, but before that, perhaps even more important is not to forget how to be human.
