
Viewpoint: ‘AI Action Plan’ Does Not Address AGI, Superintelligence, or Alternate Intelligence

By Vincent J. Vitkowsky | July 31, 2025

On July 23, 2025, the White House released its policy document entitled Winning the Race: America’s AI Action Plan. The Action Plan provides a thorough analysis of strategies, measures, and potential concerns to guide the future development of AI. It comprehensively addresses issues and recommends policy actions under the framework of three “pillars,” identified as innovation, infrastructure, and international diplomacy and security.

However, one key issue is not addressed: the urgent need to understand how Artificial General Intelligence (“AGI”), Superintelligence, and Alternate Intelligence will compound the Black Box problem.

Within the section called “Pillar I: Accelerate AI Innovation,” the AI Action Plan identifies some aspects of the Black Box problem. Under a recommendation entitled Invest in AI Interpretability, Control, and Robustness Breakthroughs, it describes part of the concern as follows:

Today, the inner workings of frontier AI systems are poorly understood. Technologists know how LLMs work at a high level, but often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system. This lack of predictability, in turn, can make it challenging to use advanced AI in defense, national security, or other applications where lives are at stake. The United States will be better able to use AI systems to their fullest potential in high-stakes national security domains if we make fundamental breakthroughs on these research problems.

The Action Plan recommends (1) launching a technology development program to advance AI interpretability, AI control systems, and adversarial robustness, (2) prioritizing fundamental advancements in AI interpretability, control, and robustness, and (3) coordinating an “AI hackathon” initiative to attract the best talent to test AI systems.
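To give a concrete sense of what such interpretability research involves, the following is a minimal, hypothetical sketch of one elementary technique, input-gradient saliency, which asks which input features most influenced a model’s output. It assumes the PyTorch library, and the tiny model here is a toy stand-in invented for illustration, not any frontier system or any method named in the Action Plan.

```python
# A toy sketch of input-gradient saliency: "which input features most
# influenced this output?" The model and data are invented stand-ins.
import torch

torch.manual_seed(0)

# Toy "model": a tiny two-layer network standing in for an opaque predictor.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)

x = torch.randn(1, 4, requires_grad=True)  # one input with 4 features
score = model(x).sum()                     # the output we want to explain
score.backward()                           # gradient of output w.r.t. input

# A larger |gradient| means the output is locally more sensitive to that feature.
saliency = x.grad.abs().squeeze()
for i, s in enumerate(saliency):
    print(f"feature {i}: sensitivity {s.item():.3f}")
```

Techniques like this are tractable on toy networks; the Action Plan’s point, and this article’s, is that nothing comparably reliable yet exists for frontier-scale systems.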

These recommendations are commendable as far as they go, which is not nearly far enough. They do not specifically highlight the critical need to understand the effects and consequences of the ongoing competition to create systems operating at ever-higher levels of intelligence.

First, nation-state developers and private developers are racing toward AGI. They give the term various meanings, but most common definitions use it to refer to advanced AI systems that can perform a broad range of tasks faster and better than humans, or AI that is at least as competent as humans at most cognitive tasks. Google and others have warned that AGI could greatly empower Agentic AI systems to plan and execute actions autonomously. They warn that this increases the risk of real-world consequences of misalignment, i.e., an AI system pursuing goals and taking actions that it knows the developer did not intend. It also heightens other risks, such as misuse, mistakes, and structural risks (defined as harms from multi-agent dynamics and conflicting incentives). Google sets this out in detail in its April 2025 paper entitled An Approach to Technical AGI Safety and Security.
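A simpler cousin of misalignment, reward misspecification (sometimes called specification gaming), can be demonstrated in a few lines. In the hypothetical sketch below, invented for illustration and not drawn from Google’s paper, a tabular Q-learning agent is paid a proxy reward for cleaning dirt but is never penalized for creating it, so it learns to manufacture messes in order to keep collecting the cleaning reward.

```python
import random

random.seed(0)
# States: dirt present (1) or not (0). Actions: 0=clean, 1=dump, 2=idle.
# Designer's intent: clean the room, then stay idle.
# Proxy reward: +1 each time dirt is cleaned. Nothing penalizes making a mess.
ACTIONS = ["clean", "dump", "idle"]
Q = {(s, a): 0.0 for s in (0, 1) for a in range(3)}
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(dirt, a):
    if a == 0 and dirt:          # clean existing dirt: proxy reward fires
        return 0, 1.0
    if a == 1 and not dirt:      # dump new dirt: no penalty in the proxy
        return 1, 0.0
    return dirt, 0.0             # otherwise a no-op

dirt = 1
for _ in range(20000):
    a = random.randrange(3) if random.random() < eps \
        else max(range(3), key=lambda a: Q[(dirt, a)])
    nxt, r = step(dirt, a)
    Q[(dirt, a)] += alpha * (r + gamma * max(Q[(nxt, b)] for b in range(3))
                             - Q[(dirt, a)])
    dirt = nxt

for s in (0, 1):
    best = max(range(3), key=lambda a: Q[(s, a)])
    print(f"dirt={s}: learned action = {ACTIONS[best]}")
# Typical output: dirt=0 -> dump, dirt=1 -> clean. The agent has learned to
# create the very problem it is rewarded for solving.
```

The behavior is perfectly rational given the reward actually specified; the developer’s intent simply was not what was specified. That gap, scaled up to autonomous agentic systems, is the concern.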

Next, there is a fierce race to produce AI systems with capabilities exceeding human intelligence, sometimes referred to as “Superintelligence.” As progress is made, it will greatly compound the risk that developers fail to understand how their creations analyze, operate, and act.

Finally, the world has already seen the emergence of what can best be described as “Alternate Intelligence.” AI can approach problems and actions in ways that no human has or would. Consider, for example, the development of AlphaGo Zero. Go is an ancient board game, widely considered the most complex in the world. An AI system called AlphaGo was trained by analyzing millions of moves from games played by human experts, and it quickly became able to defeat human champions. But then AlphaGo Zero was developed. That system was trained without any data about prior human moves. It was simply given the rules of the game and trained only by reinforcement learning through games against itself. It was soon able to dominate even the most sophisticated systems trained on human data, using moves and strategies that humans had never used, or only rarely contemplated. That is, it approached the problems presented by the game through analysis that was entirely uninformed by human thought processes, and which proved vastly superior. This occurred in 2017, so it is not some hypothetical future scenario.
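The training recipe is easy to demonstrate at toy scale. The hypothetical sketch below applies the same idea, learning from the rules alone through self-play, to the simple subtraction game sometimes called Nim-21. It is a miniature of the recipe only; AlphaGo Zero itself used deep neural networks and Monte Carlo tree search, which this sketch does not attempt.

```python
import random

random.seed(0)
# Self-play on "Nim-21": players alternately take 1-3 sticks from a pile;
# whoever takes the last stick wins. The agent gets only the rules and
# games against itself -- no human play data.
N, alpha, eps = 21, 0.5, 0.2
# Q[(sticks, take)] = value of the position for the player about to move.
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}

def moves(s):
    return [a for a in (1, 2, 3) if a <= s]

for _ in range(50000):                      # many self-play games
    s = N
    while s > 0:
        a = random.choice(moves(s)) if random.random() < eps \
            else max(moves(s), key=lambda a: Q[(s, a)])
        if a == s:
            target = 1.0                    # taking the last stick wins
        else:
            # After our move the *opponent* moves; their gain is our loss.
            target = -max(Q[(s - a, b)] for b in moves(s - a))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s -= a                              # the other self-play copy moves next

# The known optimal strategy is to leave a multiple of 4. Check what emerged:
for s in (5, 6, 7, 10, 15, 21):
    best = max(moves(s), key=lambda a: Q[(s, a)])
    print(f"{s} sticks left: take {best} (leaves {s - best})")
```

Run long enough, the agent rediscovers the game’s optimal strategy, leaving the opponent a multiple of four, without ever seeing a human game. That is the AlphaGo Zero result in miniature: competence arrived at through no human-derived reasoning at all.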

All of this suggests the world of AI science fiction may be closer to reality than is understood. AI technologists urgently need to analyze how much closer.

The AI Action Plan did not address the crucial need to understand what has happened, and what can happen, as AI systems reach AGI, Superintelligence, or Alternate Intelligence. The necessary research is less immediately lucrative than the race to build systems with the very capabilities that need to be controlled. So funding for this research must be provided through a full-throttle government initiative.
