Experts Urge Caution Against Pursuit of Artificial General Intelligence in Policy Paper
By The Chronicle Collective Updated March 5, 2025 6:06 pm ET
In a policy paper released Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks called on the United States to refrain from launching a Manhattan Project-style initiative aimed at developing AI systems with superhuman capabilities, a goal often described in terms of Artificial General Intelligence (AGI) or superintelligence. The document, titled "Superintelligence Strategy," outlines the risks of an accelerated pursuit of such systems and argues for a more measured, cautious approach.
The paper arrives as the race for advanced AI intensifies, with stakeholders ranging from tech companies to government agencies scrambling for breakthroughs. Schmidt, Wang, and Hendrycks argue that while AI could deliver significant societal benefits, an aggressive push toward superintelligence risks consequences that could endanger humanity.
The authors assert that the current trajectory of AI development, marked by rapid advances and competitive pressure, could yield systems that surpass human intelligence and operate beyond human control, with unpredictable outcomes. The paper outlines the philosophical and ethical dilemmas surrounding such technology, warning that it could deepen existing societal inequalities and create new challenges in governance and security.
According to the authors, the pursuit of AGI must be treated not merely as a technological challenge but as a complex socio-political issue requiring careful deliberation. They recommend a collaborative approach involving policymakers, researchers, and industry leaders to ensure that AI development aligns with broader societal values, and they urge stakeholders to prioritize transparency, safety, and ethics over a race toward superintelligence.
The Manhattan Project, which successfully developed the atomic bomb during World War II, serves as a historical reference point for the authors. They warn that the aggressive pursuit of AGI could mirror the wartime effort's urgency and secrecy, potentially leading to a similar lack of oversight and public discourse. The risks associated with such a scenario could be dire, they argue, given the profound implications of AI technology on daily life and global stability.
The release comes amid growing scrutiny of AI technologies from regulators and the public, driven by concerns over bias, privacy, and accountability. Schmidt, Wang, and Hendrycks advocate a framework built on international cooperation, shared best practices, and guidelines for ethical AI development, stressing that as the technology advances, stakeholders must engage in open discussion about the implications of their work.
In light of these recommendations, the authors call for a pause on projects aiming for superintelligent systems until robust safety protocols and ethical frameworks are established. They argue that a thoughtful approach can help mitigate risks while allowing society to reap the benefits of AI technology.
The policy paper from Schmidt, Wang, and Hendrycks lands at a critical juncture in the evolution of AI. As the field approaches unprecedented capabilities, the authors urge stakeholders to prioritize safety and ethics over speed in the race for AGI, so that technological progress does not come at the expense of humanity's future.