Silicon Valley Takes Artificial General Intelligence Seriously—Washington Must Too

Artificial General Intelligence (AGI), once confined to the realm of science fiction, is now a looming reality that demands urgent attention from lawmakers. Recent whistleblower testimony at a Senate Judiciary Subcommittee hearing revealed both rapidly advancing technology and a worrying lack of oversight, a combination with profound implications for society.

On September 17, during the hearing titled “Oversight of AI: Insiders’ Perspectives,” former insiders at leading AI firms raised alarms about the ambitious goals of companies such as OpenAI, Google, and Anthropic. Helen Toner, a former OpenAI board member and current director of strategy at Georgetown University’s Center for Security and Emerging Technology, stressed the significant disconnect between insider and public perceptions of AI. Major AI firms, she asserted, treat the pursuit of AGI as a serious and achievable goal. “The biggest disconnect that I see…is the idea of artificial general intelligence,” Toner stated, underscoring the urgency of the matter.

Joining her, William Saunders, a former OpenAI researcher who resigned over concerns about the company’s practices, echoed this sentiment. Saunders testified that these companies are not only pursuing AGI but are pouring billions of dollars into the effort. “Companies like OpenAI are working towards building artificial general intelligence,” he remarked, pointing to the financial and intellectual resources dedicated to this goal.

All three leading AI labs have made their ambitions clear. OpenAI’s mission statement emphasizes the importance of ensuring that AGI benefits humanity, while Anthropic focuses on developing safe and interpretable AI systems. Google DeepMind’s co-founder, Shane Legg, has even predicted that human-level AI could be achieved by the mid-2020s. New entrants into the AI race, including Elon Musk’s xAI and Ilya Sutskever’s Safe Superintelligence Inc., are similarly focused on AGI, raising questions about the competitive landscape and ethical considerations in the sector.

Despite these advancements, many policymakers in Washington have long dismissed AGI as either marketing hype or an abstract concept. The recent hearing, however, signaled a shift in perception. Senator Josh Hawley (R-MO) noted that the witnesses were experts with firsthand experience of these technologies, and less inclined to paint a rosy picture than corporate executives. Similarly, Senator Richard Blumenthal (D-CT), the subcommittee Chair, acknowledged that the prospect of AGI arriving within a decade is no longer far-fetched. “It’s here and now—one to three years has been the latest prediction,” he asserted, emphasizing the need for caution.

The growing acceptance of AGI’s potential within Washington reflects changing public opinion. A July 2023 survey by the AI Policy Institute indicated that a majority of Americans believe AGI will be developed within the next five years, with 82% advocating for a cautious and deliberate approach to AI development.

The stakes in the pursuit of AGI are enormous. Saunders warned that the technology could enable devastating cyberattacks or the creation of “novel biological weapons.” Toner went further, cautioning that in a worst-case scenario, AGI could even result in human extinction.

Yet despite the gravity of these warnings, the U.S. has implemented virtually no regulatory oversight of the companies racing toward AGI. This regulatory vacuum leaves society exposed to the risks of rapid advances in AI technology.

As Silicon Valley intensifies its focus on AGI, it is imperative for Washington to take this issue seriously. The current trajectory poses not only ethical dilemmas but also existential threats that demand immediate and informed legislative action. Without proactive measures, the very technology designed to enhance human capabilities could instead pose catastrophic risks. The time for robust regulatory frameworks and a comprehensive understanding of AGI is now.