Washington is Waking Up to AI’s Risks About Three Years Too Late
In an era where some of the world’s largest corporations are deeply invested in artificial intelligence, U.S. lawmakers are only now beginning to recognize the potential dangers posed by this rapidly advancing technology. Despite AI’s complexity and the urgent pleas from its creators to slow development, Congress has yet to pass any significant legislation to regulate it.
The lack of legislative action is stark, especially when compared to the federal oversight of industries like narcotics, cigarettes, and even social media platforms such as TikTok. Although a bipartisan “roadmap” for AI regulation was released last month, its prospects in an election year remain uncertain. Ironically, one of the roadmap’s goals is to prevent AI from interfering with the electoral process.
Currently, the responsibility for keeping AI companies in check falls to the underfunded Federal Trade Commission (FTC) and the Justice Department. Antitrust officials from the two agencies are reportedly close to finalizing an agreement on how to jointly oversee major AI players like Microsoft, Google, Nvidia, and OpenAI. The deal would pave the way for investigations, though any crackdown may come too late to rein in an industry that has already grown largely unchecked.
The rapid growth of AI is exemplified by Nvidia, a chipmaker little known outside the industry until recently, which just joined the $3 trillion market cap club and briefly surpassed Apple to become the second most valuable publicly traded company in the U.S. Microsoft, the current leader by market cap, owes much of its position to its investments in OpenAI, the creator of ChatGPT.
AI’s swift rise from academic topic to Wall Street sensation was catalyzed by OpenAI’s release of ChatGPT, which ignited a financial boom. This surge has occurred largely unregulated, as U.S. lawmakers have struggled to keep pace with technological advancements. In contrast, European officials have already adopted the world’s first standalone AI law, five years after initial proposals.
A group of current and former OpenAI employees recently highlighted the risks of this unregulated growth in an open letter. They warned that AI companies have strong financial motivations to resist effective oversight. These employees, often bound by strict confidentiality agreements, are among the few who can hold these corporations accountable, but their ability to speak out is limited.
In essence, the current situation relies on the self-regulation of newly wealthy tech executives, raising concerns about what might go wrong without more robust government intervention.