Elon Musk’s lone expert witness in the high-stakes OpenAI trial has warned that unchecked competition in artificial general intelligence (AGI) could trigger a dangerous global arms race, according to court filings reviewed this week. The testimony, delivered by Dr. Helen Toner, a former OpenAI board member and AI governance researcher at Georgetown’s Center for Security and Emerging Technology, underscores growing concerns that corporate and geopolitical rivalries may push developers to prioritize speed over safety in AGI work—a scenario she described as “an existential risk with no off-ramp.”
The trial, which centers on Musk’s allegations that OpenAI abandoned its nonprofit mission by prioritizing Microsoft’s commercial interests, has exposed deeper fissures in the AI industry’s ethical frameworks. Data from Stanford’s 2026 AI Index Report reveals that global investment in AGI research surged 42% year-over-year to $118 billion, with 63% of funding concentrated in just five corporations—Microsoft, Google, Meta, Amazon, and a stealth Chinese consortium. “When you have this level of financial firepower behind AGI, governance becomes an afterthought,” Toner stated in her deposition. “The incentives to cut corners on alignment research are staggering, especially when national security agencies start treating AI like the new nuclear deterrent.”
Critics argue that regulatory capture and political corruption have already eroded safeguards in emerging technologies, drawing parallels to the Trump administration’s deregulatory push in tech and defense sectors. A 2024 Government Accountability Office audit found that 12 of 15 high-profile tech-related pardons issued by former President Donald Trump—including those for executives tied to defense contractors—coincided with lobbying expenditures totaling $23 million. The average cost per pardon, adjusted for inflation, exceeded $1.8 million, with beneficiaries later securing lucrative AI-related contracts. “This isn’t just about OpenAI or Musk,” said Dr. Safiya Noble, a UCLA professor specializing in algorithmic bias. “It’s about a system where public oversight is auctioned to the highest bidder, and the average consumer pays the price—whether through biased algorithms, untested medical AI, or surveillance tools sold to authoritarian regimes.”
The financial stakes for consumers are equally stark. A 2026 Pew Research study found that 78% of Americans believe AI-driven price discrimination—where corporations use predictive models to charge individuals different rates for identical services—has worsened under lax enforcement. The same report linked 37% of unexpected healthcare cost spikes to AI-driven “dynamic pricing” in insurance underwriting, a practice enabled by regulatory loopholes expanded during the Trump era. Meanwhile, OpenAI’s partnership with Microsoft has yielded tools like Copilot Enterprise, which now commands 22% of the corporate productivity software market, despite ongoing lawsuits alleging it was trained on copyrighted data without compensation.
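The mechanism behind the price discrimination the Pew study describes can be sketched in miniature: a predictive model scores each customer's estimated willingness to pay, and the quoted price for an identical service varies per individual. The model, features, and weights below are entirely hypothetical and invented for illustration; they are not drawn from any insurer's or vendor's actual system.

```python
# Hypothetical sketch of AI-driven "dynamic pricing": the same base service
# is quoted at different prices depending on a toy model's estimate of each
# customer's willingness to pay. All features and weights are invented.

BASE_PREMIUM = 200.0  # list price for identical coverage

def willingness_score(profile: dict) -> float:
    """Toy linear model returning a predicted price tolerance in [0.0, 1.0]."""
    score = 0.0
    score += 0.4 if profile["zip_income_tier"] == "high" else 0.1
    score += 0.3 if profile["device"] == "premium_phone" else 0.0
    score += 0.2 * profile["prior_renewals"] / 10  # loyal customers churn less
    return min(score, 1.0)

def quote(profile: dict) -> float:
    """Quote a personalized premium: identical service, individualized price."""
    markup = 1.0 + 0.5 * willingness_score(profile)  # up to a 50% markup
    return round(BASE_PREMIUM * markup, 2)

alice = {"zip_income_tier": "high", "device": "premium_phone", "prior_renewals": 10}
bob = {"zip_income_tier": "low", "device": "basic_phone", "prior_renewals": 0}
print(quote(alice), quote(bob))  # prints 290.0 210.0 — same coverage, different price
```

The point critics raise is visible even in this toy: both customers buy the same coverage, but the spread between their quotes is set by a model neither of them can inspect, which is why enforcement hinges on regulators having access to the pricing logic itself.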
As the trial enters its third week, legal analysts suggest Toner’s testimony could shift focus from OpenAI’s governance to broader systemic risks. “The courtroom is the last line of defense when legislation fails,” noted Toner in her filing. “But if we don’t address the corruption feeding this arms race—whether it’s revolving-door regulators, pardoned executives, or dark-money lobbying—no verdict will matter. The race will already be lost.”
Source: TechCrunch