Chicago – OpenAI stepped onto the legislative stage on April 9, testifying in support of Illinois Senate Bill 3444, a proposal that would shield AI labs from lawsuits over "critical harms" caused by their most advanced models. The bill covers incidents that result in 100 or more deaths or injuries, or at least $1 billion in property damage, provided the lab did not act intentionally or recklessly and had posted safety, security, and transparency reports online.
The legislation defines a "frontier model" as any AI system trained with more than $100 million in computational costs. That definition captures the industry’s biggest players – OpenAI, Google, xAI, Anthropic and Meta – whose models routinely exceed that threshold.
"We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois," OpenAI spokesperson Jamie Radice said in an emailed statement. "They also help avoid a patchwork of state‑by‑state rules and move toward clearer, more consistent national standards."
OpenAI’s global affairs representative Caitlin Niedermeyer, who testified before the Senate committee, echoed the company's stance on federal harmonization. She warned that a fragmented landscape of state regulations could create friction without improving safety, and urged Congress to adopt a unified framework that would "reinforce a path toward harmonization with federal systems."
Critics remain skeptical. Scott Wisor, policy director for the Secure AI Project, told WIRED the bill faces slim odds of passage, pointing to a recent Illinois poll in which 90 percent of residents opposed exempting AI firms from liability. He added that the state has already moved toward stricter AI rules, such as limiting the use of AI in mental-health services and enforcing the Biometric Information Privacy Act.
SB 3444 lists several scenarios that qualify as critical harms, including the creation of chemical, biological, radiological or nuclear weapons by a bad actor using AI, and autonomous AI conduct that would be criminal if performed by a human. Under the bill, a lab would escape liability if it had not intentionally or recklessly caused the outcome and had complied with reporting requirements.
Federal lawmakers have yet to pass an AI-specific liability framework, leaving states to experiment with their own approaches. If the bill passes, Illinois would join California and New York, which have enacted laws requiring AI developers to submit safety and transparency reports. Absent a national standard, companies must navigate a patchwork of regulations that could hinder innovation.
OpenAI’s endorsement marks a shift in posture: rather than simply opposing measures that could expose it to lawsuits over its technology, the company is now actively backing legislation it believes will protect both public safety and the competitive edge of U.S. AI research.
Family members of children who died by suicide after allegedly forming unhealthy relationships with ChatGPT have filed lawsuits against OpenAI, underscoring individual-level harms that fall outside the bill's focus on large-scale events. The broader debate continues over how to address both personal and societal risks posed by increasingly powerful AI models.
As the industry watches Illinois’ effort, the outcome could set a precedent for how the United States balances accountability with rapid AI advancement.