New York passes a bill to prevent AI-fueled disasters

New York state lawmakers passed a bill on Thursday that aims to prevent frontier AI models from OpenAI, Google, and Anthropic from contributing to disaster scenarios, including the death or injury of more than 100 people, or more than $1 billion in damages.

The passage of the RAISE Act represents a win for the AI safety movement, which has lost ground in recent years as Silicon Valley and the Trump administration have prioritized speed and innovation. Safety advocates including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio have championed the RAISE Act. Should it become law, the bill would establish America’s first set of legally mandated transparency standards for frontier AI labs.

The RAISE Act has some of the same provisions and goals as California’s controversial AI safety bill, SB 1047, which was ultimately vetoed. However, the co-sponsor of the bill, New York state Senator Andrew Gounardes, told TechCrunch in an interview that he deliberately designed the RAISE Act such that it doesn’t chill innovation among startups or academic researchers — a common criticism of SB 1047.

“The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” said Senator Gounardes. “The people that know [AI] the best say that these risks are incredibly likely […] That’s alarming.”

The RAISE Act is now headed for New York Governor Kathy Hochul’s desk, where she could either sign the bill into law, send it back for amendments, or veto it altogether.

If signed into law, New York’s AI safety bill would require the world’s largest AI labs to publish thorough safety and security reports on their frontier AI models. The bill also requires AI labs to report safety incidents, such as concerning AI model behavior or bad actors stealing an AI model, should they happen. If tech companies fail to live up to these standards, the RAISE Act empowers New York’s attorney general to bring civil penalties of up to $30 million.

The RAISE Act aims to narrowly regulate the world’s largest companies — whether they’re based in California (like OpenAI and Google) or China (like DeepSeek and Alibaba). The bill’s transparency requirements apply to companies whose AI models were trained using more than $100 million in computing resources (seemingly, more than any AI model available today), and are being made available to New York residents.

While similar to SB 1047 in some ways, the RAISE Act was designed to address criticisms of previous AI safety bills, according to Nathan Calvin, the vice president of State Affairs and general counsel at Encode, who worked on this bill and SB 1047. Notably, the RAISE Act does not require AI model developers to include a “kill switch” on their models, nor does it hold companies that post-train frontier AI models accountable for critical harms.

Nevertheless, Silicon Valley has pushed back significantly on New York’s AI safety bill, New York state Assemblymember and RAISE Act co-sponsor Alex Bores told TechCrunch. Bores called the industry resistance unsurprising, but claimed that the RAISE Act would not limit tech companies’ innovation in any way.

“The NY RAISE Act is yet another stupid, stupid state level AI bill that will only hurt the US at a time when our adversaries are racing ahead,” said Andreessen Horowitz general partner Anjney Midha in a Friday post on X. Andreessen Horowitz and startup incubator Y Combinator were some of the fiercest opponents to SB 1047.

Anthropic, the safety-focused AI lab that called for federal transparency standards for AI companies earlier this month, has not reached an official stance on the bill, co-founder Jack Clark said in a Friday post on X. However, Clark expressed some grievances over how broad the RAISE Act is, noting that it could present a risk to “smaller companies.”

When asked about Anthropic’s criticism, state Senator Gounardes told TechCrunch he thought it “misses the mark,” noting that he designed the bill not to apply to small companies.

OpenAI, Google, and Meta did not respond to TechCrunch’s request for comment.

Another common criticism of the RAISE Act is that AI model developers would simply decline to offer their most advanced models in New York. A similar criticism was leveled against SB 1047, and it’s largely what has played out in Europe, thanks to the continent’s tough technology regulations.

Assemblymember Bores told TechCrunch that the regulatory burden of the RAISE Act is relatively light and therefore shouldn’t push tech companies to stop offering their products in New York. Given that New York has the third-largest state GDP in the U.S., pulling out of the state is not a decision most companies would take lightly.

“I don’t want to underestimate the political pettiness that might happen, but I am very confident that there is no economic reason for [AI companies] to not make their models available in New York,” said Assemblymember Bores.




Trump considering federal AI model oversight

White House officials are exploring official government oversight of new AI models, according to the New York Times.

U.S. officials, speaking on the condition of anonymity, told the publication that the Trump administration is forming an AI working group composed of tech leaders and government representatives. The group will be tasked with outlining potential oversight procedures for new models launching to market, including formal review processes, the Times reported.

The proposed plans were discussed at a White House meeting last week with representatives from Anthropic, Google, and OpenAI.

Potentially influenced by regulatory processes announced by UK regulators, which delegate AI oversight to relevant government bodies, the working group would also determine which U.S. agencies would be tasked with oversight. Some officials have suggested that the National Security Agency (NSA), the White House Office of the National Cyber Director, and the director of national intelligence take the lead, while others have suggested revitalizing the Biden-era Center for A.I. Standards and Innovation, according to the Times.

The administration has reversed its stance on AI regulation in recent months, having previously announced a federal AI action plan that pulled back on regulation of tech companies and threatened to reduce federal funding for states that impeded AI infrastructure efforts through regulation. Trump’s One Big Beautiful Bill also included limits on state governments’ AI regulation, originally proposing a 10-year moratorium on state action in favor of federal oversight.

Trump appointee and FCC chairman Brendan Carr has also advocated for a light-touch approach to AI regulation.
