This Robot Only Needs a Single AI Model to Master Humanlike Movements

While there is a lot of work to do, Tedrake says all of the evidence so far suggests that the approaches used for LLMs also work for robots. “I think it’s changing everything,” he says.

Gauging progress in robotics has become more challenging of late, of course, with video clips showing commercial humanoids performing complex chores, like loading refrigerators or taking out the trash, with seeming ease. YouTube clips can be deceptive, though, and humanoid robots tend to be either teleoperated, carefully programmed in advance, or trained to do a single task in very controlled conditions.

The new Atlas work is a strong sign that robotics is beginning to see the kind of advances that, in generative AI, ultimately led to the general language models behind ChatGPT. Eventually, such progress could give us robots that can operate in a wide range of messy environments with ease and rapidly learn new skills—from welding pipes to making espressos—without extensive retraining.

“It’s definitely a step forward,” says Ken Goldberg, a roboticist at UC Berkeley who receives some funding from TRI but was not involved with the Atlas work. “The coordination of legs and arms is a big deal.”

Goldberg says, however, that the idea of emergent robot behavior should be treated carefully. Just as the surprising abilities of large language models can sometimes be traced to examples included in their training data, he says that robots may demonstrate skills that seem more novel than they really are. He adds that it is helpful to know how often a robot succeeds and in what ways it fails during experiments. TRI has previously been transparent about its work on LBMs (large behavior models) and may well release more data on the new model.

Whether simply scaling up the data used to train robot models will unlock ever more emergent behavior remains an open question. At a debate held in May at the International Conference on Robotics and Automation in Atlanta, Goldberg and others cautioned that engineering methods will also play an important role going forward.

Tedrake, for one, is convinced that robotics is nearing an inflection point—one that will enable more real-world use of humanoids and other robots. “I think we need to put these robots out in the world and start doing real work,” he says.

What do you think of Atlas’ new skills? And do you think that we are headed for a ChatGPT-style breakthrough in robotics? Let me know your thoughts on ailab@wired.com.


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.

Trump considering federal AI model oversight

White House officials are exploring official government oversight of new AI models, according to the New York Times.

U.S. officials, speaking on the condition of anonymity, told the publication that the Trump administration is forming an AI working group composed of tech leaders and government representatives. The group will be tasked with outlining potential oversight procedures for new models launching to market, including formal review processes, the Times reported.

The proposed plans were discussed at a White House meeting last week with representatives from Anthropic, Google, and OpenAI.

Potentially influenced by regulatory processes announced by UK regulators, which delegate AI oversight to relevant government bodies, the working group would also determine which U.S. agencies would be tasked with oversight. Some officials have suggested that the National Security Agency (NSA), the White House Office of the National Cyber Director, and the director of national intelligence take the lead, while others have even suggested revitalizing the Biden-era Center for A.I. Standards and Innovation, according to the Times.

The administration has reversed its stance on AI regulation in recent months, having previously announced a federal AI action plan that pulled back on regulation of tech companies and threatened to reduce federal funding for states that impeded AI infrastructure efforts through regulation. Trump’s One Big Beautiful Bill also included limits on state governments’ AI regulation, originally proposing a 10-year moratorium on state action in favor of federal oversight.

Trump appointee and FCC chairman Brendan Carr has also advocated for a light-touch approach to AI regulation.


Elon Musk will settle the feds’ Twitter lawsuit with pocket change

On May 4, 2026, the U.S. Securities and Exchange Commission filed an amended complaint to add the Elon Musk Revocable Trust dated July 22, 2003 (the “Revocable Trust”) as a defendant to this action. The amended complaint alleges that the defendants failed to timely file a beneficial ownership report with the Commission after the Revocable Trust acquired beneficial ownership of more than five percent of the outstanding shares of Twitter, Inc. common stock, in violation of the beneficial ownership reporting requirements under the Securities Exchange Act of 1934 (“Exchange Act”).

The SEC simultaneously moved for entry of a consent final judgment as to the Revocable Trust. Without admitting or denying the allegations of the complaint as to the Revocable Trust, the Revocable Trust consented to entry of a final judgment, subject to court approval, that would permanently enjoin it from violating Section 13(d) of the Exchange Act and Rule 13d-1 thereunder and order it to pay a civil penalty of $1.5 million.

As explained in the consent motion, if the court enters the proposed final judgment as to the Revocable Trust as proposed by the Revocable Trust and the SEC, the SEC will file a stipulated dismissal of Elon Musk in his personal capacity, which will resolve this case in its entirety.

