This Microsoft Entra ID Vulnerability Could Have Been Catastrophic

As businesses around the world have shifted their digital infrastructure over the last decade from self-hosted servers to the cloud, they’ve benefited from the standardized, built-in security features of major cloud providers like Microsoft. But with so much riding on these systems, there can be potentially disastrous consequences at a massive scale if something goes wrong. Case in point: Security researcher Dirk-jan Mollema recently stumbled upon a pair of vulnerabilities in Microsoft Azure’s identity and access management platform that could have been exploited for a potentially cataclysmic takeover of all Azure customer accounts.

Known as Entra ID, the system stores each Azure cloud customer’s user identities, sign-in access controls, applications, and subscription management tools. Mollema has studied Entra ID security in depth and published multiple studies about weaknesses in the system, which was formerly known as Azure Active Directory. But while preparing to present at the Black Hat security conference in Las Vegas in July, Mollema discovered two vulnerabilities that he realized could be used to gain global administrator privileges—essentially god mode—and compromise every Entra ID directory, or what is known as a “tenant.” Mollema says that this would have exposed nearly every Entra ID tenant in the world other than, perhaps, government cloud infrastructure.

“I was just staring at my screen. I was like, ‘No, this shouldn’t really happen,’” says Mollema, who runs the Dutch cybersecurity company Outsider Security and specializes in cloud security. “It was quite bad. As bad as it gets, I would say.”

“From my own tenants—my test tenant or even a trial tenant—you could request these tokens and you could impersonate basically anybody else in anybody else’s tenant,” Mollema adds. “That means you could modify other people’s configuration, create new and admin users in that tenant, and do anything you would like.”

Given the seriousness of the vulnerability, Mollema disclosed his findings to the Microsoft Security Response Center on July 14, the same day that he discovered the flaws. Microsoft started investigating the findings that day and issued a fix globally on July 17. The company confirmed to Mollema that the issue was fixed by July 23 and implemented extra measures in August. Microsoft issued a CVE for the vulnerability on September 4.

“We mitigated the newly identified issue quickly, and accelerated the remediation work underway to decommission this legacy protocol usage, as part of our Secure Future Initiative,” Tom Gallagher, Microsoft’s Security Response Center vice president of engineering, told WIRED in a statement. “We implemented a code change within the vulnerable validation logic, tested the fix, and applied it across our cloud ecosystem.”

Gallagher says that Microsoft found “no evidence of abuse” of the vulnerability during its investigation.

Both vulnerabilities relate to legacy systems still functioning within Entra ID. The first involves a type of Azure authentication token Mollema discovered, known as Actor Tokens, which are issued by an obscure Azure mechanism called the “Access Control Service.” Actor Tokens have some special system properties that Mollema realized could be useful to an attacker when combined with another vulnerability. The other bug was a major flaw in a historic Azure Active Directory application programming interface known as “Graph,” which was used to facilitate access to data stored in Microsoft 365. Microsoft is in the process of retiring Azure Active Directory Graph and transitioning users to its successor, Microsoft Graph, which is designed for Entra ID. The flaw stemmed from Azure AD Graph’s failure to properly validate which Azure tenant was making an access request; as a result, the API could be manipulated into accepting an Actor Token from a different tenant that should have been rejected.
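The class of bug described above can be illustrated with a deliberately simplified sketch. This is not Microsoft’s code: the token fields, the audience string, and the function names are all hypothetical, and real token validation also involves cryptographic signature checks omitted here. The point is the one missing comparison, where the vulnerable path never checks that the token’s issuing tenant matches the tenant whose data is being requested.

```python
# Hypothetical sketch of a cross-tenant token-validation flaw.
# Field names and logic are illustrative, not Microsoft's implementation.

from dataclasses import dataclass

@dataclass
class ActorToken:
    tenant_id: str   # tenant the token was issued in
    actor: str       # identity the token impersonates
    audience: str    # service the token is intended for

def validate_vulnerable(token: ActorToken, target_tenant: str) -> bool:
    # Flawed logic: accepts any well-formed token with the right audience,
    # regardless of which tenant issued it.
    return token.audience == "graph.windows.net"

def validate_fixed(token: ActorToken, target_tenant: str) -> bool:
    # Corrected logic: the issuing tenant must also match the tenant
    # whose data is being requested.
    return (token.audience == "graph.windows.net"
            and token.tenant_id == target_tenant)

# A token minted in an attacker-controlled trial tenant:
attacker_token = ActorToken(tenant_id="attacker-tenant",
                            actor="Administrator",
                            audience="graph.windows.net")

print(validate_vulnerable(attacker_token, "victim-tenant"))  # True: cross-tenant request accepted
print(validate_fixed(attacker_token, "victim-tenant"))       # False: request rejected
```

Under this (assumed) model, a token from the attacker’s own tenant passes the vulnerable check against any victim tenant, which mirrors Mollema’s observation that tokens requested from a test or trial tenant could impersonate users elsewhere.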


Trump considering federal AI model oversight

White House officials are exploring official government oversight of new AI models, according to the New York Times.

U.S. officials, speaking on the condition of anonymity, told the publication that the Trump administration is forming an AI working group composed of tech leaders and government representatives. The group will be tasked with outlining potential oversight procedures for new models launching to market, including formal review processes, the Times reported.

The proposed plans were discussed at a White House meeting last week with representatives from Anthropic, Google, and OpenAI.

Potentially influenced by regulatory processes announced by UK regulators, which delegate AI oversight to relevant government bodies, the working group would also determine which U.S. agencies would be tasked with oversight. Some officials have suggested the National Security Agency (NSA), the White House Office of the National Cyber Director, and the director of national intelligence take the lead, while others have even suggested revitalizing the Biden-era Center for A.I. Standards and Innovation, according to the Times.

The administration has reversed its stance on AI regulation in recent months, despite having announced a federal AI action plan that pulled back on regulation of tech companies and threatened to reduce federal funding for states that impeded AI infrastructure efforts through regulation. Trump’s One Big Beautiful Bill also included limits on state governments’ AI regulation, originally proposing a 10-year moratorium on state action in favor of federal oversight.

Trump appointee and FCC chairman Brendan Carr has also advocated for a light-touch approach to AI regulation.
