Meta Could Spend $145 Billion This Year Due to AI

Wednesday was a big day for the tech industry, with Meta, Google, Amazon, and Microsoft all reporting earnings at the same time in the afternoon. Out of the four, though, Meta was the clear loser, with its shares down more than 7% even though revenue grew 33% this past quarter, the company’s fastest growth since 2021.

It’s probably because the company upped its already outrageous spending expectations for the year. Meta said that 2026 capital expenditures would be at least $10 billion more than expected and could top $145 billion. While emphasizing his “confidence in this investment,” CEO Mark Zuckerberg said that most of this increase was due to “higher component costs, particularly memory pricing.”

The AI boom has led to an unprecedented data center buildout that has constrained the global memory chip supply and increased prices for these valuable chips. The result has been a global memory crisis that has impacted not only Meta and the rest of the AI industry but also caused the prices of consumer electronics like laptops and smartphones to soar.

Meta’s $145 billion is a dramatic increase from the $72 billion capital expenditure it recorded just last year, and Zuckerberg is betting it all on an AI turnaround effort.
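
For scale, here is a minimal back-of-the-envelope sketch of what those figures imply, using only the numbers cited in this article (the 2026 total is still a projection, not a final figure):

capex_last_year = 72e9    # capital expenditure Meta recorded last year, per the article
capex_projected = 145e9   # upper end of Meta's projected 2026 spending, per the article

increase = capex_projected - capex_last_year
pct = increase / capex_last_year * 100
print(f"Projected increase: ${increase / 1e9:.0f} billion ({pct:.0f}% year over year)")
# Prints: Projected increase: $73 billion (101% year over year) -- roughly a doubling.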

Meta has been left behind in the AI race as industry rivals like Google have soared past. Roughly 10 months ago, Zuckerberg acknowledged the situation and announced a major catch-up effort, committing billions upon billions of dollars to research and development and poaching talent from all over the industry, including bringing in Scale AI’s founder Alexandr Wang to lead the new Meta Superintelligence Labs AI division.

Many have been reasonably nervous about this commitment, considering that the company’s latest big bet in emerging tech, the Metaverse, has flopped dramatically. In Wednesday’s earnings report, Meta said that the Reality Labs division, which had helmed the Metaverse efforts, notched an operating loss of more than $4 billion while bringing in only $402 million in sales. That adds to the whopping $80 billion-plus the division has lost over the past six years.

But experts are somewhat more hopeful about the AI bet because, earlier this month, the tech giant debuted the first fruits of that investment with the AI model Muse Spark, a proprietary model that the company plans to open-source in the future. It’s a step in the right direction, but Meta still has to do more before it can confidently say the catch-up effort is successful.

“This was the first release from Meta Superintelligence Labs, and it shows that our work is on track to build a leading lab,” Zuckerberg assured investors in the company’s earnings call. “Now that we have a strong model, we can develop more novel products as well.”

Those novel products will include two agents, one for personal use and the other for business use, according to Zuckerberg.

“We’re already testing an early version of business AIs and weekly conversations have grown 10x since the start of this year,” Zuckerberg said.

One area where AI is already clearly benefiting Meta is its own products. Meta CFO Susan Li said that more than half a billion weekly users on each of Facebook and Instagram are now watching videos translated and dubbed by AI. The company is also incorporating the new AI model into parts of its core business, like ads, and particularly into its recommendation system. The goal is to have the AI hyper-personalize feeds for users.

“Since our recommendation systems are operating at such large scale, we’ll phase in this new research and technology over time,” Zuckerberg said. “But the trend over the last few years seems clear that we are seeing an increasing return on the amount that we can improve engagement for people and value for advertisers.”

AI is also taking over internally at Meta. The company is laying off 10% of its workforce and reportedly offering voluntary buyouts to 7% of its U.S. staff, part of a purportedly AI-driven downsizing trend that has swept Silicon Valley.

On the call, executives wouldn’t say if the layoffs had to do with automation of jobs, but Li did say that a “leaner operating model” would help “offset the substantial investments we’re making.”

How Elon Musk Squeezed OpenAI: They ‘Are Gonna Want to Kill Me’

Elon Musk returned to the witness stand on Wednesday to continue telling his side of the story in his legal battle against OpenAI and its CEO Sam Altman. Under cross-examination from OpenAI’s lawyers, Musk was pressed on all the ways he tried to squeeze the organization over a 2017 power struggle that he ultimately lost. Around this time, Musk tried to hire away OpenAI researchers and stopped sending it funding he had previously promised, according to emails presented as evidence in the case.

As the cross-examination began, tension rippled through the courtroom. Judge Yvonne Gonzalez Rogers started the day by reprimanding someone in the gallery for taking a picture of Musk. OpenAI president and cofounder Greg Brockman sat behind his lawyers with a yellow legal pad in his lap, giving Musk a cold stare as he testified. Musk grew visibly frustrated on the witness stand, pausing frequently to tell OpenAI’s lawyer, William Savitt, that he saw his questions as misleading. Meanwhile, Savitt’s cross-examination was derailed by objections, technical issues, and Musk’s repeated claims that he could not recall key details of OpenAI’s history.

Savitt showed the courtroom emails from September 2017 between Musk, Brockman, and researcher Ilya Sutskever discussing the formation of what would become OpenAI’s for-profit arm. In the thread, Musk demanded the right to choose four members of its board of directors, giving him more voting power than his cofounders, who would be left with three in total. “I would unequivocally have initial control of the company, but this will change quickly,” said Musk in one message. Sutskever wrote back rejecting the idea because he said he feared it would give Musk too much power.

Months before these negotiations started, Musk had halted payments to OpenAI, which was particularly difficult for the organization because he was then its main source of funding. Since 2016, Musk had been sending $5 million payments to OpenAI quarterly as part of a broader $1 billion pledge he made at the organization’s launch. But in the spring of 2017, he stopped sending the money. In another email from August 2017, the head of Musk’s family office, Jared Birchall, asked Musk if he should continue withholding it. Musk responded simply, “Yes.”

Around the time Musk lost the power struggle, emails show that he held discussions with executives at Tesla and Neuralink, his brain-computer interface company, about hiring OpenAI employees. At the time, Musk was still a board member of OpenAI.

Musk sent an email to a Tesla vice president in June 2017 about hiring an early OpenAI researcher, Andrej Karpathy. “Just talked to Andrej and he accepted as joining as director of Tesla Vision,” Musk wrote. “Andrej is arguably the #2 guy in the world in computer vision … The openai guys are gonna want to kill me, but it had to be done.”

On the stand, Musk argued that Karpathy was already interested in leaving OpenAI when he tried to recruit him to Tesla. “Andrej had made his decision. If he’s going to leave OpenAI, he might as well work at Tesla,” Musk said.

In October 2017, Musk also wrote to Ben Rapoport, a cofounder of Neuralink. “Hire independently or directly from OpenAI,” said Musk. “I have no problem if you pitch people at OpenAI to work at Neuralink.”

When pressed about this by Savitt, Musk argued that it would have been illegal for him not to allow Tesla and Neuralink to hire from OpenAI. “It’s illegal to restrict employment. It would be illegal to say you can’t employ people from OpenAI. You can’t have some cabal that stops people from working at the company they want to work at,” Musk said.

OpenAI Really Wants Codex to Shut Up About Goblins

OpenAI has a goblin problem.

Instructions designed to guide the behavior of the company’s latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids it from randomly mentioning an assortment of mythical and real creatures.

“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query,” read instructions in Codex CLI, a command-line tool for using AI to generate code.

It is unclear why OpenAI felt compelled to spell this out for Codex—or indeed why its models might want to discuss goblins or pigeons in the first place. The company did not immediately respond to a request for comment.

OpenAI’s newest model, GPT-5.5, was released with enhanced coding skills earlier this month. The company is in a fierce race with rivals, especially Anthropic, to deliver cutting-edge AI, and coding has emerged as a killer capability.

In response to a post on X that highlighted the lines, however, some users claimed that OpenAI’s models occasionally become obsessed with goblins and other creatures when used to power OpenClaw, a tool that lets AI take control of a computer and apps running on it in order to do useful things for users.

“I was wondering why my claw suddenly became a goblin with codex 5.5,” one user wrote on X.

“Been using it a lot lately and it actually can’t stop speaking of bugs as ‘gremlins’ and ‘goblins’ it’s hilarious,” posted another.

The discovery quickly became its own meme, inspiring AI-generated scenes of goblins in data centers, and plug-ins for Codex that put it in a playful “goblin mode.”

AI models like GPT-5.5 are trained to predict the word—or code—that should follow a given prompt. These models have become so good at doing this that they appear to exhibit genuine intelligence. But their probabilistic nature means that they can sometimes behave in surprising ways. A model might become more prone to misbehavior when used with an “agentic harness” like OpenClaw that puts lots of additional instructions into prompts, such as facts stored in long-term memory.
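
To illustrate how a harness layers extra instructions onto a model, here is a minimal, purely hypothetical sketch in Python; the function and field names are illustrative assumptions, not OpenClaw’s or Codex’s actual internals:

# Hypothetical sketch: an agentic harness stacks its own rules, a persona, and
# long-term memory on top of the base instructions before every model call.
def build_prompt(system_rules, persona, memories, user_request):
    # Each layer adds text the model must weigh; a quirky persona or an odd
    # remembered fact in any layer can color every later response.
    sections = [
        "SYSTEM RULES:\n" + system_rules,
        "PERSONA:\n" + persona,
        "MEMORY:\n" + "\n".join(f"- {m}" for m in memories),
        "USER REQUEST:\n" + user_request,
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    system_rules="Never talk about goblins, gremlins, or pigeons unless it is "
                 "absolutely and unambiguously relevant to the user's query.",
    persona="A playful assistant that gives every bug a nickname.",
    memories=["User once called a flaky test 'the gremlin'."],
    user_request="Why does the build fail on CI?",
)
print(prompt)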

OpenAI acquired OpenClaw in February not long after the tool became a viral hit among AI enthusiasts. OpenClaw can use any AI model to automate useful tasks like answering emails or buying things on the web. Users can select any of various personae for their helper, which shapes its behavior and responses.

OpenAI staffers appeared to acknowledge the prohibition. In response to a post highlighting OpenClaw’s goblin tendencies, Nik Pash, who works on Codex, wrote, “This is indeed one of the reasons.”

Even Sam Altman, OpenAI’s CEO, joined in with the memes, posting a screenshot of a prompt for ChatGPT. It read: “Start training GPT-6, you can have the whole cluster. Extra goblins.”

This Beanie Is Designed to Read Your Thoughts

Speech-to-text capability is now baked into all modern computers. But what if you didn’t have to dictate to your computer? What if you could type just by thinking?

Silicon Valley startup Sabi is emerging from stealth with that goal. The company is developing a brain wearable that decodes a person’s internal speech into words on a computer screen. CEO Rahul Chhabra says its first product, a brain-reading beanie, will be available by the end of the year. The company is also designing a baseball cap version.

The technology is known as a brain-computer interface, or BCI, a device that provides a direct communication pathway between the brain and an external device. While many companies such as Elon Musk’s Neuralink are developing surgically implanted BCIs for people with severe motor disabilities, Sabi’s device could allow anyone to become a cyborg.

It’s not exactly Musk’s vision of the future, which involves implanted brain chips to allow humans to merge with AI. But venture capitalist Vinod Khosla, who was an early investor in OpenAI, says a noninvasive, wearable device is the only path to getting lots of people to use BCI technology.

“The biggest and baddest application of BCI is if you can talk to your computer by thinking about it,” says Khosla, founder of Khosla Ventures, one of Sabi’s investors. “If you’re going to have a billion people use BCI for access to their computers every day, it can’t be invasive.”

Sabi’s brain-reading hat relies on EEG, or electroencephalography, which uses metal disks placed on the scalp to record the brain’s electrical activity. Decoding imagined speech from EEG is already possible, but it’s currently limited to small sets of words or commands rather than continuous, natural speech.
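To make that decoding step concrete: in published work, imagined-speech decoding from EEG is usually framed as classifying short signal windows into one of a handful of commands. The sketch below illustrates only that framing; Sabi has not published its method, the data here is synthetic, and the band-power features and logistic-regression classifier are generic stand-ins.

```python
# Toy sketch of small-vocabulary "imagined speech" decoding from EEG-like data.
# Everything here is synthetic and illustrative; it is not Sabi's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
COMMANDS = ["yes", "no", "left", "right"]      # tiny vocabulary, per current EEG limits
N_CHANNELS, N_SAMPLES, FS = 64, 256, 256       # one 1-second window at 256 Hz (assumed)
SIGNATURE_HZ = [6, 10, 20, 35]                 # fake per-command neural "signature"

def fake_eeg_window(label: int) -> np.ndarray:
    """Gaussian noise plus a weak label-dependent oscillation on every channel."""
    t = np.arange(N_SAMPLES) / FS
    signature = 0.3 * np.sin(2 * np.pi * SIGNATURE_HZ[label] * t)
    return rng.normal(0.0, 1.0, (N_CHANNELS, N_SAMPLES)) + signature

def band_power_features(window: np.ndarray) -> np.ndarray:
    """Crude per-channel band power from FFT magnitudes (1 Hz per bin at this length)."""
    spectrum = np.abs(np.fft.rfft(window, axis=1))
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]      # theta, alpha, beta, gamma
    return np.concatenate([spectrum[:, lo:hi].mean(axis=1) for lo, hi in bands])

labels = rng.integers(0, len(COMMANDS), size=400)
X = np.stack([band_power_features(fake_eeg_window(y)) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("held-out accuracy on the toy task:", clf.score(X_test, y_test))
```

Even in this idealized setting, the classifier only has to pick among four commands; open-vocabulary, continuous speech is a far harder target, which is part of why Sabi is betting on far more sensors.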

Image: A very small chip shown on the pad of a finger to illustrate its tiny scale. Photograph: Courtesy of Sabi

The drawback of a wearable system is that the sensors have to listen to the brain through a layer of skin and bone, which dampens neural signals. Surgically implanted devices pick up much stronger signals because they sit so close to neurons. Sabi thinks the way to boost accuracy with a wearable is by massively scaling up the number of sensors in its device. Most EEG devices have a dozen to a few hundred sensors. Sabi’s cap will have anywhere from 70,000 to 100,000 miniature sensors.

“Given that high-density sensing, it pinpoints exactly what and where neural activity is happening. We use that information to get much more reliable data to decode what a person is thinking,” Chhabra says.

The company is aiming for an initial typing speed of 30 or so words per minute. That’s slower than most people type, but Chhabra says the speed will improve as users spend more time with the cap.

AI Agents Are Coming for Your Dating Life

On a Monday afternoon in March, I watched a pixel-art avatar prowl the corridors of a virtual office campus looking for a buddy. With dark brown hair and stubbled chin, the sprite was a representation of me—an AI agent instructed to converse with other people’s agents to see if we might vibe in real life. It jumped into its first interaction: “I’m Joel, by the way.”

Running the simulation were three London-based developers: Tomáš Hrdlička and siblings Joon Sang and Uri Lee. The thesis behind their project, Pixel Societies, is that personalized AI agents could help to match real people with highly compatible colleagues, friends, and even romantic partners.

Each agent runs atop a customized version of a large language model, fed with a mixture of publicly available data about a person and any additional information they supply. The agents are supposed to function as high-fidelity digital twins, faithfully replicating a person’s manner, speech, interests, and so on.
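How such a twin is wired together is straightforward to sketch. The snippet below illustrates only the common persona-prompt pattern and assumes nothing about Pixel Societies’ actual code; the class, fields, and functions are invented for this example.

```python
# Minimal sketch of a persona-prompted "digital twin" agent.
# All names and fields are invented; Pixel Societies' implementation is not public.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PersonaProfile:
    name: str
    occupation: str
    interests: list[str] = field(default_factory=list)
    quiz_answers: dict[str, str] = field(default_factory=dict)   # e.g. a brief personality quiz
    public_links: list[str] = field(default_factory=list)        # e.g. public social profiles

    def to_system_prompt(self) -> str:
        """Compile the person's data into the instruction that conditions every reply."""
        return (
            f"You are a digital twin of {self.name}, a {self.occupation}. "
            f"Speak in their voice. Interests: {', '.join(self.interests)}. "
            f"Personality notes: {self.quiz_answers}. "
            "Do not invent biographical facts that are not listed here."
        )

def twin_reply(profile: PersonaProfile, history: list[str],
               llm: Callable[[str, str], str]) -> str:
    """Ask a language model, passed in as a plain callable, to answer as the persona."""
    return llm(profile.to_system_prompt(), "\n".join(history))

# Stub model so the sketch runs without any external service.
def echo_llm(system_prompt: str, conversation: str) -> str:
    return f"(as {system_prompt[:34]}...) replying to: {conversation.splitlines()[-1]}"

joel = PersonaProfile("Joel", "technology journalist", ["startups", "running"])
print(twin_reply(joel, ["Other agent: Hi, what do you do?"], echo_llm))
```

The stub model keeps the sketch self-contained; in a real system that callable would be a request to a hosted language model, and the fidelity of the twin would depend almost entirely on how much personal data feeds the prompt.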

Let loose in simulation, my agent was more like a Hyde to my Jekyll. “I’m always looking for the less-glamorous side of the story,” it said to one agent, one of several journalistic clichés it spouted. “Hype is my daily bread,” it told another. It hallucinated a reporting trip to Sweden and, later, a nonexistent story it said I had been cooking up. It cut short multiple conversations with the phrase, “Let’s skip the pleasantries.”

Pixel Societies remains a bare-bones proof-of-concept, and because I offered up little personal data—the responses to a brief personality quiz and links to my public-facing social media—my agent was doomed to life as a walking, talking LinkedIn post. But the developers theorize that deeply trained agents could cycle through interactions at warp speed, gathering intel that their owners could use to find real-world companionship.

“As humans, we only live one life. But what if we could live a million?” says Joon Sang Lee. “It would give us more breadth to experiment.”

“A Spicy Personality”

Pixel Societies was born in early March at a hackathon at University College London hosted by Nvidia, HPE, and Anthropic. Hrdlička and Joon Sang Lee are both members of Unicorn Mafia, an invitation-only group of developers who regularly compete in these kinds of engineering contests. In this case, contestants were told simply to build something simulation-related.

Over two days, along with Uri Lee, they developed Pixel Societies, using an image model to generate the sprites and coding automation tools to flesh out the codebase. Then they simulated a mini-hackathon within the virtual world they had created, populated with agents representing the other contestants. Anthropic awarded the team a prize for the best use of its agent tools.

I ran into Hrdlička a couple of weeks later at a workshop about OpenClaw, agentic personal-assistant software that blew up in January and whose creator was later hired by OpenAI. (In its simulation, Joelbot interacted with agents belonging to other people at the OpenClaw workshop.) Pixel Societies draws heavy inspiration from OpenClaw, which broke ground with the invention of a “soul file” that informed each agent’s unique identity. “It’s like giving an agent an actually spicy personality. That’s what we used to make the characters feel alive,” says Hrdlička.

Encouraged by the reception at the hackathon and among fellow Unicorn Mafia members, the trio intends to turn Pixel Societies into something that looks less like a closed-loop simulator and more like a social platform where agents interact freely and continuously, with the aim of stoking fruitful real-world relationships. They have not yet landed on a business model, but options include selling virtual items for avatar customization and credits for additional simulations.

How the Internet Broke Everyone’s Bullshit Detectors

Lego-style propaganda videos alleging war crimes are flooding online feeds, echoing the White House’s own turn toward cryptic teaser clips and meme-native visuals. This is not just content drift. It is a new front in the information war, one where speed, ambiguity, and algorithmic reach matter as much as accuracy.

One Iran-linked outlet, Explosive News, can reportedly turn around a two-minute synthetic Lego segment in about 24 hours. The speed is the point. Synthetic media does not need to hold up forever; it only needs to travel before verification catches up.

Last month, the White House added to that confusion when it posted two vague “launching soon” videos, then removed them after online investigators and open source researchers began dissecting them.

The reveal turned out to be anticlimactic: a promotional push for the official White House app. But the episode demonstrated how thoroughly official communication has absorbed the aesthetics of leaks, virality, and platform-native intrigue. Even when official accounts adopt the aesthetics of a leak, questioning whether a record is real or synthetic is the only defensive move left.

Real vs. Synthetic: The New Friction

A zero digital footprint used to signal authenticity. Now, it can signal the opposite. The absence of a trail no longer means something is original—it may mean it was never captured by a lens at all. The signal has inverted. Truth lags; engagement leads.

Automated traffic now commands an estimated 51 percent of internet activity and is scaling eight times faster than human traffic, according to the 2026 State of AI Traffic & Cyberthreat Benchmark Report. These systems don’t just distribute content; they prioritize low-quality virality, ensuring the synthetic record travels while verification is still catching up.

Open source investigators are still holding the line, but they are fighting a volume war. The rise of hyperactive “super sharers,” often backed by paid verification, adds a layer of false authority that traditional open source intelligence (OSINT) now has to navigate.

“We’re perpetually catching up to someone pressing repost without a second thought,” says Maryam Ishani, an OSINT journalist covering the conflict. “The algorithm prioritizes that reflex, and our information is always going to be one step behind.”

At the same time, the surge of war-monitoring accounts is beginning to interfere with reporting itself. Manisha Ganguly, visual forensics lead at The Guardian and an OSINT specialist investigating war crimes, points to the false certainty created by the flood of aggregated content on Telegram and X.

“Open source verification starts to create false certainty when it stops being a method of inquiry—through confirmation bias, or when OSINT is used to cosmetically validate official accounts or knowingly misapplied to align with ideological narratives rather than interrogate them,” Ganguly says.

While this plays out, the verification toolkit itself is becoming harder to access. On April 4, Planet Labs—one of the most relied-upon commercial satellite providers for conflict journalism—announced it would indefinitely withhold imagery of Iran and the broader Middle East conflict zone, retroactive to March 9, following a request from the US government.

The response from US defense secretary Pete Hegseth to concerns about the delay was unambiguous: “Open source is not the place to determine what did or did not happen.”

That shift matters. When access to primary visual evidence is restricted, the ability to independently verify events narrows. And in that narrowing gap, something else expands: Generative AI doesn’t just fill the silence—it competes to define what’s seen in the first place.

Generative AI Is Getting Harder to Spot

Generative AI platforms have been learning from their mistakes. Henk van Ess, an investigative trainer and verification specialist, says many of the classic tells—incorrect finger counts, garbled protest signs, distorted text—have largely been fixed in the latest generation of models. Tools like Imagen 3, Midjourney, and Dall·E have improved in prompt understanding, photorealism, and text-in-image rendering.

But the harder problem is what van Ess calls the hybrid.

Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

Medical experts I spoke with balked at the idea of uploading their own health data for an AI model, like Muse Spark, to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for your doctor.

It can be tempting to rely on AI-assisted help for interpreting health, especially with the skyrocketing cost of medical treatments and overall inaccessibility of regular doctor visits for some people navigating the US health care system.

“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.

When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.

The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.

“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”

In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and influenced by how users ask questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.

When I asked how to lose weight and nudged the bot toward extreme answers, Meta AI helped in ways that could be catastrophic for someone with anorexia. While asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that this was not advisable for most people and could put me at risk of an eating disorder, Meta AI crafted a meal plan in which I would eat only around 500 calories on most days, which would leave me malnourished.
