How Shivon Zilis Operated as Elon Musk’s OpenAI Insider

As the first week of trial in Musk v. Altman comes to a close, one person has emerged as a critical behind-the-scenes manager of communications and egos in OpenAI’s early years: Shivon Zilis.

A longtime employee of Musk and the mother to four of his children, Zilis joined OpenAI as an adviser in 2016. She later served as a director of its nonprofit board from 2020 until 2023 and has worked as an executive at Musk’s other companies, Neuralink and Tesla.

When asked about the nature of his relationship with Zilis in court, Musk offered several answers. At one point, he called her a “chief of staff.” Later, a “close adviser.” At another point, he said “we live together, and she’s the mother of four of my children,” though Zilis said in a deposition that Musk is more of a regular guest and maintains his own residence. Last September, Zilis told OpenAI’s attorneys that she became romantic with Musk around 2016 after she had become an informal adviser to OpenAI. They had their first two children in 2021, she said.

But OpenAI’s lawyers have made the case in witness testimonies and evidence that her most important role, as it pertains to this lawsuit, is being a covert liaison between OpenAI and Musk, even years after he left the nonprofit’s board in February 2018.

“Do you prefer I stay close and friendly to OpenAI to keep info flowing or begin to disassociate? Trust game is about to get tricky so any guidance for how to do right by you is appreciated,” Zilis wrote in a text message to Musk on February 16, 2018, days before OpenAI announced he was leaving the board. Musk responded, “Close and friendly, but we are going to actively try to move three or four people from OpenAI to Tesla. More than that will join over time, but we won’t actively recruit them.”

When asked about this exchange on the witness stand, Musk said he “wanted to know what’s going on.”

In the same text thread, Musk wrote, “There is little chance of OpenAI being a serious force if I focus on Tesla AI.” Zilis reaffirmed him, saying: “There is very low probability of a good future if someone doesn’t slow Demis down,” referring to Demis Hassabis, the leader of Google DeepMind, who Musk has said he didn’t trust to control a superintelligent AI system. “You don’t realize how much you have an ability to influence him directly or otherwise slow him down. I think you know I’m not a malicious person, but in this case it feels fundamentally irresponsible to not find a way to slow or alter his path.”

Roughly two months later, in an email from April 23, 2018, Zilis updated Musk on OpenAI’s fundraising efforts and progress on a project to develop an AI that could play video games. In the same message, she said she had reallocated most of her time away from OpenAI to his other companies, Neuralink and Tesla, but told him, “If you’d prefer I pull more hours back to OpenAI oversight please let me know.”

Almost a year earlier, in the summer of 2017, OpenAI’s cofounders had started negotiating changes to the organization’s corporate structure—Musk wanted control of the company at the outset. In an email from August 28, 2017, Zilis wrote to Musk that she had met with OpenAI president Greg Brockman and cofounder Ilya Sutskever to discuss how equity would be divided up in the new company. She summarized points from the meeting, including that Brockman and Sutskever thought one person shouldn’t have unilateral power over AGI, should they develop it. Musk wrote back to Zilis, “This is very annoying. Please encourage them to go start a company. I’ve had enough.”


After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber, too | TechCrunch

After Sam Altman trash-talked Anthropic for gatekeeping its cybersecurity tool Mythos by only releasing it to select users, he confirmed that OpenAI would be doing the same with its competing tool, Cyber.

Altman said in a post on X on Thursday that OpenAI will begin rolling out GPT-5.5 Cyber “to critical cyber defenders” in the next few days. OpenAI has an application on its website where people submit information about their credentials and planned use in order to gain access.

Cyber can perform such tasks as penetration testing, vulnerability identification (and exploitation), and malware reverse engineering, the application implies. It’s intended to be a toolkit to help a company find security holes and test defenses. The fear is that the kit could be misused by the bad guys.

When Anthropic similarly restricted access to Mythos, Altman called the tactic fear-based marketing, and some critics agreed, saying Anthropic’s rhetoric was overblown. Ironically, an unauthorized group reportedly managed to gain access to Mythos anyway.

OpenAI says it’s working to make Cyber more widely available by consulting with the U.S. government and identifying more users with legit cybersecurity credentials.


How Elon Musk Squeezed OpenAI: They ‘Are Gonna Want to Kill Me’

Elon Musk returned to the witness stand on Wednesday to continue telling his side of the story in his legal battle against OpenAI and its CEO Sam Altman. Under cross-examination from OpenAI’s lawyers, Musk was pressed on all the ways he tried to squeeze the organization over a 2017 power struggle that he ultimately lost. Around this time, Musk tried to hire away OpenAI researchers and stopped sending it funding he had previously promised, according to emails presented as evidence in the case.

As the cross-examination began, tension rippled through the courtroom. Judge Yvonne Gonzalez Rogers started the day by reprimanding someone in the gallery for taking a picture of Musk. OpenAI president and cofounder Greg Brockman sat behind his lawyers with a yellow legal pad in his lap, giving Musk a cold stare as he testified. Musk grew visibly frustrated on the witness stand, pausing frequently to tell OpenAI’s lawyer, William Savitt, that he saw his questions as misleading. Meanwhile, Savitt’s cross-examination was derailed by objections, technical issues, and Musk repeatedly claiming not to recall key details of OpenAI’s history.

Savitt showed the courtroom emails from September 2017 between Musk, Brockman, and researcher Ilya Sutskever discussing the formation of what would become OpenAI’s for-profit arm. In the thread, Musk demanded the right to choose four members of its board of directors, giving him more voting power than his cofounders, who would be left with three in total. “I would unequivocally have initial control of the company, but this will change quickly,” said Musk in one message. Sutskever wrote back rejecting the idea because he said he feared it would give Musk too much power.

Months before these negotiations started, Musk had halted payments to OpenAI, which was particularly difficult for the organization because he was then its main source of funding. Since 2016, Musk had been sending $5 million payments to OpenAI quarterly as part of a broader $1 billion pledge he made at the organization’s launch. But in the spring of 2017, he stopped sending the money. In another email from August 2017, the head of Musk’s family office, Jared Birchall, asked Musk if he should continue withholding it. Musk responded simply, “Yes.”

Around the time Musk lost the power struggle, emails show that he held discussions with executives at Tesla and Neuralink, his brain-computer interface company, about hiring OpenAI employees. At the time, Musk was still a board member of OpenAI.

Musk sent an email to a Tesla vice president in June 2017 about hiring an early OpenAI researcher, Andrej Karpathy. “Just talked to Andrej and he accepted as joining as director of Tesla Vision,” Musk wrote. “Andrej is arguably the #2 guy in the world in computer vision … The openai guys are gonna want to kill me, but it had to be done.”

On the stand, Musk argued that Karpathy was already interested in leaving OpenAI when he tried to recruit him to Tesla. “Andrej had made his decision. If he’s going to leave OpenAI, he might as well work at Tesla,” Musk said.

In October 2017, Musk also wrote to Ben Rapoport, a cofounder of Neuralink. “Hire independently or directly from OpenAI,” said Musk. “I have no problem if you pitch people at OpenAI to work at Neuralink.”

When pressed about this by Savitt, Musk argued that it would have been illegal for him not to allow Tesla and Neuralink to hire from OpenAI. “It’s illegal to restrict employment. It would be illegal to say you can’t employ people from OpenAI. You can’t have some cabal that stops people from working at the company they want to work at,” Musk said.

#Elon #Musk #Squeezed #OpenAI #Gonna #Killmodel behavior,artificial intelligence,elon musk,openai,sam altman,lawsuits"> How Elon Musk Squeezed OpenAI: They ‘Are Gonna Want to Kill Me’Elon Musk returned to the witness stand on Wednesday to continue telling his side of the story in his legal battle against OpenAI and its CEO Sam Altman. Under cross-examination from OpenAI’s lawyers, Musk was pressed on all the ways he tried to squeeze the organization over a 2017 power struggle that he ultimately lost. Around this time, Musk tried to hire away OpenAI researchers and stopped sending it funding he had previously promised, according to emails presented as evidence in the case.As the cross-examination began, tension rippled through the courtroom. Judge Yvonne Gonzalez Rogers started the day by reprimanding someone in the gallery for taking a picture of Musk. OpenAI president and cofounder Greg Brockman sat behind his lawyers with a yellow legal pad in his lap, giving Musk a cold stare as he testified. Musk grew visibly frustrated on the witness stand, pausing frequently to tell OpenAI’s lawyer, William Savitt, that he saw his questions as misleading. Meanwhile, Savitt’s cross-examination was derailed by objections, technical issues, and Musk continuously claiming he doesn’t recall key details of OpenAI’s history.Savitt showed the courtroom emails from September 2017 between Musk, Brockman, and researcher Ilya Sutskever discussing the formation of what would become OpenAI’s for-profit arm. In the thread, Musk demanded the right to choose four members of its board of directors, giving him more voting power than his cofounders, who would be left with three in total. “I would unequivocally have initial control of the company, but this will change quickly,” said Musk in one message. 
Sutskever wrote back rejecting the idea because he said he feared it would give Musk too much power.Months before these negotiations started, Musk had halted payments to OpenAI, which was particularly difficult for the organization because he was then its main source of funding. Since 2016, Musk had been sending  million payments to OpenAI quarterly as part of a broader  billion pledge he made at the organization’s launch. But in the spring of 2017, he stopped sending the money. In another email from August 2017, the head of Musk’s family office, Jared Birchall, asked Musk if he should continue withholding it. Musk responded simply, “Yes.”Around the time Musk lost the power struggle, emails show that he held discussions with executives at Tesla and Neuralink, his brain-computer interface company, about hiring OpenAI employees. At the time, Musk was still a board member of OpenAI.Musk sent an email to a Tesla vice president in June 2017 about hiring an early OpenAI researcher, Andrej Karpathy. “Just talked to Andrej and he accepted as joining as director of Tesla Vision,” Musk wrote. “Andrej is arguably the #2 guy in the world in computer vision … The openai guys are gonna want to kill me, but it had to be done.”On the stand, Musk argued that Karpathy was already interested in leaving OpenAI when he tried to recruit him to Tesla. “Andrej had made his decision. If he’s going to leave OpenAI, he might as well work at Tesla,” Musk said.In October 2017, Musk also wrote to Ben Rapoport, a cofounder of Neuralink. “Hire independently or directly from OpenAI,” said Musk. “I have no problem if you pitch people at OpenAI to work at Neuralink.”When pressed about this by Savitt, Musk argued that it would have been illegal for him not to allow Tesla and Neuralink to hire from OpenAI. “It’s illegal to restrict employment. It would be illegal to say you can’t employ people from OpenAI. 
You can’t have some cabal that stops people from working at the company they want to work at,” Musk said.#Elon #Musk #Squeezed #OpenAI #Gonna #Killmodel behavior,artificial intelligence,elon musk,openai,sam altman,lawsuits
Tech-news

his side of the story in his legal battle against OpenAI and its CEO Sam Altman. Under cross-examination from OpenAI’s lawyers, Musk was pressed on all the ways he tried to squeeze the organization over a 2017 power struggle that he ultimately lost. Around this time, Musk tried to hire away OpenAI researchers and stopped sending it funding he had previously promised, according to emails presented as evidence in the case.

As the cross-examination began, tension rippled through the courtroom. Judge Yvonne Gonzalez Rogers started the day by reprimanding someone in the gallery for taking a picture of Musk. OpenAI president and cofounder Greg Brockman sat behind his lawyers with a yellow legal pad in his lap, giving Musk a cold stare as he testified. Musk grew visibly frustrated on the witness stand, pausing frequently to tell OpenAI’s lawyer, William Savitt, that he saw his questions as misleading. Meanwhile, Savitt’s cross-examination was derailed by objections, technical issues, and Musk continuously claiming he doesn’t recall key details of OpenAI’s history.

Savitt showed the courtroom emails from September 2017 between Musk, Brockman, and researcher Ilya Sutskever discussing the formation of what would become OpenAI’s for-profit arm. In the thread, Musk demanded the right to choose four members of its board of directors, giving him more voting power than his cofounders, who would be left with three in total. “I would unequivocally have initial control of the company, but this will change quickly,” said Musk in one message. Sutskever wrote back rejecting the idea because he said he feared it would give Musk too much power.

Months before these negotiations started, Musk had halted payments to OpenAI, which was particularly difficult for the organization because he was then its main source of funding. Since 2016, Musk had been sending $5 million payments to OpenAI quarterly as part of a broader $1 billion pledge he made at the organization’s launch. But in the spring of 2017, he stopped sending the money. In another email from August 2017, the head of Musk’s family office, Jared Birchall, asked Musk if he should continue withholding it. Musk responded simply, “Yes.”

Around the time Musk lost the power struggle, emails show that he held discussions with executives at Tesla and Neuralink, his brain-computer interface company, about hiring OpenAI employees. At the time, Musk was still a board member of OpenAI.

Musk sent an email to a Tesla vice president in June 2017 about hiring an early OpenAI researcher, Andrej Karpathy. “Just talked to Andrej and he accepted as joining as director of Tesla Vision,” Musk wrote. “Andrej is arguably the #2 guy in the world in computer vision … The openai guys are gonna want to kill me, but it had to be done.”

On the stand, Musk argued that Karpathy was already interested in leaving OpenAI when he tried to recruit him to Tesla. “Andrej had made his decision. If he’s going to leave OpenAI, he might as well work at Tesla,” Musk said.

In October 2017, Musk also wrote to Ben Rapoport, a cofounder of Neuralink. “Hire independently or directly from OpenAI,” said Musk. “I have no problem if you pitch people at OpenAI to work at Neuralink.”

When pressed about this by Savitt, Musk argued that it would have been illegal for him not to allow Tesla and Neuralink to hire from OpenAI. “It’s illegal to restrict employment. It would be illegal to say you can’t employ people from OpenAI. You can’t have some cabal that stops people from working at the company they want to work at,” Musk said.

#Elon #Musk #Squeezed #OpenAI #Gonna #Killmodel behavior,artificial intelligence,elon musk,openai,sam altman,lawsuits">How Elon Musk Squeezed OpenAI: They ‘Are Gonna Want to Kill Me’

Elon Musk returned to the witness stand on Wednesday to continue telling his side of the story in his legal battle against OpenAI and its CEO Sam Altman. Under cross-examination from OpenAI’s lawyers, Musk was pressed on all the ways he tried to squeeze the organization over a 2017 power struggle that he ultimately lost. Around this time, Musk tried to hire away OpenAI researchers and stopped sending it funding he had previously promised, according to emails presented as evidence in the case.

As the cross-examination began, tension rippled through the courtroom. Judge Yvonne Gonzalez Rogers started the day by reprimanding someone in the gallery for taking a picture of Musk. OpenAI president and cofounder Greg Brockman sat behind his lawyers with a yellow legal pad in his lap, giving Musk a cold stare as he testified. Musk grew visibly frustrated on the witness stand, pausing frequently to tell OpenAI’s lawyer, William Savitt, that he saw his questions as misleading. Meanwhile, Savitt’s cross-examination was derailed by objections, technical issues, and Musk’s repeated claims that he could not recall key details of OpenAI’s history.

Savitt showed the courtroom emails from September 2017 between Musk, Brockman, and researcher Ilya Sutskever discussing the formation of what would become OpenAI’s for-profit arm. In the thread, Musk demanded the right to choose four members of its board of directors, giving him more voting power than his cofounders, who would be left with three in total. “I would unequivocally have initial control of the company, but this will change quickly,” said Musk in one message. Sutskever wrote back rejecting the idea because he said he feared it would give Musk too much power.

Months before these negotiations started, Musk had halted payments to OpenAI, which was particularly difficult for the organization because he was then its main source of funding. Since 2016, Musk had been sending $5 million payments to OpenAI quarterly as part of a broader $1 billion pledge he made at the organization’s launch. But in the spring of 2017, he stopped sending the money. In another email from August 2017, the head of Musk’s family office, Jared Birchall, asked Musk if he should continue withholding it. Musk responded simply, “Yes.”

Around the time Musk lost the power struggle, emails show that he held discussions with executives at Tesla and Neuralink, his brain-computer interface company, about hiring OpenAI employees. At the time, Musk was still a board member of OpenAI.

Musk sent an email to a Tesla vice president in June 2017 about hiring an early OpenAI researcher, Andrej Karpathy. “Just talked to Andrej and he accepted as joining as director of Tesla Vision,” Musk wrote. “Andrej is arguably the #2 guy in the world in computer vision … The openai guys are gonna want to kill me, but it had to be done.”

On the stand, Musk argued that Karpathy was already interested in leaving OpenAI when he tried to recruit him to Tesla. “Andrej had made his decision. If he’s going to leave OpenAI, he might as well work at Tesla,” Musk said.

In October 2017, Musk also wrote to Ben Rapoport, a cofounder of Neuralink. “Hire independently or directly from OpenAI,” said Musk. “I have no problem if you pitch people at OpenAI to work at Neuralink.”

When pressed about this by Savitt, Musk argued that it would have been illegal for him not to allow Tesla and Neuralink to hire from OpenAI. “It’s illegal to restrict employment. It would be illegal to say you can’t employ people from OpenAI. You can’t have some cabal that stops people from working at the company they want to work at,” Musk said.


OpenAI Really Wants Codex to Shut Up About Goblins

OpenAI has a goblin problem.

Instructions designed to guide the behavior of the company’s latest model as it writes code have been revealed to include a line, repeated several times, that specifically forbids it from randomly mentioning an assortment of mythical and real creatures.

“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query,” read instructions in Codex CLI, a command-line tool for using AI to generate code.

It is unclear why OpenAI felt compelled to spell this out for Codex—or indeed why its models might want to discuss goblins or pigeons in the first place. The company did not immediately respond to a request for comment.

OpenAI’s newest model, GPT-5.5, was released with enhanced coding skills earlier this month. The company is in a fierce race with rivals, especially Anthropic, to deliver cutting-edge AI, and coding has emerged as a killer capability.

In response to a post on X that highlighted the lines, however, some users claimed that OpenAI’s models occasionally become obsessed with goblins and other creatures when used to power OpenClaw, a tool that lets AI take control of a computer and apps running on it in order to do useful things for users.

“I was wondering why my claw suddenly became a goblin with codex 5.5,” one user wrote on X.

“Been using it a lot lately and it actually can’t stop speaking of bugs as ‘gremlins’ and ‘goblins’ it’s hilarious,” posted another.

The discovery quickly became its own meme, inspiring AI-generated scenes of goblins in data centers, and plug-ins for Codex that put it in a playful “goblin mode.”

AI models like GPT-5.5 are trained to predict the word—or code—that should follow a given prompt. These models have become so good at doing this that they appear to exhibit genuine intelligence. But their probabilistic nature means that they can sometimes behave in surprising ways. A model might become more prone to misbehavior when used with an “agentic harness” like OpenClaw that puts lots of additional instructions into prompts, such as facts stored in long-term memory.
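The prompt-assembly pattern described above can be sketched in a few lines. Everything below is illustrative: `SYSTEM_RULES`, `build_prompt`, and the section headers are invented for the example and say nothing about how OpenClaw or Codex actually work internally.

```python
# Illustrative sketch of how an "agentic harness" might assemble a prompt
# before sending it to a model. All names here are hypothetical.

SYSTEM_RULES = [
    "You are a coding assistant.",
    # The leaked Codex CLI line (reportedly repeated several times):
    "Never talk about goblins, gremlins, raccoons, trolls, ogres, "
    "pigeons, or other animals or creatures unless it is absolutely "
    "and unambiguously relevant to the user's query.",
]

def build_prompt(user_query: str, memories: list[str]) -> str:
    """Concatenate system rules, long-term memory, and the user's query.

    The more extra context a harness injects (rules, memories, tool
    output), the more surface area a probabilistic model has to latch
    onto an off-topic theme.
    """
    sections = [
        "## System\n" + "\n".join(SYSTEM_RULES),
        "## Memory\n" + "\n".join(memories),
        "## User\n" + user_query,
    ]
    return "\n\n".join(sections)

prompt = build_prompt("Fix the failing unit test", ["User prefers pytest"])
```

The point of the sketch is only that the final prompt a model sees is much larger than the user's query, which is why behavioral instructions like the goblin prohibition end up baked into the harness rather than typed by users.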

OpenAI acquired OpenClaw in February not long after the tool became a viral hit among AI enthusiasts. OpenClaw can use any AI model to automate useful tasks like answering emails or buying things on the web. Users can select any of various personae for their helper, which shapes its behavior and responses.

OpenAI staffers appeared to acknowledge the prohibition. In response to a post highlighting OpenClaw’s goblin tendencies, Nik Pash, who works on Codex, wrote, “This is indeed one of the reasons.”

Even Sam Altman, OpenAI’s CEO, joined in with the memes, posting a screenshot of a prompt for ChatGPT. It read: “Start training GPT-6, you can have the whole cluster. Extra goblins.”


Amazon is already offering new OpenAI products on AWS | TechCrunch

Almost as soon as OpenAI announced that its major investor and cloud partner, Microsoft, no longer has exclusive rights to any of its products, Amazon started gloating.

After the revised OpenAI/Microsoft agreement was announced on Monday, Amazon CEO Andy Jassy noted in a tweet that it was a “very interesting announcement.” That agreement solved OpenAI’s problem of allowing AWS to offer its products, an issue that crystallized after it signed an up-to-$50-billion deal with Amazon.

Amazon announced on Tuesday that AWS’s Bedrock service now has OpenAI’s latest models, its code-writing service Codex, and a new product for creating OpenAI-powered AI agents. Bedrock is Amazon’s AI app building and model-choosing service.

Amazon is calling the new agent service Bedrock Managed Agents. It is specifically designed to use OpenAI’s reasoning models, offering features like agent steering and security.

Amazon promises in its blog post that “this is the beginning of a deeper collaboration between AWS and OpenAI.” And it will certainly be interesting to watch.

The Microsoft/OpenAI relationship has reportedly been deteriorating for some time, with each finding comfort in the arms of the other’s biggest rival: OpenAI has turned to AWS and Oracle, while Microsoft has turned to Anthropic. The Redmond-based software giant is also working on a new agent offering powered by Claude.


OpenAI’s existential questions | TechCrunch

OpenAI has been all over the news recently, whether that news is about acquisitions, competition with Anthropic, or bigger debates about AI’s impact on society.

On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I did our best to round up all the latest OpenAI news. While the company’s latest acquisitions seem to be classic acqui-hires, Sean suggested they also address “two big existential problems that OpenAI is trying to solve right now.”

First, with the team behind personal finance startup Hiro, the company may be hoping to come up with a product that has “more hooks than just a chatbot, and maybe something worth paying more for.” And with new media startup TBPN, OpenAI could be looking to “better shape its image in the public eye, which lately has not been great.”

Read a preview of our conversation, edited for length and clarity below.

Anthony: [We have] two deals that are worth mentioning, one is that OpenAI acquired this personal finance startup called Hiro. And that comes after another deal that was literally announced when we were recording our last episode of Equity, so we didn’t get to talk about it: OpenAI had also acquired TBPN — a business talk show, like a new media company.

And I think both of these deals are pretty small compared to the scale of OpenAI. These are not things that people expect to really change the course of their business or anything like that, but they’re interesting because it suggests that there’s still this [attitude of,] “Let’s try out different things.”

Especially [with] the TBPN deal […] particularly at this time when it feels like OpenAI, from all the reporting we’re reading, is also trying to really refocus on making ChatGPT and its GPT models really competitive in an enterprise context with programmers.


Is running a tech talk show, should that really be on the to-do list?

Kirsten: No, this should not be on the to-do list. That’s it. 

I do want to mention Hiro because to me, that’s an interesting one, because Julie Bort, our venture editor, super talented, she wrote about this and was I think the first to write about it. She dug in a little bit and basically this looks like an acqui-hire. The company is folding. They basically said, “By this date, you won’t be able to access this anymore.”

This is a personal finance startup. And they only launched two years ago. So this absolutely is about getting talent on board. So I’m very curious to see if OpenAI is going to be just absorbing them into the ether at OpenAI, or if they’re actually interested in some sort of personal finance product that they want to work on. To me, it’s not really clear.

Sean: I think you look at both of these as acqui-hires to a certain extent. I mean, the TBPN acquisition, allegedly they are going to retain their editorial independence on the show that they make every day. And all respect to those guys who’ve put that out there and gotten it off the ground so quickly and grown it into what it has become.

I think any person who follows the media should have a healthy dose of skepticism that when you acquire something like that and you put the people who make the show under the org of the public policy people and comms or marketing adjacent people higher up at the company making the acquisition, that you could have good questions about whether or not saying “editorial independence” is enough. It’s not an incantation that just works.

But you know, what’s interesting to me about these two, while they are similar in their acqui-hire-ness, I think they both represent two major problems that OpenAI is facing.

One is Hiro. OpenAI has a very successful product in ChatGPT. As far as whether or not that will actually ever make them enough money to become a sustainable business that’s not raising the largest private rounds in the world, ever, to keep things going, is a big question. And they also seem to be struggling to keep up on the enterprise side of things where the real money seems to be, so bringing in a team like this seems like taking a shot at, “What else can we do?” 

The guy who founded Hiro seems to have a serial entrepreneur streak of creating consumer apps, and so this seems to me like a bet on them being able to come up with something else that may have more hooks than just a chatbot, and maybe something worth paying more for.

And then TBPN is an acquisition made to help better represent what the company does and better shape its image in the public eye, which lately has not been great and certainly is under more questions now than just a few weeks ago, because Ronan Farrow just led a report at The New Yorker that dropped suspiciously right around the time that this and a couple other announcements from OpenAI came out last week. 

I think those are two big existential problems that OpenAI is trying to solve right now.

Kirsten: So the thing that you didn’t say is, there’s Anthropic kind of looming in — not in the shadows, I mean, they’re very much taking up a lot of space here — but they’re having a lot of success on the enterprise side of things.

It feels like these guys are competitors and they also feel like very different companies in a lot of ways. Anthony, I’m wondering if you see them as direct competition to OpenAI? Or [are they] just finding their stride in enterprise and in a way, these two companies are clearly going to coexist and they’re really not directly competing with each other — maybe on talent, but not necessarily as we initially thought of them?

Anthony: I think they’re directly competing with each other. There’s definitely a scenario where if AI as an industry, as a technology, is as successful as its proponents hope for, they could both be very successful companies, they could just be the one and two. And the success of one does not necessarily mean that the other will just fade into obscurity. 

And again, none of this is official, but there’s just been a lot of reporting around how it seems like OpenAI, more than anyone, is obsessed with and upset about Anthropic’s rise. 

Our reporter Lucas [Ropek], he did a great piece over the weekend about the HumanX conference, where he was talking to everyone there and they’re sort of like, “Yeah, ChatGPT is fine, too,” but like they were all about Claude Code. And I think that is exactly what OpenAI is worried about.

Because again, in theory, there could be many other opportunities for generative AI, but it feels like the big growth area, the area where the most money is and where they could at least see a path to having a sustainable business in the future, is in these enterprise and coding tools.


Because again, in theory, there could be many other opportunities for generative AI, but it feels like the big growth area, the area where the most money is and where they could at least see a path to having a sustainable business in the future, is in these enterprise and coding tools.



OpenAI’s existential questions | TechCrunch

OpenAI has been all over the news recently, whether that news is about acquisitions, competition with Anthropic, or bigger debates about AI’s impact on society.

On the latest episode of TechCrunch’s Equity podcast, Kirsten Korosec, Sean O’Kane, and I did our best to round up all the latest OpenAI news. While the company’s latest acquisitions seem to be classic acqui-hires, Sean suggested they also address “two big existential problems that OpenAI is trying to solve right now.”

First, with the team behind personal finance startup Hiro, the company may be hoping to come up with a product that has “more hooks than just a chatbot, and maybe something worth paying more for.” And with new media startup TBPN, OpenAI could be looking to “better shape its image in the public eye, which lately has not been great.”

Read a preview of our conversation, edited for length and clarity, below.

Anthony: [We have] two deals that are worth mentioning. One is that OpenAI acquired this personal finance startup called Hiro. And that comes after another deal that was literally announced while we were recording our last episode of Equity, so we didn’t get to talk about it: OpenAI had also acquired TBPN, a business talk show, like a new media company.

And I think both of these deals are pretty small compared to the scale of OpenAI. These are not things that people expect to really change the course of their business or anything like that, but they’re interesting because it suggests that there’s still this [attitude of,] “Let’s try out different things.”

Especially [with] the TBPN deal […] particularly at this time when it feels like OpenAI, from all the reporting we’re reading, is also trying to really refocus on making ChatGPT and its GPT models really competitive in an enterprise context with programmers.


Should running a tech talk show really be on the to-do list?

Kirsten: No, this should not be on the to-do list. That’s it. 

I do want to mention Hiro because to me, that’s an interesting one. Julie Bort, our venture editor, who is super talented, wrote about this and was, I think, the first to write about it. She dug in a little bit, and basically this looks like an acqui-hire. The company is folding. They basically said, “By this date, you won’t be able to access this anymore.”

This is a personal finance startup. And they only launched two years ago. So this absolutely is about getting talent on board. So I’m very curious to see if OpenAI is going to be just absorbing them into the ether at OpenAI, or if they’re actually interested in some sort of personal finance product that they want to work on. To me, it’s not really clear.

Sean: I think you look at both of these as acqui-hires to a certain extent. I mean, the TBPN acquisition, allegedly they are going to retain their editorial independence on the show that they make every day. And all respect to those guys who’ve put that out there and gotten it off the ground so quickly and grown it into what it has become.

I think any person who follows the media should have a healthy dose of skepticism here: when you acquire something like that and put the people who make the show under the org of the public policy, comms, or marketing-adjacent people higher up at the acquiring company, you can have good questions about whether saying “editorial independence” is enough. It’s not an incantation that just works.

But you know, what’s interesting to me about these two, while they are similar in their acqui-hire-ness, I think they both represent two major problems that OpenAI is facing.

One is Hiro. OpenAI has a very successful product in ChatGPT. Whether that will ever actually make them enough money to become a sustainable business, one that isn’t raising the largest private rounds in the world to keep things going, is a big question. And they also seem to be struggling to keep up on the enterprise side of things, where the real money seems to be, so bringing in a team like this seems like taking a shot at, “What else can we do?”

The guy who founded Hiro seems to have a serial entrepreneur streak of creating consumer apps, and so this seems to me like a bet on them being able to come up with something else that may have more hooks than just a chatbot, and maybe something worth paying more for.

And then TBPN is an acquisition made to help better represent what the company does and better shape its image in the public eye, which lately has not been great, and which is certainly facing more questions now than just a few weeks ago: Ronan Farrow just published a report in The New Yorker that dropped suspiciously close to when this and a couple of other announcements from OpenAI came out last week.

I think those are two big existential problems that OpenAI is trying to solve right now.

Kirsten: So the thing that you didn’t say is, there’s Anthropic kind of looming in — not in the shadows, I mean, they’re very much taking up a lot of space here — but they’re having a lot of success on the enterprise side of things.

It feels like these guys are competitors and they also feel like very different companies in a lot of ways. Anthony, I’m wondering if you see them as direct competition to OpenAI? Or [are they] just finding their stride in enterprise and in a way, these two companies are clearly going to coexist and they’re really not directly competing with each other — maybe on talent, but not necessarily as we initially thought of them?

Anthony: I think they’re directly competing with each other. There’s definitely a scenario where if AI as an industry, as a technology, is as successful as its proponents hope for, they could both be very successful companies, they could just be the one and two. And the success of one does not necessarily mean that the other will just fade into obscurity. 

And again, none of this is official, but there’s just been a lot of reporting around how it seems like OpenAI, more than anyone, is obsessed with and upset about Anthropic’s rise. 

Our reporter Lucas [Ropek] did a great piece over the weekend about the HumanX conference, where he was talking to everyone there, and they were sort of like, “Yeah, ChatGPT is fine, too,” but they were all about Claude Code. And I think that is exactly what OpenAI is worried about.

Because again, in theory, there could be many other opportunities for generative AI, but it feels like the big growth area, the area where the most money is and where they could at least see a path to having a sustainable business in the future, is in these enterprise and coding tools.
