
How the Internet Broke Everyone’s Bullshit Detectors

Lego-style propaganda videos alleging war crimes are flooding online feeds, echoing the White House’s own turn toward cryptic teaser clips and meme-native visuals. This is not just content drift. It is a new front in the information war, one where speed, ambiguity, and algorithmic reach matter as much as accuracy.

One Iran-linked outlet, Explosive News, can reportedly turn around a two-minute synthetic Lego segment in about 24 hours. The speed is the point. Synthetic media does not need to hold up forever; it only needs to travel before verification catches up.

Last month, the White House added to that confusion when it posted two vague “launching soon” videos, then removed them after online investigators and open source researchers began dissecting them.

The reveal turned out to be anticlimactic: a promotional push for the official White House app. But the episode demonstrated how thoroughly official communication has absorbed the aesthetics of leaks, virality, and platform-native intrigue. When even official accounts mimic leaks, asking whether a record is real or synthetic is the only defensive move left.

Real vs. Synthetic: The New Friction

A zero digital footprint used to signal authenticity. Now, it can signal the opposite. The absence of a trail no longer means something is original—it may mean it was never captured by a lens at all. The signal has inverted. Truth lags; engagement leads.

Automated traffic now commands an estimated 51 percent of internet activity, scaling eight times faster than human traffic, according to the 2026 State of AI Traffic & Cyberthreat Benchmark Report. These systems don’t just distribute content; they prioritize low-quality virality, ensuring the synthetic record travels while verification is still catching up.

Open source investigators are still holding the line, but they are fighting a volume war. The rise of hyperactive “super sharers,” often backed by paid verification, adds a layer of false authority that traditional open source intelligence (OSINT) now has to navigate.

“We’re perpetually catching up to someone pressing repost without a second thought,” says Maryam Ishani, an OSINT journalist covering the conflict. “The algorithm prioritizes that reflex, and our information is always going to be one step behind.”

At the same time, the surge of war-monitoring accounts is beginning to interfere with reporting itself. Manisha Ganguly, visual forensics lead at The Guardian and an OSINT specialist investigating war crimes, points to the false certainty created by the flood of aggregated content on Telegram and X.

“Open source verification starts to create false certainty when it stops being a method of inquiry—through confirmation bias, or when OSINT is used to cosmetically validate official accounts or knowingly misapplied to align with ideological narratives rather than interrogate them,” Ganguly says.

While this plays out, the verification toolkit itself is becoming harder to access. On April 4, Planet Labs—one of the most relied-upon commercial satellite providers for conflict journalism—announced it would indefinitely withhold imagery of Iran and the broader Middle East conflict zone, retroactive to March 9, following a request from the US government.

The response from US defense secretary Pete Hegseth to concerns about the delay was unambiguous: “Open source is not the place to determine what did or did not happen.”

That shift matters. When access to primary visual evidence is restricted, the ability to independently verify events narrows. And in that narrowing gap, something else expands: Generative AI doesn’t just fill the silence—it competes to define what’s seen in the first place.

Generative AI Is Getting Harder to Spot

Generative AI platforms have been learning from their mistakes. Henk van Ess, an investigative trainer and verification specialist, says many of the classic tells—incorrect finger counts, garbled protest signs, distorted text—have largely been fixed in the latest generation of models. Tools like Imagen 3, Midjourney, and DALL·E have improved in prompt understanding, photorealism, and text-in-image rendering.

But the harder problem is what van Ess calls the hybrid.



Apple’s Incoming CEO Makes His Earnings Call Debut

Tim Cook is finally stepping down as Apple’s CEO after 15 years at the helm. On Thursday, his recently named successor, John Ternus, made his first earnings call cameo as the incoming CEO and gave a veiled glimpse into what Apple enthusiasts could expect from his tenure.

“We have an incredible roadmap ahead, and while you’re not going to get me to talk about the details of that roadmap, suffice it to say, this is the most exciting time in my 25-year career at Apple to be building products and services,” Ternus told investors.

When asked about his advice for Ternus, Cook said to “never forget” that Apple users are “the North Star for the company.”

“We’re about making the best products in the world that really enrich other people’s lives. And if you keep focusing on that and make your decisions around that, it will produce a great business, and we’ll be able to build more products and do it all over again,” Cook said on the call. “Our roadmap is incredible, and most importantly, we have the right leader ready to step into the role. As I have said, there is no one on this planet I trust more to lead Apple into the future than John Ternus.”

Ternus’s term as CEO will begin in September. Though the executives are keeping the product roadmap secret for now, a foldable iPhone is expected, and Apple wants Ternus to be the face people associate with it.

In his current role, Ternus leads the company’s hardware engineering efforts. The prospect of having a hardware specialist in charge has excited Apple fans who have been unsatisfied with what they claim is a slowdown of innovation in product releases. Cook has been blamed for this lack of revolutionary changes.

But while he may not have been as innovative as Steve Jobs, Cook oversaw the company’s transition into a trillion-dollar behemoth four times over. On Thursday’s earnings call, Ternus promised to continue Cook’s style of financial leadership.

“One of the hallmarks of Tim’s tenure has been a deep thoughtfulness, deliberateness, and discipline when it comes to the financial decision-making of the company, and I want you to know that is something Kevan and I intend to continue when I transition into the role in September,” Ternus told investors (Kevan being Apple CFO Kevan Parekh).

Apple is already promoting Ternus’s hardware engineering prowess as a benefit for the company. On the call, Cook shared that the iPhone 17 family, which was spearheaded by Ternus, is currently the most popular product lineup in Apple’s history.

Products aside, Ternus will also have a lot to answer for on the artificial intelligence side. The tech giant has been taking things slow on AI, while peers like Google and Microsoft soar past with AI innovations. The company has long promised a major leap in AI with an enhanced Siri, but had to push back the release at the very last minute in March 2025. The delay disappointed fans, reportedly caused an internal rift at the company, and even led to federal lawsuits accusing Apple of false advertising. The personalized Siri was expected to arrive early this year, but was reportedly delayed yet again.

On the call, Cook reiterated that the “more personalized Siri” would still be revealed later this year.


As the first week of trial in Musk v. Altman comes to a close, one person has emerged as a critical behind-the-scenes manager of communications and egos in OpenAI’s early years: Shivon Zilis.

A longtime employee of Musk and the mother to four of his children, Zilis joined OpenAI as an adviser in 2016. She later served as a director of its nonprofit board from 2020 until 2023 and has worked as an executive at Musk’s other companies, Neuralink and Tesla.

When asked about the nature of his relationship with Zilis in court, Musk offered several answers. At one point, he called her a “chief of staff.” Later, a “close adviser.” At another point, he said “we live together, and she’s the mother of four of my children,” though Zilis said in a deposition that Musk is more of a regular guest and maintains his own residence. Last September, Zilis told OpenAI’s attorneys that she became romantically involved with Musk around 2016, after she had become an informal adviser to OpenAI. They had their first two children in 2021, she said.

But OpenAI’s lawyers have made the case in witness testimonies and evidence that her most important role, as it pertains to this lawsuit, is being a covert liaison between OpenAI and Musk, even years after he left the nonprofit’s board in February 2018.

“Do you prefer I stay close and friendly to OpenAI to keep info flowing or begin to disassociate? Trust game is about to get tricky so any guidance for how to do right by you is appreciated,” Zilis wrote in a text message to Musk on February 16, 2018, days before OpenAI announced he was leaving the board. Musk responded, “Close and friendly, but we are going to actively try to move three or four people from OpenAI to Tesla. More than that will join over time, but we won’t actively recruit them.”

When asked about this exchange on the witness stand, Musk said he “wanted to know what’s going on.”

In the same text thread, Musk wrote, “There is little chance of OpenAI being a serious force if I focus on Tesla AI.” Zilis reaffirmed him, saying: “There is very low probability of a good future if someone doesn’t slow Demis down,” referring to Demis Hassabis, the leader of Google DeepMind, who Musk has said he didn’t trust to control a superintelligent AI system. “You don’t realize how much you have an ability to influence him directly or otherwise slow him down. I think you know I’m not a malicious person, but in this case it feels fundamentally irresponsible to not find a way to slow or alter his path.”

Roughly two months later, in an email from April 23, 2018, Zilis updated Musk on OpenAI’s fundraising efforts and progress on a project to develop an AI that could play video games. In the same message, she said she had reallocated most of her time away from OpenAI to his other companies, Neuralink and Tesla, but told him, “If you’d prefer I pull more hours back to OpenAI oversight please let me know.”

Almost a year earlier, in the summer of 2017, OpenAI’s cofounders had started negotiating changes to the organization’s corporate structure—Musk wanted control of the new company from the outset. In an email from August 28, 2017, Zilis wrote to Musk that she had met with OpenAI president Greg Brockman and cofounder Ilya Sutskever to discuss how equity would be divided up in the new company. She summarized points from the meeting, including that Brockman and Sutskever thought one person shouldn’t have unilateral power over AGI, should they develop it. Musk wrote back to Zilis, “This is very annoying. Please encourage them to go start a company. I’ve had enough.”
