Lego-style propaganda videos alleging war crimes are flooding online feeds, echoing the White House’s own turn toward cryptic teaser clips and meme-native visuals. This is not just content drift. It is a new front in the information war, one where speed, ambiguity, and algorithmic reach matter as much as accuracy.
One Iran-linked outlet, Explosive News, can reportedly turn around a two-minute synthetic Lego segment in about 24 hours. The speed is the point. Synthetic media does not need to hold up forever; it only needs to travel before verification catches up.
Last month, the White House added to that confusion when it posted two vague “launching soon” videos, then removed them after online investigators and open source researchers began dissecting them.
The reveal turned out to be anticlimactic: a promotional push for the official White House app. But the episode demonstrated how thoroughly official communication has absorbed the aesthetics of leaks, virality, and platform-native intrigue. When even official accounts dress their announcements up as leaks, questioning whether a record is real or synthetic becomes the only defensive move left.
A zero digital footprint used to signal authenticity. Now, it can signal the opposite. The absence of a trail no longer means something is original—it may mean it was never captured by a lens at all. The signal has inverted. Truth lags; engagement leads.
Automated traffic now commands an estimated 51 percent of internet activity, scaling eight times faster than human traffic, according to the 2026 State of AI Traffic & Cyberthreat Benchmark Report. These systems don’t just distribute content; they prioritize low-quality virality, ensuring the synthetic record travels while verification is still catching up.
Open source investigators are still holding the line, but they are fighting a volume war. The rise of hyperactive “super sharers,” often backed by paid verification, adds a layer of false authority that traditional open source intelligence (OSINT) now has to navigate.
“We’re perpetually catching up to someone pressing repost without a second thought,” says Maryam Ishani, an OSINT journalist covering the conflict. “The algorithm prioritizes that reflex, and our information is always going to be one step behind.”
At the same time, the surge of war-monitoring accounts is beginning to interfere with reporting itself. Manisha Ganguly, visual forensics lead at The Guardian and an OSINT specialist investigating war crimes, points to the false certainty created by the flood of aggregated content on Telegram and X.
“Open source verification starts to create false certainty when it stops being a method of inquiry—through confirmation bias, or when OSINT is used to cosmetically validate official accounts or knowingly misapplied to align with ideological narratives rather than interrogate them,” Ganguly says.
While this plays out, the verification toolkit itself is becoming harder to access. On April 4, Planet Labs—one of the most relied-upon commercial satellite providers for conflict journalism—announced it would indefinitely withhold imagery of Iran and the broader Middle East conflict zone, retroactive to March 9, following a request from the US government.
The response from US defense secretary Pete Hegseth to concerns about the delay was unambiguous: “Open source is not the place to determine what did or did not happen.”
That shift matters. When access to primary visual evidence is restricted, the ability to independently verify events narrows. And in that narrowing gap, something else expands: Generative AI doesn’t just fill the silence—it competes to define what’s seen in the first place.
Generative AI platforms have been learning from their mistakes. Henk van Ess, an investigative trainer and verification specialist, says many of the classic tells—incorrect finger counts, garbled protest signs, distorted text—have largely been fixed in the latest generation of models. Tools like Imagen 3, Midjourney, and DALL·E have improved in prompt understanding, photorealism, and text-in-image rendering.
But the harder problem is what van Ess calls the hybrid.
Wisconsin Gov. Tony Evers vetoed a bill that would’ve required residents to verify their age before accessing porn sites, as reported earlier by 404 Media. In a letter to the members of the assembly last week, Evers writes that the bill “imposes an intrusive burden on adults who are trying to access constitutionally protected materials.”
The bill (AB 105) would’ve required sites with more than one-third of their total content deemed harmful to minors to impose a “reasonable” form of age verification, such as asking users to show their government-issued ID. More than two dozen states have already passed similar age check requirements for access to adult content, including Arizona, Florida, Georgia, Missouri, Texas, and Virginia. As a result, Pornhub has blocked its site in these locations.
Last month, the Wisconsin American Civil Liberties Union testified that AB 105 “raises significant concerns around privacy, surveillance, and the First Amendment,” and it seems like Evers agreed. “I am vetoing this bill in its entirety because I object to this bill’s intrusion into the personal privacy of Wisconsin residents,” Evers writes, adding that he’s “concerned about data security and the potential for misuse of personally identifiable information” obtained as a result of the age verification process.
An early version of Wisconsin’s age verification bill also included a ban on virtual private networks (VPNs), which people have been using to circumvent online age checks. Lawmakers dropped this provision in February, though VPNs are becoming a target for regulators around the globe.
Despite vetoing this bill, Evers is leaving the door open for other kinds of age verification solutions, such as “device-based” methods that would verify the age of users on their phone or computer.