The Iranian women Trump ‘saved’ from execution are simultaneously real and AI-manipulated

Only the night before, he had posted on Truth Social about the imminent executions of these women, quoting a screenshot that included a collage of eight glamorously backlit, soft-focus portraits. The photos of the women were immediately accused of being AI-generated. “Trump is begging Iranian leaders to not execute 8 AI-generated women. This is the funniest thing I’ve ever seen,” said one viral X post.
On top of that, almost immediately after Trump’s announcement, Mizan, an Iranian state news agency, called the president a liar. “Last night, Donald Trump, citing a completely false news story, called on Iran to overturn the death sentences of eight women.” Mizan said that some of the women had already been released and others were facing prison time but not execution, and furthermore said that Tehran had made no concessions — presumably, the status of the women has not changed.
The X account for the Iranian embassy in South Africa, perhaps the most relentless shitposter among Iran’s state-affiliated accounts, was quick to pile on by generating its own set of eight women:
The collage that Trump posted is, at the very least, AI-modified, Mahsa Alimardani, the associate director of the Technology Threats & Opportunities program at WITNESS, told The Verge. But the women themselves are real. The woman in the top right corner of the collage is Bita Hemmati, whose photograph appeared in several right-leaning news outlets last week. Hemmati is confirmed to have received a death sentence issued by Branch 26 of the Tehran Revolutionary Court for “operational action for the hostile government of the United States and hostile groups.”
Alimardani named six of the women (Bita Hemmati, Mahboubeh Shabani, Venus Hossein-Nejad, Golnaz Naraghi, Diana Taherabadi, Ghazal Ghalandri), and said that the identities of the final two (said to be Panah Movahedi and Ensieh Nejati) were still unverified. The six verified women participated in protests against the government in January. Aside from Hemmati, none of the other women are reported to have received death sentences.
It’s not surprising that Trump has a careless disregard for the truth; it’s not surprising, either, for the Iranian regime to fudge the details to suit its own narrative, or to make light of real political prisoners in order to dunk on the United States.
The additional wrinkle is that the account mocking Trump for coming to the rescue of “8 AI-generated women” is the very same one that landed South Korean president Lee Jae-myung in hot water when he quote-posted a misleadingly labeled video from that account. Israeli officials have accused the account of being “well-known for spreading disinformation.” The case of the sketchy Lee Jae-myung quote-post is a story of mingled truth and misinformation: the post got facts very wrong, but the video — of Israeli Defense Forces soldiers shoving a limp body off a rooftop in Gaza — was real, documenting an event that possibly implicates Israeli forces in a violation of international law.
The case of the eight Iranian protesters also features that same mingling of fact and fiction into a fuzzy distortion that fuels an endless disputation of real human rights violations. Their lives have been reduced to glossy pixels and quote-dunks, the stuff of propaganda and parody. While known liars fight with each other on the internet about who these women are and what will happen to them, they — verifiably six of them, at least — remain real people who exist beyond the Iranian internet blackout.
Your Doctor Is Most Likely Consulting This Free AI Chatbot, Report Says

How would you like it if, when stumped or simply in need of help with an unfamiliar situation, your doctor consulted a free, ad-supported AI chatbot? That’s not actually a hypothetical. They probably are doing that, a new report from NBC News says. The tool is called OpenEvidence, and NBC says it was “used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone.” An earlier Bloomberg report on OpenEvidence, from seven months ago, said it had signed up 50% of American doctors at the time, so the reported growth is rapid.

The OpenEvidence homepage trumpets the bot as “America’s Official Medical Knowledge Platform” and says healthcare professionals qualify for unlimited free use, while non-doctors can try it for free without creating an account. It gives long, detailed answers with extensive citations that superficially look — to me, a non-doctor — trustworthy and credible. NBC interviewed doctors for its story and pressed them on how often they actually click through to the cited sources; “most said they only do so when they get an unexpected result,” NBC’s report says.

While it’s free, OpenEvidence is not a charity. It’s a Miami-headquartered tech unicorn with a billionaire founder named David Nadler, and as of January it boasted a billion-dollar valuation. NBC says it’s backed by some of the all-stars of Sand Hill Road: Sequoia Capital and Andreessen Horowitz, along with Google Ventures, Thrive Capital, and Nvidia.

And its revenue comes from ads (for now), which NBC says are often for “pharmaceutical and medical device companies.” I’m not capable of rigorously stress-testing such a piece of software, but I kicked the tires slightly by asking Claude to generate doctor’s notes that are very bad and irresponsible (I said it was just a movie prop). When I told OpenEvidence those were my notes and asked it to make sure they were good, thankfully, it confirmed that they were bad, saying in part:

“This clinical documentation raises serious patient safety concerns. The presentation described contains multiple red flags for subarachnoid hemorrhage (SAH) that appear to have been insufficiently weighted, and the current management plan could result in significant harm.”

So that’s somewhat comforting. On the other hand, according to NBC, “some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in ‘edge’ cases.” NBC’s report also clocked some worries within the medical community and elsewhere, in particular a “lack of rigorous scientific studies on the tool’s patient impact,” and signs that OpenEvidence might be stunting the intellectual development of recent med school grads. One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in his field in the country, said he was already seeing the detrimental effects of OpenEvidence on students’ ability to sort signal from noise: “My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly.”

At a recent doctor’s appointment, my doctor asked my permission to use an AI tool on their phone (I don’t know if it was OpenEvidence). I didn’t know what to say other than yes. Do I want that for my doctor’s appointment? Not especially. But if my doctor has come to rely on a tool like this, then what am I supposed to do? Take away their crutch?