For all the power she wields over the White House's affairs, Laura Loomer lacks the traditional tools of her rivals in the MAGA influencer industrial complex: the highest follower count, the most political power, the most internet platforms, and so on. But the fact remains that she's the influencer responsible for getting Donald Trump to fire over a dozen members of his administration (and counting) for the hazily defined crime of being disloyal to MAGA, something none of her peers, individually, has been able to do. To understand how she operates, look no further than Loomer's latest attempted power play, which, as always, involves a fair amount of self-humiliation (and some disgusting slander).
Earlier this week, she leaked a private deposition she'd given in a lawsuit, in which she explained, in her own words, why she's published some of the most outré takes and allegations about her "enemies" on social media. The deposition was given as part of her defamation suit against HBO host Bill Maher, who claimed on a February 2025 show that Loomer was having a sexual affair with President Donald Trump. But you might have missed that context, since the social media chatter and headlines gravitated toward the sections of the deposition where Loomer was grilled about her most audacious posts: she claimed, among other things, that Sen. Lindsey Graham was secretly gay, that former Vice President Kamala Harris was a "DEI skank," and that her bitter MAGA rival Rep. Marjorie Taylor Greene (R-GA) divorced her husband because of the "Arby's in her pants" and was a "political prostitute [who was] sucking McCarthy's dick all day."
Unsurprisingly, Loomer defended herself and her statements, both in the deposition ("I'm saying she literally—it's so ridiculous. I'm saying she literally put Arby's in her pants") and in public, after the deposition was reported by Will Sommer at The Bulwark. "I was asked about Lindsey during my deposition so I had to tell the truth. I was under oath," Loomer tweeted, along with a screenshot of the section where she explained why she believed Graham was gay.
HBO's lawyers immediately argued that Loomer had violated several laws by making the private deposition public, and the document was swiftly re-sealed. Loomer, for her part, claimed she was simply trying to show that her testimony "went so well that the leftist law firm [representing HBO] wanted to SEAL my deposition." Whether it went well is debatable, given the potential legal consequences for Loomer of violating a court order, as well as the now-incontrovertible record of her saying some absolutely wild things in a legal deposition and defending them.
Loomer described the stunt as another example of "Loomering," the term she uses for her internet bullying campaigns. At first glance, it looks like a tactic honed during her time as a hidden-camera sting operator at Project Veritas, which famously recorded members of the media and academics saying things that could be spun as anti-conservative and amoral. Loomer has stepped up her game, however, putting herself in front of the camera (and, eventually, the public eye) to make her point.
The fact that she constantly embarrasses herself only enhances the efficacy of "Loomering." First, shame and embarrassment do not matter in the long term, so long as people are paying attention online. (She said as much the next day: "I wake up everyday to another hit piece. It's because I am effective and it threatens a lot of people.") Second, the internet is forever: even if a judge orders a deposition re-sealed, gawkers will save copies, screenshot the juicy bits, and publish them for posterity. Loomer may have been recorded reading her own tweets out loud and forced to explain the gross bits, but she has also gotten several White House administration officials fired.
While it remains to be seen whether her influence extends beyond Trump's daily capriciousness, Loomer's unnerving gift for internet bullying will stay effective as long as she has allies in the administration who leak her the insider info she uses to publicly humiliate others. (As she told HBO's lawyers, her friends in the White House told her that Graham was gay.) And as long as the storms she creates can grab Trump's attention, he will most likely keep picking up her calls.

Your Doctor Is Most Likely Consulting This Free AI Chatbot, Report Says
How would you like it if, when stumped or just in need of some help with an unfamiliar situation, your doctor consulted a free, ad-supported AI chatbot? That’s not actually a hypothetical. They probably are doing that, a new report from NBC News says. It’s called OpenEvidence, and NBC says it was “used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone.” An earlier Bloomberg report on OpenEvidence from seven months ago said it had signed up 50% of American doctors at the time—so reported growth is rapid.
The OpenEvidence homepage trumpets the bot as "America's Official Medical Knowledge Platform" and says healthcare professionals qualify for unlimited free use, while non-doctors can try it for free without creating accounts. It gives long, detailed answers with extensive citations that superficially look trustworthy and credible, at least to me, a non-doctor. NBC interviewed doctors for its story and apparently pressed them on how often they actually click the links to those cited sources; "most said they only do so when they get an unexpected result," NBC's report says.
While it's free, OpenEvidence is not a charity. It's a Miami-headquartered tech unicorn with a billionaire founder, Daniel Nadler, and as of January it boasted a billion-dollar valuation. NBC says it's backed by some of the all-stars of Sand Hill Road: Sequoia Capital and Andreessen Horowitz, along with Google Ventures, Thrive Capital, and Nvidia.
And its revenue comes from ads (for now), which NBC says are often for "pharmaceutical and medical device companies." I'm not capable of stress-testing such a piece of software, but I kicked the tires slightly by asking Claude to generate doctor's notes that were very bad and irresponsible (I told it they were just a movie prop). When I then gave OpenEvidence those notes as my own and asked it to make sure they were good, it thankfully confirmed that they were bad, saying in part:
"This clinical documentation raises serious patient safety concerns. The presentation described contains multiple red flags for subarachnoid hemorrhage (SAH) that appear to have been insufficiently weighted, and the current management plan could result in significant harm."

So that's somewhat comforting. On the other hand, according to NBC: "[…] some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in 'edge' cases."

NBC's report also clocked some worries within the medical community and elsewhere, in particular a "lack of rigorous scientific studies on the tool's patient impact," and signs that OpenEvidence might be stunting the intellectual development of recent med school grads: "One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in their medical field in the country, said he was already seeing the detrimental effects of OpenEvidence on students' ability to sort signals from noise. 'My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly.'"

At a recent doctor's appointment, my doctor asked my permission to use an AI tool on their phone (I don't know if it was OpenEvidence). I didn't know what to say other than yes. Do I want that for my doctor's appointment? Not especially. But if my doctor has come to rely on a tool like this, then what am I supposed to do? Take away their crutch?