Age verification is perhaps the hottest battleground for online speech, and the Supreme Court just settled a pivotal question: does using it to gate adult content violate the First Amendment in the US? For roughly the past 20 years the answer has been “yes” — now, as of Friday, it’s an unambiguous “no.”
Justice Clarence Thomas’ opinion in Free Speech Coalition v. Paxton is relatively straightforward as Supreme Court rulings go. To summarize, its conclusion is that:
- States have a valid interest in keeping kids away from pornography
- Making people prove their ages is a valid strategy to enforce that
- Internet age verification only “incidentally” affects how adults can access protected speech
- The risks aren’t meaningfully different from showing your ID at a liquor store
- Yes, the Supreme Court threw out age verification rules repeatedly in the early 2000s, but the internet of 2025 is so different the old reasoning no longer applies.
Around this chain of logic, you’ll find a huge number of objections and unknowns. Many of these were laid out before the decision: the Electronic Frontier Foundation has an overview of the issues, and 404 Media goes deeper on the potential consequences. With the actual ruling in hand, while people are working out the serious implications for future legal cases and the scale of the potential damage, I’ve got a few immediate, prosaic questions.
What’s the privacy threat level?
Even the best age verification usually requires collecting information that links people (directly or indirectly) to some of their most sensitive web history, creating an almost inherent risk of leaks. The only silver lining is that current systems largely seem to make good-faith attempts to avoid intentional snooping, and legislation includes provisions meant to discourage unnecessary data retention.
The problem is, proponents of these systems had the strongest incentives to make privacy-preserving efforts while age verification was still a contested legal issue. Any breaches could have undercut the claim that age-gating is harmless. Unfortunately, the incentives are now almost perfectly flipped. Companies benefit from collecting and exploiting as much data as they can. (Remember when Twitter secretly used phone numbers and email addresses collected for two-factor authentication for ad targeting?) Most state and federal privacy frameworks were weak even before federal regulatory agencies started getting gutted, and services may not expect any serious punishment for siphoning data or cutting security corners. Meanwhile, law enforcement agencies could quietly demand security backdoors for any number of reasons, including catching people viewing illegal material. Once you create those gaps, they leave everyone vulnerable.
Will we see deliberate privacy invasions? Not necessarily! And many people will probably evade age verification altogether by using VPNs or finding sites that skirt the rules. But in an increasingly surveillance-happy world, it’s a reasonable concern.
Is Pornhub coming back to Texas (and a bunch of other states)?
Over the past couple of years Pornhub has prominently blocked access to a number of states, including Texas, in protest of local laws requiring age verification. Denying service has been one of the adult industry’s big points of leverage, demonstrating one potential outcome of age verification laws, but even with VPN workarounds this tactic ultimately limits the site’s reach and hurts its bottom line. The Supreme Court ruling cites 21 other states with rules similar to the Texas one, and now that this approach has been deemed constitutional, it’s plausible more will follow suit. At a certain point Pornhub’s parent company Aylo will need to weigh the costs and benefits, particularly if a fight against age verification looks futile — and the Supreme Court decision is a step in that direction.
In the UK, Pornhub ceded territory on that very front a couple of days ago, agreeing (according to British regulator Ofcom) to implement “robust” age verification by July 25th. The company declined comment to The Verge on the impact of FSC v. Paxton, but backing down wouldn’t be a surprising move here.
What counts as porn, anyway?
I don’t ask this question with respect to the law itself — you can read the legal definitions within the text of the Texas law right here. I’m wondering, rather, how far Texas and other states think they can push those limits.
If states stick to policing content that most people would classify as intentional porn or erotica, age-gating on Pornhub and its many sister companies is a given, along with other, smaller sites. Non-video but still sex-focused sites like the fiction portal Literotica are probably covered as well. More hypothetically, there are general-focus sites that happen to allow visual, text, and audio porn and have a lot of it, like 4chan — though a full one-third of the service being adult content is a high bar to clear.
Beyond that, we’re pretty much left speculating about how malicious state attorneys general might be. It’s easy to imagine LGBTQ resources or sex education sites becoming targets despite having the exact kind of social value the law is supposed to exempt. (I’m not even getting into a federal attempt to redefine obscenity in general.) At this point, of course, it’s debatable how much justification is required before a government can mount an attack on a website. Remember when Texas investigated Media Matters for fraud because it posted unflattering X screenshots? That was roughly the legal equivalent of Mad Libs, but the attorney general was mad enough to give it a shot. Age verification laws, by contrast, are tailor-made tools for taking aim at any given site.
The question “What is porn?” is going to have a tremendous impact on the internet — not just because of what courts believe is obscene for minors, but because of what website operators believe the courts believe is obscene. This is a subtle distinction, but an important one.
We know legislation limiting adult content has chilling effects, even when the laws are rarely used. While age verification rules were in flux, sites could reasonably delay making a call on how to handle them. But that grace period is over — seemingly for good. Many websites are going to start making fairly drastic decisions about what they host, where they operate, and what kind of user information they collect, based not just on hard legal decisions but on preemptive phantom versions of them. In the US, during an escalating push for government censorship, the balance of power has just tipped dramatically. We don’t know how far it has left to go.