Fallout is not a direct adaptation of the beloved games—neither the Black Isle/Interplay classics nor the revived Bethesda/Obsidian era of the series—which means that there’s plenty of space for it to carve out its own imagining of the franchise in its corner of the wasteland. That hasn’t stopped game fans, especially as the show begins to play more directly with plot beats from the games, from bristling when the show makes a departure from the source material, big or small. But in this week’s episode of the show’s second season, the series is already returning to its biggest and most controversial swing away from the games… and tying it all into another fascinating twist it’s dealing with this season.
After the premiere episode caught us up with the Ghoul and Lucy’s journey toward New Vegas, “The Golden Rule” checks in on the now-separated Maximus, back firmly within the arms of the Brotherhood of Steel after he was cut off from finding Lucy again in the climax of season one. It’s an episode broadly about how the ramifications of Maximus’ actions across season one have come home to roost, and about his willingness to survive them, whatever the cost. But before we even get to all that (and a potentially simmering schism within the Brotherhood), the episode opens with a flashback to a moment that is at once a defining point in Maximus’ life and one of Fallout’s most controversial choices: the destruction of Shady Sands, the heart of the New California Republic, through the machinations of Vault-Tec.
In isolation, it’s a great sequence—tense and tragic in equal measure, not just for the impact it has on poor young Maximus’ life as his parents sacrifice themselves to shield him from the blast, but because of that inevitable gut-wrench of watching a civilization that managed to truly thrive and establish itself in the aftermath of the war get snuffed out just as it was on the verge of enduring. An endurance, of course, that Vault-Tec (mostly through Hank and his personal spite at the NCR for taking his wife from him) cannot abide, because a future for the wasteland championed through community rather than capitalism is a bigger threat to its own survival than any nuclear war could be.

But it’s also interesting that the season comes right back to this moment before we rejoin Maximus as an adult, the horrors of his childhood losses having shaped him into the man he is. The destruction of Shady Sands was a major shock in Fallout’s first season—and for game fans, not for good reasons. In the game series, Shady Sands was the heart of the NCR, a thriving, major faction in the Fallout universe’s political landscape. Although the show itself is set decades after the modern games, the destruction of Shady Sands was established as taking place between the events of Fallout: New Vegas and Fallout 4—a move some saw as a disruption of the franchise’s continuity and a betrayal of what the games had established, not just in terms of canon, but in wiping the slate of a viable future for Fallout’s world in order to maintain the wasteland status quo forevermore. Of the many settlements and attempts at renewal Fallout had given us over the years, the endurance of Shady Sands and the NCR was arguably one of the most hopeful outcomes—but in the show, it was all gone in an instant.
Rather than walking back that controversy, Fallout simply stands by it, confident that it is telling its own story rather than being beholden to, or riffing off of, the games’ established events. If anything, the sequence is the series telling its audience that this change is more than a simple worldbuilding tweak, and that this interpretation of Fallout is willing to explore the emotional and thematic impact of Shady Sands’ loss on the world and its characters. Fascinatingly, in revisiting it now, Fallout also ties the destruction of Shady Sands into another controversial step away from the games that is forming a major part of season two: the arrival of Robert House as a major player and his plans (and seemingly Vault-Tec’s in turn) to dominate the minds of others with brain-computer interface chips.

In an almost cheeky touch, Fallout pays direct homage to the games by integrating the chip into Shady Sands’ destruction. It’s revealed that the nuke is smuggled into the heart of the city by an NCR trooper who’s been implanted with one of the devices, capable only of muttering “patrolling the Mojave almost makes you wish for a nuclear winter” over and over before he keels over—a nod to an oft-repeated line of NPC dialogue from New Vegas, the NCR equivalent of Skyrim’s “arrow to the knee” meme. But the interface chips, and their connection to Mr. House and Hank MacLean alike, are themselves quickly becoming another point of controversy for the show among game fans: while there are plenty of mind-control-adjacent pieces of tech in Fallout, there’s nothing quite like the interface chip in the games, and definitely nothing related to anything House was planning as he and RobCo prepared to survive the coming war, as explored in New Vegas.
By tying these two points of controversy together—or rather, points of differentiation between the show and the games—Fallout has the confidence to tell its audience that it is telling its own story, inspired by, but not beholden to, the games. It’s a refreshingly candid tone for a video game adaptation to take, as more and more of them arrive, many making a selling point of fidelity to their gaming source material. Fallout isn’t dismissing the games here, so far at least: it’s showing its love for the franchise by being bold enough to push things in new directions and create things of its own. They might clash sometimes, but the overall effect is additive, enriching both the show and the wider franchise. Even if, in this case, it has to nuke a city to make the proverbial omelette.