With newer and newer crops of manga vying for fans’ attention every waking second, sifting through the noise to find a hidden gem with real staying power is a perpetual endeavor. One gem I’ve recently set my peepers on and can’t get enough of is After God, a manga that’s quietly been building one of the most intriguing fantasy premises I’ve seen in a long time.
After God, created by Sumi Eno, is a dark-fantasy manga that takes a dystopian spin on polytheism in the modern age. In it, the world is suddenly invaded by creatures known as gods, because there’s honestly no better word for beings so inconceivably powerful. As far as humanity has worked out through trial and error, their rules are as follows: they can’t be captured by cameras, because then their image would be that of a false idol. So the only way to see them is, well, to see them.
But once you do, it’s already too late. Rumor has it that when you look into the eyes of a god, you see the most beautiful being you’ve ever laid eyes on in your entire life, leading to a sense of euphoric bliss. But what’s actually happening is more akin to an anglerfish luring in its prey. Once you’re captured, they basically blow a kiss at you, and you’re turned into water. That’s it. Done.
In their wake, these gods have developed a bit of a split following. On one hand, there are anti-god researchers dedicated to finding a way to kill them, referring to the creatures as Idolatry Prohibited Organisms (IPOs). On the other, the gods are at the center of social upheaval: whether people practice religion, are agnostic, or pledge their lives to science, many have no choice but to bend the knee to the IPOs’ awesome might and influence and treat them like the deities they appear to be. The gods have acolytes and all kinds of zealots carrying out their mysterious wishes. Those who don’t subscribe to drinking the Kool-Aid wear garments and masks that cover their faces, protecting themselves on the off chance they encounter a god in the wild.
In that same vein, the uninhabitable danger zones (locations in Japan where gods have been cordoned off, leaving ecological disasters in their territory) are a magnet for the worst content creators you know, who venture in to shoot Logan Paul-esque suicide forest videos for the views. In summation, society is fucked.

All that is just the groundwork for the series’ premise from chapter one. The story proper follows Waka Kamikura, a high schooler who travels to the city in search of answers about her best friend’s disappearance (she’s the lady in the above image). Along the way, she nearly wanders into one of those aforementioned danger zones before a researcher named Tokigawa Sachiyuki stops her. Their chance encounter leads to both of them running into an acolyte who seems to kill Waka by piercing her skull with the support beam of a playground swing.
But because this is all still chapter one, the other shoe still has to drop. What we discover, as the Viz Media trailer scooped, is that Waka has the eyes of a god. What’s more, she also seems to be harboring a more bloodthirsty identity under her unassuming disposition. Thankfully, Waka appears to be fighting on the side of humanity and swears, in no uncertain terms, that she wants to kill every god for their involvement in her best friend’s disappearance.

Granted, much of After God’s plot progression is something fans, present company included, have sworn to keep under an unspoken bond of spoiler secrecy. And for good reason; the series is one best explored without having one iota of an idea of where it’s going. Not since picking up Kasumi Yasuda’s Fool Night (another dystopian manga folks should totally read, about people volunteering to be transformed into plants to save a world engulfed in eternal darkness) have I encountered a series whose story is so entrancing. Its big double-page spreads of gods are equal parts grotesque and breathtaking.

Eno’s artwork is about as close as you’ll get to those biblically accurate angel memes rendered inside a hauntingly gorgeous manga. And a lot of that feels by design, given how ingrained the manga is in the hyperspeed discourse of a media cycle trying to make sense of IPOs, and in how social media flattens them into an amorphous meme to be taken lightly. That sinking feeling of distrust practically emanates from chapter to chapter as everyone—gods, acolytes, Waka, Tokigawa, and the anti-god researchers—has ulterior motives at play and will use the others to their own ends in pursuing them.
But the series also balances its dour, weighty plotting and body horror with a fair share of humor that actually lands. These elements (like the herd of cats seen below) somehow don’t detract from the story’s overall weight but add another wrinkle to how off-kilter and vexing the whole thing is. They’ve kind of got a Puck-from-Berserk quality of making the read less depressing, and they’re greatly appreciated.

So if you’re fiending to check out a manga series with an adult cast that’s equal parts endearing and detestable, world-building that doesn’t feel like it’s spinning its wheels, and drop-dead gorgeous art, you should definitely add After God to your shelf.
Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.
Your Doctor Is Most Likely Consulting This Free AI Chatbot, Report Says

How would you like it if, when stumped or just in need of some help with an unfamiliar situation, your doctor consulted a free, ad-supported AI chatbot? That’s not actually a hypothetical: they probably are doing that, according to a new report from NBC News. The tool is called OpenEvidence, and NBC says it was “used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone.” An earlier Bloomberg report on OpenEvidence from seven months ago said it had signed up 50% of American doctors at the time, so its reported growth is rapid.

The OpenEvidence homepage trumpets the bot as “America’s Official Medical Knowledge Platform” and says healthcare professionals qualify for unlimited free use, though non-doctors can try it for free without creating accounts. It gives long, detailed answers with extensive citations that superficially look—to me, a non-doctor—trustworthy and credible. NBC interviewed doctors for its story and apparently pressed them on how often they actually click through to the sources of that information; “most said they only do so when they get an unexpected result,” NBC’s report says.

While it’s free, OpenEvidence is not a charity. It’s a Miami-headquartered tech unicorn with a billionaire founder named David Nadler, and as of January it boasted a billion-dollar valuation. NBC says it’s backed by some of the all-stars of Sand Hill Road: Sequoia Capital and Andreessen Horowitz, along with Google Ventures, Thrive Capital, and Nvidia.

And its revenue comes from ads (for now), which NBC says are often for “pharmaceutical and medical device companies.” I’m not capable of stress-testing such a piece of software, but I kicked the tires slightly by asking Claude to generate doctor’s notes that are very bad and irresponsible (I said it was just a movie prop). When I told OpenEvidence those were my notes and asked it to make sure they were good, thankfully, it confirmed that they were bad, saying in part:

“This clinical documentation raises serious patient safety concerns. The presentation described contains multiple red flags for subarachnoid hemorrhage (SAH) that appear to have been insufficiently weighted, and the current management plan could result in significant harm.”

So that’s somewhat comforting. On the other hand, according to NBC, “[…] some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in ‘edge’ cases.”

NBC’s report also clocked some worries within the medical community and elsewhere, in particular a “lack of rigorous scientific studies on the tool’s patient impact,” and signs that OpenEvidence might be stunting the intellectual development of recent med school grads:

“One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in their medical field in the country, said he was already seeing the detrimental effects of OpenEvidence on students’ ability to sort signals from noise. ‘My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly.’”

At a recent doctor’s appointment, my doctor asked my permission to use an AI tool on their phone (I don’t know if it was OpenEvidence). I didn’t know what to say other than yes. Do I want that for my doctor’s appointment? Not especially. But if my doctor has come to rely on a tool like this, then what am I supposed to do? Take away their crutch?