Giant monsters and manga are a goated combo, one that's only grown stronger as a pop-culture staple. Much of that is thanks to series like Kaiju No. 8, which set the bar high with its adult-cast twist, playing iconoclast to the well-trodden teenage protagonists who have long monopolized starring roles in shonen series.
But another series hewing close to that winning formula, and deserving just as much praise as its star rises, is Rai Rai Rai, an underappreciated Viz Media manga rich with gag-comedy charm and a deceptively provocative narrative hidden beneath the appeal of its cute girl donning an even cuter kaiju design.
Rai Rai Rai (which translates to “Lightning Lightning Lightning”), written by Yoshiaki, is a post-apocalyptic sci-fi action comedy series. In 2052, the world is on the upswing after an alien invasion half a century prior. Now, organizations are tasked with cleaning up the remaining alien monsters called varmints.
The series follows our crybaby hero, Sumire Ichigaya, an 18-year-old woman who, after being abducted by aliens, gains the power to transform into a kaiju. You don’t have to squint too hard to see its premise as a gender-swapped Kaiju No. 8 that trades homegrown kaiju for space kaiju. And well, yeah, that’s much of its onboarding. But as the series evolves, Rai Rai Rai branches out from being a twin to Kaiju No. 8 in interesting ways, and it’s worth getting in on the ground floor now, before it really takes off or gets cancelled (KNOCKS ON WOOD).
This is not a spoiler for the series’ twist, but Rai Rai Rai is more than the kind of “Kaiju No. 8 is over, here’s something similar” recommendation readers might expect. Despite feeling like the median of several manga’s core premises, it digs in its heels and holds strong as a series worth reading on its own merits. Those inspirations include early Dragon Ball‘s comedic timing, Ranma 1/2 and Kaiju No. 8‘s aesthetics, and a hint of Gunbuster and Chainsaw Man‘s rule of cool to round it out.
For one, Rai Rai Rai harkens back to the softer, rounder character designs of seminal manga series. Sumire’s ponytail look is peak Ranma 1/2—a style newer manga like Gokurakugai and Dandadan have wisely folded into their DNA, because creator Rumiko Takahashi is worth mimicking. Yet the series doesn’t just bask in charm; it layers on an edge reminiscent of, of course, Kaiju No. 8, but also Chainsaw Man.
That edge shows most clearly in Raiden, the militaristic varmint-killing organization Sumire is coerced into joining, where operatives are outfitted in sleek plug suits that boost the wearer’s combat prowess—always a plus in any sci-fi series. But clumsy Sumire, Rai Rai Rai‘s crybaby hero—born to whimsy, forced to lock in—anchors the story by persevering as its loveable goofball harboring her own tragedy.
Despite Rai Rai Rai‘s deceptively cute veneer, the series digs into heavy themes. Key among them are the physical abuse Sumire suffered at the hands of her mother, her parents’ crushing debt, and the exploitative jobs she takes to help them crawl out of it.
She’s a Denji-like figure, throwing herself into harm’s way for pay, to the point that Raiden doesn’t bother keeping its use of her some grand secret. You’d think all of this would culminate in a kaiju form that looks like something crawled out of Q Hayashida’s Dai Dark drafts. Instead, we get a cute subversion: Sumire’s kaiju form is more like an overstuffed plushie (or a Labubu). Watching her struggle to repress a Godzilla-style atomic breath, only to rally as a symbol citizens can embrace rather than fear (peep her Gunbuster pose), feels closer to Superman levels of hope-maxing than the sharp-edged poster boys shonen manga usually parades.

More crucially, despite being only roughly 40 chapters deep, Rai Rai Rai strikes a charming balance between gag‑manga comedy and its battle‑shonen‑meets‑horror aesthetic. In the same way that Magilumiere Co. LTD. riffs on My Hero Academia and Sailor Moon to prove girls can lead these series without looking like Hot Topic knockoffs, Rai Rai Rai pushes the oddly winning combo of a cute girl in a cute kaiju suit fighting for her life as something that doesn’t feel derivative but fresh. It’s mile‑a‑minute physical comedy that knows what makes kaiju media cool and leans heavily into that, with gnarly battles, unsettling kaiju designs, and a sharp critique of rah-rah militaristic obedience, making its whimsy feel not just charming but subversive and vital.
The manga industry is cutthroat, with countless promising series cancelled before they ever take off. That goes doubly for series with women at their centers, whose survival too often depends on word-of-mouth champions keeping them alive long enough to reach their full potential—as we’ve seen with titles like Love Bullet.
Hopefully, Rai Rai Rai sparks that same groundswell, because I want to see Yoshiaki keep cooking. It just introduced a Metal Gear Rising-coded muscle grandma as a wild new rival character, and it’d be a shame if this series ended up as another “what could’ve been” manga.
Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.