Why is a U.S. reality TV star all over Chinese social media? The simple answer is “Why shouldn’t she be?” But if you’re American, and you feel lost without a deep understanding of Kris Jenner’s sudden popularity among China’s netizens, you need a lot of context.
We don’t often think of memes in the same terms as the rest of the digital economy, but there are parallels to be drawn between the evolution of how our time and money are spent online and how each new era’s memetic social comedy has grown and shifted in size, scope, and provenance. Step back to survey online humor’s development from demotivational posters to “facts” about recently deceased actor Chuck Norris to image macros, trollfaces, wojaks, slop, and beyond, and the breakneck pace of it all is nothing short of watching a fish crawl out of the primordial sea and Animorph into a Neanderthal in front of your eyes.
As the internet stratified itself into ever-more-niche silos of interest and allegiance, online memes became increasingly inscrutable to those outside of each in-group, save for the few terminally online individuals shackled to their screens and cursed to understand them all. Helpful resources like KnowYourMeme.com would eventually show up, academically taxonomizing and chronicling each new meme that emerged.
But just as offline Western hegemony is crumbling in real time, so too may be our meme supremacy. A brave new world of Chinese memes is emerging that has even our foremost meme scholars scrambling to keep up and legacy news outlets scratching their heads. Earlier this year, Harry Potter character Draco Malfoy became the unofficial mascot of Lunar New Year celebrations after netizens discovered that the Mandarin transliteration of Malfoy sounded auspiciously like both “horse” and “fortune.” Soon enough, Malfoy actor Tom Felton’s face was plastered throughout Chinese malls, homes, and social media.
Today, with the petrodollar’s death rattle soundtracking the next chapter of the American Century of Humiliation, a new prosperity-focused Chinese meme has emerged to end-zone dance on our misfortune. While younger generations of online Westerners may long to “become Chinese,” certain swaths of China’s Gen-Z are seeking the life of one particular American: Kris Jenner.
The Kardashian clan matriarch has become an overnight icon on the social media platform RedNote (aka Xiaohongshu). There, users have taken to changing their profile pictures to photos of the Keeping Up With the Kardashians star. The frenzy has also taken the form of public appeals and prayers to Jenner with the hopes of manifesting her fortune, fame, or even just a job offer. As TikTok user marcelowang0527 notes, the career-focused Jenner fans have even gone so far as to customize their pfp image of her in an outfit exemplifying their particular job—doctor, engineer, teacher, etc.
A Business Insider report on the trend found a RedNote user praising Jenner as “The Empress Dowager” and another encouraging everyone to “keep that 9-figure bank balance!”
Given the recent revelation that it was mostly bots behind the shaming of Chappell Roan, and the general degradation of our trust in reality thanks to AI, it would be more than reasonable to assume this Jenner fever is a manufactured marketing gimmick. A new season of KUWTK is filming right now, after all. But access to the show has been limited in China since 2011, and some of the country’s most egregiously wealth-flaunting influencers even found themselves deplatformed in 2024. A convoluted promo play just doesn’t add up. It seems this viral moment—the #krisjenner hashtag has now racked up over 53 million views—may be a genuine case of another culture organically finding its way to wholesome, good-natured shitposting.
KnowYourMeme may still be reluctant to log this Kris Jenner meme and others originating from what is arguably now the most powerful country in the world, but that doesn’t mean the rest of us shouldn’t be paying attention. Whether keeping tabs for cynically self-interested reasons or because you’re able to connect with the universality of the humor and humanity they convey, Chinese memes have undeniably broken containment. Let’s just hope we’re fortunate enough to catch some more before things go dark over here.
