Young People Are Tripping on Benadryl—and It’s Always a Bad Time

There’s a figure who may greet you during an intense Benadryl trip.

Faceless, shrouded in black with red eyes and a top hat, it ominously lurks in the corner. The Benadryl Hat Man is a shared and recurring hallucination that people report witnessing when taking dozens of the antihistamine at a time. The figure, depicted in Halloween costumes, POV-Benadryl trip memes, and Walmart graphic tees, has become the symbol for a new drug trend that sees young people deliberately taking large doses of the drug, not to ward off allergies, but to get high.

John, a 21-year-old college student who used to trip on Benadryl, never saw the Hat Man. Yet, he says, “I could see how that could happen. It’s [Benadryl] digging in the depths of your brain to find whatever’s making you scared. So, if you’re scared of the Hat Man, I’m sure you’re going to see the Hat Man.” This coaxing of the unpleasant into revealing itself, horrible as it sounds, is in fact the point of recreational Benadryl use. (John asked that his real name not be used for fear of friends finding out.)

When taken in high doses, diphenhydramine, the active ingredient in Benadryl, functions as a deliriant, a class of hallucinogenic drugs that appears to be growing increasingly popular among young people for nonmedical use. Unlike psychedelics or other hallucinogens, there’s no real potential for a good trip on a deliriant. According to the people I spoke to, every trip is bad, every trip is brutal, and that’s the point.

In 2020, the “Benadryl challenge” gained traction on TikTok, daring participants to take doses of at least 12 Benadryl pills for an intense trip. The trend, which resurfaces every few years, drew attention to the psychoactive effects of deliriants. “I saw a video about it on TikTok once, so I knew it could be used recreationally,” one user tells me.

With little to no harm-reduction information readily available about high levels of consumption, problems began to arise. In May 2020, three Texas teens were treated for Benadryl overdoses in just a week, one of whom was just 14 years old and took 14 pills. The 14-year-old recovered and returned home the next day. In August 2020, a 15-year-old died from a seizure after overdosing on the drug in Oklahoma. In September 2020, the FDA issued a warning for parents to hide and lock up their Benadryl supply, citing the potential risk of heart problems, seizures, and, less commonly, comas and even death. Despite the warning, the trend seems to have persisted. In 2020, there were 4,618 cases involving Benadryl reported to US Poison Centers; that number climbed to 5,960 in 2023, according to a study published in Pediatrics Open Science in August. Benadryl and deliriants in general have embedded themselves as staples on the fringes of American youth culture—a cheap and easy way to get fucked up. WIRED reached out to Benadryl manufacturer Kenvue for comment. A spokesperson for the company stated, “This behavior is extremely concerning and dangerous,” and encouraged consumers to “carefully read and follow the instructions on the label and contact their health care professional should they have questions.”

John started taking Benadryl recreationally in November 2024, when he was 20, after using it to sleep and then hearing about the potential to trip online. He was depressed at the time and would take 12 pills for a big trip, multiple times a day, with each trip lasting four to six hours. Instead of the Hat Man, John saw eyelash mites, small bugs that cluster at the base of your eyelashes, alongside “shadows that would dart across your peripheral.” The trips were also tactile; John would see and feel spiders all over his body, describing a “foreboding tingling.”

Maul’s Lightsabers in ‘Shadow Lord’ Are Powered by Sam Witwer’s Screams

The new Star Wars animated series Maul: Shadow Lord is doing some very cool things with lightsabers—and not just spinning them around with reckless abandon because we’ve got Maul himself and a couple of Inquisitors who all love to do exactly that with their weapons. They look almost unlike anything we’ve seen from the weapons in Lucasfilm’s past output: blades that flicker and snarl like their wielders do, living flames that carve paths of incandescent energy across the screen instead of that typically clean, minimalistic energy we’re used to.

It makes Shadow Lord look even more visually impressive than it already is, and of course, the idea of lightsabers as gouts of flaming plasma is naturally very befitting of everyone’s favorite slightly pathetic but trying-his-best edgelord, Maul. But it turns out Shadow Lord‘s lightsabers—Maul’s specifically—are going the extra edgelord mile. Because there’s screaming in the sound mix.

Not just any screaming either, but Sam Witwer’s own howls.

The delightfully silly factoid was revealed by the show’s supervising sound editor, David W. Collins, in a new featurette about the process of creating Shadow Lord, which also shows off Witwer performing some of his own moves for animation reference. While Lucasfilm creatives were quick to note that the show does not use mocap for its animation, and the footage was strictly a reference point, there’s still something very funny about Witwer even giving himself some Maul tattoo makeup for the footage.

It’s long been clear that Witwer has put a lot of time and thought into his approach to Maul’s animated legacy over the past decade and a half, but now at least he’s put his vocals into it in a very different manner for Shadow Lord.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

Medical experts I spoke with balked at the idea of uploading their own health data for an AI model, like Muse Spark, to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for your doctor.

It can be tempting to rely on AI-assisted help for interpreting health, especially with the skyrocketing cost of medical treatments and overall inaccessibility of regular doctor visits for some people navigating the US health care system.

“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.

When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.

The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.

“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”

In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and influenced by how users ask questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.

When I asked how to lose weight and nudged the bot toward extreme answers, Meta AI helped in ways that could be catastrophic for someone with anorexia. While asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that this was not advisable for most people and could put me at risk of an eating disorder, Meta AI crafted a meal plan in which I would eat only around 500 calories most days, an amount that would leave me malnourished.
