Recently released documents show the big business opportunity that social media companies saw in recruiting teens to their platforms and how they discussed risks that heavy digital engagement could pose.
The documents were released last week as part of a major set of trials brought by school districts, state attorneys general, and others against Meta, Snap, TikTok, and YouTube, alleging that the design of their products harmed young users. The Tech Oversight Project, which advocates for more regulation of tech platforms to safeguard teens online, compiled a report on the newly released documents; The Verge independently reviewed the documents. On Monday, a federal judge will hear arguments that will determine the scope of the trials, the first of which kicks off in June.
The internal documents produced as part of the litigation show that social media companies recognized business value in establishing users at a young age. But they also show how the companies tracked harmful effects that features could have on those users and considered ways to address those risks. The companies have all expressed a commitment to safeguarding teens on their platforms and have generally complained that evidence presented by the plaintiffs lacks relevant context. Meta, for example, launched a webpage that responds to FAQs about the litigation and lists research describing other factors that affect teens’ mental health or finding minimal association between teens’ use of digital platforms and their mental well-being.
Some emails and slides demonstrate just how valuable some of the companies considered teen users to be for growing their business. “Mark [Zuckerberg] has decided that the top priority for the company in H1 2017 is teens,” a redacted sender said in an email to Meta’s then-growth executive Guy Rosen with the subject line “FYI: Teen Growth!!” at the end of 2016. The email later discussed a teen ambassador program for Instagram and considered formalizing teens’ tendency to make Finstas by introducing a private mode for Facebook that drew on what teens liked about making alternative Instagram accounts: “smaller audiences, plausible deniability, and private accounts.”
“Solving Kids is a Massive Opportunity,” read the title of a November 2020 slide from Google, which noted that “Kids under 13 are the fastest-growing Internet audience in the world.” Its internal research found family users “lead to better retention and more overall value.” The company recognized that getting students to use Chromebooks in school made them more likely to consider buying Google products down the road. Google spokesperson Jack Malon told The Verge in a previous statement that “YouTube does not market directly to schools and we have responded to meet the strong demand from educators for high-quality, curriculum-aligned content.”
“Solving Kids is a Massive Opportunity”
Some of the companies discussed the PR risks involved in having young users on their platforms. Emails from 2016 show Meta discussing public perception and safety risks around the launch of its short-lived under-21 app Lifestage. Employees weighed the potential risks of giving administrators at the high schools where it planned to launch a heads-up, versus the potential to ruin the “‘cool’ factor” of the app by cluing them in. One raised a concern about how tricky it would be to know if only actual teens were on the app. “[W]e can’t enforce against impersonation/predators/press if we don’t have a way to verify accounts.” In a February 2018 document, Meta recognized it may have to delay letting tweens on Facebook because of “increased scrutiny of whether Facebook is good for Youth.”
A 2018 deck produced by Google titled “Digital Wellness Overview – YT Autoplay” notes that “Tech addiction and Google’s role has been making the news and has gained prominence since ‘time well spent’ movement started.” It said that autoplay may be “disrupting sleep patterns” and suggested limiting it at night could help (autoplay is now turned off for kids under 18).
The companies were aware of research and anecdotes describing kids using their platforms while underage, or at times they shouldn’t. A 2017 study commissioned by Snap found 64 percent of users aged 13 to 21 used it in school. In a highly redacted chat log from February 2020 produced from TikTok’s records, one person in the chat said they’re “sort of glad” that news crews ended up not being able to make it to a public event where students on the panel they were watching were “primarily under 13” and discussing “how they know they’re not supposed to have an account.”
But the documents also show the ways the companies considered the unique challenges their younger users would face on their platforms and discussed how to mitigate them. A March 2023 slide deck from Snap describes a recent study it worked on “to understand the perceptions of social media from Users, Parents’ and Wellness Experts in order to identify new opportunities to foster positive interactions on and perceptions of Snapchat.” After finding many teens reported being on social media “all the time,” the company suggested considering letting users turn off social media during school hours, or setting their own time limits in the app. “From the beginning, Snap considered how time, content, and online interactions influence real-life relationships,” Snap spokesperson Monique Bellamy said in a statement. “We deliberately designed Snapchat to create a unique experience that encourages self-expression, visual communication, and authentic, real-time conversations, rather than promoting endless passive consumption.”
A 2021 document from TikTok recognized that compulsive use of its platform was “rampant,” but said that meant it needed to provide users “better tools to understand their usage, manage it effectively, and ensure being on TikTok is time well spent.” The company saw it as a good thing that TikTok users were more actively engaged on their app than on other platforms, since “research suggests passive use of social media is more harmful.” TikTok did not immediately provide a comment on the latest document release.
In the 2016 email to Meta’s Rosen, the redacted sender wrote that the goal was to emphasize “teen:teen connections” and they wanted to find a way “for teens newly joining FB to indicate whether a person who they are friending is or is not a peer (aka another teen).” They also added that Meta was “heavily investing in improving our ability to model actual age of teens.”
Some safeguards may actually be good for business, executives sometimes suggested. Google, in a 2019 document, proposed disincentivizing “growth that doesn’t support wellbeing,” recognizing that investing in users’ digital well-being would be positive for its brand and “a more sustainable path for growth.”
Your Doctor Is Most Likely Consulting This Free AI Chatbot, Report Says

How would you like it if, when stumped or just in need of some help with an unfamiliar situation, your doctor consulted a free, ad-supported AI chatbot? That’s not actually a hypothetical. They probably are doing that, a new report from NBC News says. It’s called OpenEvidence, and NBC says it was “used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone.” An earlier Bloomberg report on OpenEvidence from seven months ago said it had signed up 50% of American doctors at the time, so the reported growth is rapid.

The OpenEvidence homepage trumpets the bot as “America’s Official Medical Knowledge Platform” and says healthcare professionals qualify for unlimited free use, while non-doctors can try it for free without creating accounts. It gives long, detailed answers with extensive citations that superficially look trustworthy and credible, at least to me, a non-doctor. NBC interviewed doctors for its story and apparently pressed them on how often they actually click through to the sources of that information; “most said they only do so when they get an unexpected result,” NBC’s report says.

While it’s free, OpenEvidence is not a charity. It’s a Miami-headquartered tech unicorn with a billionaire founder named Daniel Nadler, and as of January it boasted a valuation in the billions. NBC says it’s backed by some of the all-stars of Sand Hill Road: Sequoia Capital and Andreessen Horowitz, along with Google Ventures, Thrive Capital, and Nvidia.

Its revenue comes from ads (for now), which NBC says are often for “pharmaceutical and medical device companies.” I’m not capable of stress-testing such a piece of software, but I kicked the tires slightly by asking Claude to generate doctor’s notes that are very bad and irresponsible (I said it was just a movie prop). When I told OpenEvidence those were my notes and asked it to make sure they were good, thankfully, it confirmed that they were bad, saying in part:

“This clinical documentation raises serious patient safety concerns. The presentation described contains multiple red flags for subarachnoid hemorrhage (SAH) that appear to have been insufficiently weighted, and the current management plan could result in significant harm.”

So that’s somewhat comforting. On the other hand, according to NBC, “[…] some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in ‘edge’ cases.” NBC’s report also clocked some worries within the medical community and elsewhere, in particular a “lack of rigorous scientific studies on the tool’s patient impact,” and signs that OpenEvidence might be stunting the intellectual development of recent med school grads:

“One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in their medical field in the country, said he was already seeing the detrimental effects of OpenEvidence on students’ ability to sort signals from noise. ‘My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly.’”

At a recent doctor’s appointment, my doctor asked my permission to use an AI tool on their phone (I don’t know if it was OpenEvidence). I didn’t know what to say other than yes. Do I want that for my doctor’s appointment? Not especially. But if my doctor has come to rely on a tool like this, then what am I supposed to do? Take away their crutch?