Have you heard of The Velvet Sundown? It’s kind of like The Velvet Underground, except The Velvet Underground is definitely real, and the jury is still out on The Velvet Sundown.
The band’s photos look remarkably AI-generated: too clean, not quite textured enough, oddly inhuman. Yet the band has racked up more than 372,000 monthly listeners on Spotify. Their bio describes them as “quietly spellbinding” and leans on the odd, unspecific metaphors so common in AI-generated text, like comparing the band’s music to “a scent that suddenly takes you back somewhere you didn’t expect.”
The bio claims the band was formed by singer and mellotron player Gabe Farrow, guitarist Lennie West, synth player Milo Raines, and percussionist Orion “Rio” Del Mar. None of them has ever been interviewed. And, not that a social media account is necessarily proof of life, but none of them has an Instagram, TikTok, or Facebook account, and neither does the band itself. In fact, none of the band members seems to have a single shred of an internet presence.
The song credits on Spotify are also suspicious. Most artists list multiple people in their credits, but for every single one of The Velvet Sundown’s songs, the “Performed by,” “Written by,” and “Source” fields name only The Velvet Sundown. No producer is listed.
“The Velvet Sundown aren’t trying to revive the past,” their Spotify bio reads. “They’re rewriting it. They sound like the memory of a time that never actually happened… but somehow they make it feel real.”
Are they playing with us? Listening to the band myself, I think it does sound AI-generated: the lyrics lack specificity, and the music itself lacks depth. But it’s also kind of… fine music? Suno and Udio, two of the most-used AI-powered music generators, have been “churning out soulless slop” for about two years, as Music Radar reported. If The Velvet Sundown is using those tools, it might be one of the first acts to show that the platforms can “capture the public’s imagination in the way that many of the technology’s critics had feared.”
On YouTube, there’s an entire ecosystem of AI-generated music. One standout is AI For The Culture, a channel that reimagines rap and R&B tracks as vintage Motown or blues cuts — complete with fictional artists and AI-generated bios to match. One particularly notable example: an AI-rendered cover of Future’s “Turn On the Lights,” which was later sampled by rapper JPEGMAFIA on his latest album.
While the band hasn’t confirmed that it’s AI-generated, it has also done little to prove skeptics wrong. Music Radar says the music “bears the unmistakably lo-fi veneer of a Suno creation.” One Reddit post says there isn’t a “shred of evidence on the internet that this band has ever existed.”
But, in the end, there’s no actual proof that the band is AI-generated, and therein lies the problem. When AI music becomes this difficult to catch, whose job is it to catch it? The uncertainty has led some users to voice their disappointment that Spotify doesn’t tell listeners whether the band is AI-generated. “We should be boycotting Spotify by now,” one person wrote on Reddit, to which another responded by pointing out that the band is also on Apple Music and Amazon Music.
Spotify did not immediately respond to a request for comment from Mashable.