The U.S. Department of Homeland Security posted a bizarre new video to social media platforms on Thursday featuring footage of federal agents arresting protesters in Portland, Oregon. The video uses a song that became very popular among Nazis and white supremacists at the tail end of President Donald Trump’s first term, in what appears to be a dog whistle to far-right extremists.
DHS captioned the video, “End of the Dark Age, beginning of the Golden Age,” on sites like X and Instagram, along with a link to the ICE recruitment website. The video was also posted to Bluesky, the social media platform that many federal agencies joined one week ago to troll its more liberal userbase.
"End of the Dark Age, beginning of the Golden Age."
— Homeland Security (@DHSgov) October 23, 2025
The song in the video, MGMT’s “Little Dark Age,” was released in 2018, though it’s been slowed down to an absurd degree. And while nothing in the song suggests sympathy with far-right ideology (quite the opposite, in fact), the song was adopted by far-right content creators in late 2020 to pair with Nazi and white supremacist imagery.
The Institute for Strategic Dialogue, a British think tank that tracks global extremism online, published a study in 2021 that noted how popular the song was with Nazis. One example used in the report shows how the song was paired on TikTok with a slideshow of George Lincoln Rockwell, the founder of the American Nazi Party, who was killed in 1967.
But the report also explains how popular the song has been to promote esoteric Nazism, featuring memes and fictional characters with far-right symbols like the Sonnenrad or Black Sun. The fact that the song is also slowed down in a very exaggerated manner in the DHS video is another hallmark of the far-right videos that went viral in the early 2020s.
Again, nothing about the song makes sense as a far-right anthem, as you can see from some of the lyrics, which seem to be criticizing police violence:
Policemen swear to God, love seeping from their guns
I know my friends and I would probably turn and run
If you get out of bed, come find us heading for the bridge
Bring a stone, all the rage, my little dark age
The Guardian described the far-right’s affinity for the song in an article from 2024: “Certainly, its adoption doesn’t say much for your average neo-Nazi’s ability to understand English. Little Dark Age’s lyrics are, fairly obviously, an excoriation of Trump-era America and racist police violence.”
Gizmodo reached out to DHS for comment, and the agency was characteristically indignant about our questions.
“Just because you don’t like something doesn’t make it Nazi propaganda—this is bottom barrel ‘journalism.’ MGMT’s ‘Little Dark Age’ is wildly popular on both sides of the political spectrum. Go outside, touch grass, and get a grip,” read an unsigned email, attributed to a “DHS spokesperson.”
The agency also sent a link to a 2022 article in Spin about the song and highlighted a quote from MGMT co-founder Ben Goldwasser that reads, “A lot of times, there is no deeper meaning.” DHS didn’t respond to a follow-up question about who may have created the video.
That kind of response from DHS is to be expected, of course. The far-right often operates in a world of plausible deniability. But since President Trump returned to office in January, DHS has posted a lot of fascist content clearly intended to signal to Americans just how extreme the agency has become.
Back in August, Border Patrol, which is part of DHS, posted a video to Instagram and Facebook set to a song containing the antisemitic lyrics “Jew me” and “kike me”; the video only gained widespread attention last week. Border Patrol removed the video and reuploaded it with new music, but never explained why it was posted in the first place. The agency’s only statement read more like a petulant child’s retort than an explanation.
But people on social media know what the song “Little Dark Age” can mean. One right-wing political commentator on X even had the idea back in July, writing, “DHS should drop a little dark age edit just to fuck with people.” And many far-right accounts on X clearly understood the message that was intended by posting a video with that song.
“Dhs is posting little dark age edits. Crazy timeline on our hands,” wrote one account that features a profile picture of an anime character wearing a Nazi hat.
Another extremist account quote-tweeted the DHS video with, “Good job @DHS! You caught up to were we where 4 years ago!” That account included an upload of another video, which features Adolf Hitler along with the text “12 years not a slave,” and a screenshot from the livestreamed rampage of white supremacist terrorist Brenton Tarrant, who killed 51 people at two mosques in Christchurch, New Zealand, in 2019.
It’s not just the song that DHS has chosen that suggests the agency knows what it’s doing. The imagery in Homeland Security’s “Little Dark Age” edit is pretty haunting, utilizing footage from the protests at an ICE facility in Portland and a glitchy aesthetic that’s so common among so-called fashwave creators. (Yes, the fash stands for fascist.) The video features an “antifa” logo that’s usurped by the DHS logo, as well as clips of agents wearing gas masks while arresting people amid a haze of smoke.
Obviously, when you start talking about obscure corners of the far-right internet while using terms like fashwave, it can sound a little silly. These are just internet memes, after all. But there’s a visual language that has developed online among the far-right. And while DHS can insist it didn’t intend for the video to be interpreted as Nazi propaganda, there are plenty of literal Nazis online who believe otherwise.

Your Doctor Is Most Likely Consulting This Free AI Chatbot, Report Says

How would you like it if, when stumped or just in need of some help with an unfamiliar situation, your doctor consulted a free, ad-supported AI chatbot? That’s not actually a hypothetical. They probably are doing that, a new report from NBC News says. The tool is called OpenEvidence, and NBC says it was “used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone.” An earlier Bloomberg report on OpenEvidence, from seven months ago, said it had signed up 50% of American doctors at the time, so the reported growth is rapid.

The OpenEvidence homepage trumpets the bot as “America’s Official Medical Knowledge Platform” and says healthcare professionals qualify for unlimited free use, though non-doctors can try it for free without creating an account. It gives long, detailed answers with extensive citations that superficially look—to me, a non-doctor—trustworthy and credible. NBC interviewed doctors for its story and apparently pressed them on how often they actually click those links to the sources of information; “most said they only do so when they get an unexpected result,” NBC’s report says.

While it’s free, OpenEvidence is not a charity. It’s a Miami-headquartered tech unicorn with a billionaire founder, Daniel Nadler, and as of January it boasted a billion-dollar valuation. NBC says it’s backed by some of the all-stars of Sand Hill Road: Sequoia Capital and Andreessen Horowitz, along with Google Ventures, Thrive Capital, and Nvidia.

And its revenue comes from ads (for now), which NBC says are often for “pharmaceutical and medical device companies.” I’m not capable of stress-testing such a piece of software, but I kicked the tires slightly by asking Claude to generate doctor’s notes that are very bad and irresponsible (I said it was just a movie prop). When I told OpenEvidence those were my notes and asked it to make sure they were good, thankfully, it confirmed that they were bad, saying in part:

“This clinical documentation raises serious patient safety concerns. The presentation described contains multiple red flags for subarachnoid hemorrhage (SAH) that appear to have been insufficiently weighted, and the current management plan could result in significant harm.”

So that’s somewhat comforting. On the other hand, according to NBC: “[…] some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in ‘edge’ cases.”

NBC’s report also clocked some worries within the medical community and elsewhere, in particular a “lack of rigorous scientific studies on the tool’s patient impact,” and signs that OpenEvidence might be stunting the intellectual development of recent med school grads: “One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in his medical field in the country, said he was already seeing the detrimental effects of OpenEvidence on students’ ability to sort signals from noise. ‘My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly.’”

At a recent doctor’s appointment, my doctor asked my permission to use an AI tool on their phone (I don’t know if it was OpenEvidence). I didn’t know what to say other than yes. Do I want that at my doctor’s appointment? Not especially. But if my doctor has come to rely on a tool like this, then what am I supposed to do? Take away their crutch?