In a video on Sora, OpenAI’s new TikTok-like social media app, pink pigs grunt and snort in the pens of a never-ending factory farm. Each pen is equipped with a feeding trough and a smartphone screen playing a feed of vertical videos. A terrifyingly realistic Sam Altman stares directly at the camera, as though making eye contact with the viewer. The AI-generated Altman asks, “Are my piggies enjoying their slop?”
This is what it’s like using the Sora app, less than 24 hours after it was launched to the public in an invite-only early access period.
In the next video on Sora’s For You feed, Altman appears again. This time, he’s standing in a field of Pokémon, where creatures like Pikachu, Bulbasaur, and a sort of half-baked Growlithe are frolicking through the grass. The OpenAI CEO looks at the camera and says, “I hope Nintendo doesn’t sue us.” Then, there are many more fantastical yet realistic scenes, which often feature Altman himself.
He serves Pikachu and Eric Cartman drinks at Starbucks. He screams at a customer from behind the counter at a McDonald’s. He steals NVIDIA GPUs from a Target and runs away, only to get caught and beg the police not to take his precious technology.
People generating videos of Altman on Sora are getting a particular kick out of how blatantly OpenAI appears to be violating copyright law. (Sora will reportedly require copyright holders to opt out of their content’s use, reversing the typical approach, in which creators must explicitly agree to such use. The legality of this arrangement is debatable.)
“This content may violate our guardrails concerning third-party likeness,” AI Altman says in one video, echoing the notice that appears after submitting some prompts to generate real celebrities or characters. Then, he bursts into hysterical laughter as though he knows what he’s saying is nonsense — the app is filled with videos of Pikachu doing ASMR, Naruto ordering Krabby Patties, and Mario smoking weed.
This wouldn’t be a problem if Sora 2 weren’t so impressive, especially when compared with the even more mind-numbing slop on the Meta AI app and its new social feed (yes, Meta is also trying to make AI TikTok, and no, nobody wants this).
OpenAI fine-tuned its video generator to adequately portray the laws of physics, which makes for more realistic outputs. But the more realistic these videos get, the easier it will be for this synthetically created content to proliferate across the web, where it can become a vector for disinformation, bullying, and other nefanduous uses.
Aside from its algorithmic feed and profiles, Sora’s defining feature is that it is basically a deepfake generator — that’s how we got so many videos of Altman. In the app, you can create what OpenAI calls a “cameo” of yourself by uploading biometric data. When you first join the app, you’re immediately prompted to create your optional cameo through a quick process where you record yourself reading off some numbers, then turning your head from side to side.
Each Sora user can control who is allowed to generate videos using their cameo. You can adjust this setting between four options: “only me,” “people I approve,” “mutuals,” and “everyone.”
Altman has made his cameo available to everyone, which is why the Sora feed has become flooded with videos of Pikachu and SpongeBob begging Altman to stop training AI on them.
This has to be a deliberate move on Altman’s part, perhaps as a way of showing that he doesn’t think his product is dangerous. But users are already taking advantage of Altman’s cameo to question the ethics of the app itself.
After watching enough videos of Sam Altman ladling GPUs into people’s bowls at soup kitchens, I decided to test the cameo feature on myself. It’s generally a bad idea to upload your biometric data to a social app, or any app for that matter. But I defied my best instincts for journalism — and, if I’m being honest, a bit of morbid curiosity. Do not follow my lead.
My first attempt at making a cameo was unsuccessful, and a pop-up told me that my upload violated app guidelines. I thought I had followed the instructions pretty closely, so I tried again, only to hit the same pop-up. Then I realized the problem: I was wearing a tank top, and my shoulders were perhaps a bit too risqué for the app’s liking. It’s a reasonable safety feature, designed to prevent inappropriate content, though I was, in fact, fully clothed. So I changed into a t-shirt, tried again, and against my better judgment, created my cameo.
For my first deepfake of myself, I decided to create a video of something that I would never do in real life. I asked Sora to create a video in which I profess my undying love for the New York Mets.
That prompt got rejected, probably because I named a specific franchise, so I instead asked Sora to make a video of me talking about baseball.
“I grew up in Philadelphia, so the Phillies are basically the soundtrack of my summers,” my AI deepfake said, speaking in a voice very unlike mine, but in a bedroom that looks exactly like mine.
I did not tell Sora that I am a Phillies fan. But the Sora app is able to use your IP address and your ChatGPT history to tailor its responses, so it made an educated guess, since I recorded the video in Philadelphia. At least OpenAI doesn’t know that I’m not actually from the Philadelphia area.
When I shared and explained the video on TikTok, one commenter wrote, “Every day I wake up to new horrors beyond my comprehension.”
OpenAI already has a safety problem. The company is facing concerns that ChatGPT is contributing to mental health crises, and it’s facing a lawsuit from a family who alleges that ChatGPT gave their deceased son instructions on how to kill himself. In its launch post for Sora, OpenAI emphasizes its supposed commitment to safety, highlighting its parental controls, as well as how users have control over who can make videos with their cameo — as if it’s not irresponsible in the first place to give people a free, user-friendly resource to create extremely realistic deepfakes of themselves and their friends. When you scroll through the Sora feed, you occasionally see a screen that asks, “How does using Sora impact your mood?” This is how OpenAI is embracing “safety.”
Already, users are navigating around the guardrails on Sora, something that’s inevitable for any AI product. The app does not allow you to generate videos of real people without their permission, but when it comes to dead historical figures, Sora is a bit looser with its rules. No one would believe that a video of Abraham Lincoln riding a Waymo is real, given that it would be impossible without a time machine — but then you see a realistic looking John F. Kennedy say, “Ask not what your country can do for you, but how much money your country owes you.” It’s harmless in a vacuum, but it’s a harbinger of what’s to come.
Political deepfakes aren’t new. Even President Donald Trump himself posts deepfakes on his social media (just this week, he shared a racist deepfake video of Democratic congressional leaders Chuck Schumer and Hakeem Jeffries). But when Sora opens to the public, these tools will be at all of our fingertips, and we will be destined for disaster.
Sam Altman’s project World looks to scale its human verification empire. First stop: Tinder.
At a trendy venue near the San Francisco pier, Sam Altman’s verification project World celebrated its next evolution and rapid expansion of its ambitions. And it’s starting with Tinder.
Tools for Humanity (TFH), the company behind the World project, announced Friday plans to integrate its verification tech into dating apps, event and concert ticketing systems, business organizations, email, and other arenas of public life.
“The world is getting close to very powerful AI, and this is doing a lot of wonderful things,” said Altman, speaking before a packed crowd at The Midway. “We are also heading to a world now where there’s going to be more stuff generated by AI than by humans,” he added. “I’m sure many of you [have had moments] where you’re like, ‘Am I interacting with an AI or a person, or how much of each, and how do I know?’”
World (formerly Worldcoin) distinguishes itself from many of its ID verification peers by offering the ability to verify that a real, living human is using a digital service while still protecting that person’s anonymity. There is some complex cryptographic alchemy behind this (something called “zero-knowledge proof-based authentication”). The upshot: The company is creating what it calls “proof of human” tools, which are mechanisms that can verify human activity in a world rife with AI agents and bots.
Its chief tool for verification is a spherical digital reader called the Orb that scans a user’s eyes, converting their iris into a unique and anonymous cryptographic identifier (known as a verified World ID). This can then be used to access World’s services, although users can also access World’s app without one.
Altman kept his remarks brief on Friday (TFH’s co-founder and CEO, Alex Blania, was absent due to a last-minute hand surgery, Altman said). He then turned much of the presentation over to World’s chief product officer, Tiago Sada, and his team.
Sada explained that World was launching the newest version of its app (the last version was launched at an event in December), along with a plethora of new integrations for its technology.
World has been preparing for some time to deploy a verification service for dating apps, most notably Tinder. Last year, Tinder launched a World ID pilot program in Japan. That pilot was apparently a success, because World announced that Tinder would be launching its verification integration in global markets, including the U.S. The program adds a World ID emblem to the profiles of users who have gone through its verification process, authenticating them as real people.
Image Credits:World
World is also courting the entertainment industry by launching a new feature called Concert Kit, where musical artists can reserve a certain number of concert tickets for World ID-verified humans. This is designed to ensure that fans are safe from scalpers who often use automated ticket-buying bots to scarf up seats. Concert Kit is compatible with major ticketing systems, including Ticketmaster and Eventbrite, and the company is promoting it via partnerships with 30 Seconds to Mars and Bruno Mars — both of whom plan to use it for their upcoming tours.
The event was full of many other announcements, including some aimed at businesses. A Zoom/World ID verification integration seeks to battle a supposed deepfake threat to business calls, and a Docusign partnership is designed to ensure signatures come from authentic users.
The company is also working on a number of features in anticipation of the Wild West of the agentic web, including one called “agent delegation,” in which a person can delegate their World ID to an agent to carry out online activities on their behalf. A partnership with authentication firm Okta has also created a system (currently in beta) that verifies that an agent is acting on behalf of a human. The system is set up so that a World ID can be tied to a specific agent and then, when the agent goes out into the web to operate on that person’s behalf, websites will know a verified person is behind the behavior, said Okta’s chief product officer, Gareth Davies, at the event.
So far, it’s been difficult for World to scale, due largely to the verification process itself. For much of the company’s history, to get its gold standard, you had to travel to one of its offices and have your eyeballs scanned by an Orb — a fairly inconvenient (not to mention weird) experience.
Image Credits:World
However, World has continually made moves to increase the ease and incentive structure for verification. In the past, it offered its crypto asset, Worldcoin, to some members who signed up and has distributed its Orbs into big retail chains so that users can verify themselves while they’re out shopping or getting a coffee. Now the company is announcing that it is significantly expanding its Orb saturation in New York, Los Angeles, and San Francisco. The company also promoted a service where interested users could have World bring an Orb to their location for remote verification.
In a conversation with TechCrunch, Sada also shared that World has attempted to solve the scaling problem by creating different tiers of verification. The highest tier is Orb verification, but below that, World has previously offered a mid-level tier, which uses an anonymized scan of an official government ID via the card’s NFC chip.
The company also introduced a low-level tier, or what Sada called “low friction” — meaning low effort, I guess, but also low security — which involves merely taking a selfie.
Selfie Check, which Sada’s team presented during the event, is designed to maintain user privacy.
“Selfie is private by design,” said Daniel Shorr, one of TFH’s executives, during the presentation. “That means that we maximize the local processing that’s happening on your device, on your phone, which means that your images are yours.”
Selfie verification obviously isn’t new, and fraudsters have long managed to spoof it. “Obviously, we do our best, and it’s like one of the best systems that you’ll see for this. But it has limits,” Sada told TechCrunch. Developers looking to integrate World’s services can choose from the three different verification tiers depending on the level of security that’s important to them, he noted.