Vylit, from OnlyFans’ former CEO, is coming — and it allows topless pics

“Really, everyone is a creator,” Amrapali (Ami) Gan, former CEO of OnlyFans, told Mashable in an interview. Think about how every audience member films the concert they’re watching — or how everyone at a restaurant films their food. 

Gan and her co-founder, Kailey Magder, hope to harness our modern creator economy with Vylit, an upcoming 18+ platform where, apparently, anyone can build an audience and monetize their posts, including topless content.


“Vylit is where sharing and earning collide, as thirst traps and everyday moments live side by side,” Vylit’s website claims. “Why? Because you’re hot.”


Gan, who was CEO of the adult platform OnlyFans from December 2021 to July 2023, said that we’re just at the beginning of the creator economy, and it’s going to become more and more normalized for anyone to sell content, subscriptions, and the like based on their personal interests.

“That’s truly the future,” she said.

Vylit launches in beta next month, with a broader launch planned for March or April 2026. But while Vylit will allow bare breasts, don’t expect it to be exactly like OnlyFans.

Topless, but no nudes on Vylit

After Gan left OnlyFans, she started a marketing company called Hoxton Projects. She and Magder worked with other brands there, but both were entrepreneurially minded, Gan said, and they brainstormed what a business of their own could look like.

Left: Ami Gan. Right: Kailey Magder
Credit: Vylit

They had other ideas, like a dog treat company or holistic cleaning products, but between people frequently reaching out to them with ideas for “the next OnlyFans” and their own frustrations with modern social media, the idea for Vylit formed.

“It truly addresses this white space that sits between traditional platforms like Instagram and TikTok, and these adult creator platforms,” Gan said. “And that’s where we see the opportunity to have an 18 and over platform where anyone can monetize content, build a community, and have true freedom of expression.”

That freedom has its limits, though. “As we say, we’re going to free the nipple, but not the rest,” Gan said. Meaning, only the top half will be allowed on Vylit: no genitalia, no explicit content. (Gan led marketing at OnlyFans in August 2021, when the platform tried to ban explicit content; it soon reversed course after a wave of backlash.)

While X and Bluesky allow adult content, Meta platforms (Facebook and Instagram) and TikTok don’t.


The reason for drawing the line there, Gan explained, is to appeal to a mass-market audience. “These days you can turn on HBO or whatever and see all kinds of stuff, so we feel that it’s a lot more acceptable while still keeping that broader appeal to anyone who wants that safe space to share content,” she said. The goal, in other words, is a platform where you won’t open the app first thing in the morning and be confronted with nudes.

18+ in the age of age verification

“To us, the future of social media is very much 18 and over,” Gan said.

It seems that more and more legislatures believe this too, given the growing adoption of age-verification laws, which require personal data like a government ID or a biometric scan to prove you’re of age. That applies to explicit sites, but increasingly, non-explicit sites like Spotify and YouTube are installing age-verification measures as well. Free speech and internet experts told Mashable earlier this year that increased age verification would fundamentally change the internet, including by curtailing minors’ access.

At Vylit, the team had these conversations early on, Gan said. All users will go through an age-verification process in the form of a facial age check. If you want to upload content to Vylit, you’ll have to submit an ID as well.

All content will be reviewed by AI before it’s published on Vylit, including text. (Vylit is partnering with Unitary AI on content moderation.) Everything that gets flagged will then be reviewed by a human, Gan said.
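Neither Vylit nor Unitary AI has published how that pipeline works. As a rough illustration only, here is a minimal sketch of the flow the article describes: an age gate up front, AI review of every post (text included), and human review of anything flagged. Every name in it (`unitary_score`, `human_review`, the threshold) is a hypothetical stand-in, not a real API.

```python
from dataclasses import dataclass

@dataclass
class User:
    passed_face_check: bool   # facial age estimation, required of everyone
    id_verified: bool         # government ID, required only of uploaders

@dataclass
class Post:
    author: User
    text: str
    media: list[str]

FLAG_THRESHOLD = 0.5  # hypothetical cutoff

def unitary_score(post: Post) -> float:
    """Stand-in for the (unpublished) Unitary AI moderation call.
    Returns a risk score in [0, 1]."""
    banned_terms = {"<explicit-term>"}  # placeholder ruleset
    return 1.0 if any(t in post.text for t in banned_terms) else 0.0

def human_review(post: Post) -> bool:
    # Placeholder: in practice this would enqueue the post for a human
    # moderator and publish only on approval.
    return False

def publish(post: Post) -> bool:
    # Uploaders must clear both layers of age verification first.
    if not (post.author.passed_face_check and post.author.id_verified):
        return False
    # Every post, text included, passes through AI review.
    if unitary_score(post) >= FLAG_THRESHOLD:
        # Flagged content is held for a human before it can appear.
        return human_review(post)
    return True
```

The key property of this design is that nothing reaches the feed without at least the AI pass, and nothing flagged reaches it without a human.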

Tools for creators — beyond what OnlyFans has

AI also shapes the tools for creators on the platform. One is image generation, Magder said: Vylit can create an “AI twin,” so creators can upload images of themselves and the platform will generate content featuring that “twin.”

Then there’s AI chat, created in the likeness of the creator. Fans (or “members”) will be able to speak either with the creator themselves or with their AI, and there will be transparency around which one they’re getting. (Meanwhile, on OnlyFans, creators use controversial messaging tools through which fans think they’re speaking to creators when they’re actually chatting with AI bots — or with other humans called “chatters.”)
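Vylit hasn’t said how that transparency will surface in the product. One simple design, sketched below with entirely hypothetical names, is to tag every message with its true sender so the client has to label AI replies rather than passing them off as the creator:

```python
from dataclasses import dataclass
from enum import Enum

class Sender(Enum):
    CREATOR = "creator"        # the human creator themselves
    CREATOR_AI = "creator_ai"  # the AI twin, disclosed to the fan

@dataclass
class Message:
    creator_handle: str
    body: str
    sender: Sender

def render(msg: Message) -> str:
    # The disclosure travels with the message itself, so a client can't
    # silently present AI chat as the human (the OnlyFans "chatters"
    # problem described above).
    label = " (AI)" if msg.sender is Sender.CREATOR_AI else ""
    return f"{msg.creator_handle}{label}: {msg.body}"

print(render(Message("ami", "hey, thanks for subscribing!", Sender.CREATOR_AI)))
# -> ami (AI): hey, thanks for subscribing!
```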

The Vylit team is also developing stronger search and discovery tools so that fans can easily find creators they’re interested in, based not just on aesthetic preferences but on hobbies and interests, Gan said. This should foster community as well as engagement, since fans will have a connection with a creator beyond what they look like.

While OnlyFans didn’t come up, that platform is known for its lack of a search function, so much so that users have taken to building their own.

How Vylit is different from other creator platforms

Gan and Magder built these features, and Vylit as a whole, with creators in mind. Creator jobs grew 7x in recent years, according to a study released earlier this year, and Goldman Sachs has estimated that the creator economy could hit $480 billion by 2027.

But with platforms like FanHouse — and let’s face it, OnlyFans — already out there, where does Vylit fit in?

Gan said that Vylit is a platform made for everyone, whether you want to monetize content or build a community. She claimed that users will be able to build a community on Vylit itself, whereas other platforms typically cater to those who arrive with an existing audience.

Magder concurred: “These existing platforms are really focused on the top one percent of creators, and expecting those creators to do marketing in order to build their following.” That marketing has to happen on other platforms, she noted, whereas on Vylit you can join and build an audience from within the platform itself.

“We really think about what’s stopping people from becoming creators, and it’s like, well, they don’t feel like they have a big enough following to monetize,” she continued. So, they’re building Vylit to allow creators of any size to thrive.

And there will be monetization tools: Vylit creators will be able to offer subscriptions, pay-per-view content, and tips. Magder said the team thought about creator pain points and how to reverse-engineer the platform to support everyday creators, whether they’re small and just starting out or established creators who have been in the space a long time.

This month, though, most creators will have to watch Vylit develop from afar, as it’s invite-only.


This Beanie Is Designed to Read Your Thoughts

Speech-to-text capability is now baked into all modern computers. But what if you didn’t have to dictate to your computer? What if you could type just by thinking?

Silicon Valley startup Sabi is emerging from stealth with that goal. The company is developing a brain wearable that decodes a person’s internal speech into words on a computer screen. CEO Rahul Chhabra says its first product, a brain-reading beanie, will be available by the end of the year. The company is also designing a baseball cap version.

The technology is known as a brain-computer interface, or BCI, a device that provides a direct communication pathway between the brain and an external device. While many companies such as Elon Musk’s Neuralink are developing surgically implanted BCIs for people with severe motor disabilities, Sabi’s device could allow anyone to become a cyborg.

It’s not exactly Musk’s vision of the future, which involves implanted brain chips to allow humans to merge with AI. But venture capitalist Vinod Khosla, who was an early investor in OpenAI, says a noninvasive, wearable device is the only path to getting lots of people to use BCI technology.

“The biggest and baddest application of BCI is if you can talk to your computer by thinking about it,” says Khosla, founder of Khosla Ventures, one of Sabi’s investors. “If you’re going to have a billion people use BCI for access to their computers every day, it can’t be invasive.”

Sabi’s brain-reading hat relies on EEG, or electroencephalography, which uses metal disks placed on the scalp to record the brain’s electrical activity. Decoding imagined speech from EEG is already possible, but it’s currently limited to small sets of words or commands rather than continuous, natural speech.
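To make that limitation concrete: a typical noninvasive pipeline (not Sabi’s, whose approach is unpublished) band-pass filters the EEG, reduces each trial to simple per-channel features such as band power, and trains an ordinary classifier over a tiny fixed vocabulary. A minimal sketch on synthetic data, assuming standard scipy/scikit-learn tooling:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 256                       # sampling rate (Hz), typical for consumer EEG
VOCAB = ["yes", "no", "stop"]  # tiny command set, reflecting the current limit

def bandpass(x, lo=1.0, hi=40.0, fs=FS):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def features(trial):
    # trial: (channels, samples) -> log band power per channel.
    filtered = bandpass(trial)
    return np.log(np.mean(filtered ** 2, axis=-1) + 1e-12)

# Synthetic stand-in for recorded trials: 90 trials, 8 channels, 2 s each.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(len(VOCAB)), 30)
trials = rng.standard_normal((90, 8, 2 * FS))
trials *= (1 + 0.3 * labels)[:, None, None]  # weak class-dependent power shift

X = np.stack([features(t) for t in trials])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(VOCAB[clf.predict(X[:1])[0]])  # -> "yes" on this toy data
```

Scaling this from three commands to continuous, natural speech is the open problem; a feature space and classifier this simple just don’t carry enough information.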

A very small chip shown on the pad of a finger to illustrate its tiny scale

Photograph: Courtesy of Sabi

The drawback of a wearable system is that the sensors have to listen to the brain through a layer of skin and bone, which dampens neural signals. Surgically implanted devices pick up much stronger signals because they sit so close to neurons. Sabi thinks the way to boost accuracy with a wearable is by massively scaling up the number of sensors in its device. Most EEG devices have a dozen to a few hundred sensors. Sabi’s cap will have anywhere from 70,000 to 100,000 miniature sensors.
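Sabi hasn’t published how its dense array compensates for the skull’s attenuation, but the basic statistics behind “more sensors” are simple: averaging N sensors whose noise is independent improves the power signal-to-noise ratio roughly N-fold (amplitude SNR by √N), so 70,000 idealized sensors would buy a roughly 265x amplitude improvement over a single electrode. A toy demonstration of that idealized assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 8 * np.pi, 1000))  # shared underlying "neural" signal

def snr_after_averaging(n_sensors: int, noise_scale: float = 10.0) -> float:
    # Each sensor sees the same attenuated signal plus its own noise.
    noise = noise_scale * rng.standard_normal((n_sensors, signal.size))
    averaged = (signal + noise).mean(axis=0)
    residual = averaged - signal
    return signal.var() / residual.var()

for n in (1, 100, 10_000):
    print(f"{n:>6} sensors -> power SNR ~ {snr_after_averaging(n):.3f}")
# Power SNR grows ~linearly with n; amplitude SNR as sqrt(n). Real EEG
# noise is correlated across nearby electrodes, so actual gains are smaller.
```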

“Given that high-density sensing, it pinpoints exactly what and where neural activity is happening. We use that information to get much more reliable data to decode what a person is thinking,” Chhabra says.

The company is aiming for an initial typing speed of 30 or so words per minute. That’s slower than most people type, but he says the speed will improve as users spend more time with the cap.


Val Kilmer AI deepfake in ‘As Deep as the Grave’ trailer sparks outrage

The first trailer for As Deep as the Grave, a film featuring an AI deepfake of Val Kilmer, has just been released. The internet has responded with overwhelming disgust.

A widely recognised actor known for roles in films such as Top Gun, Batman Forever, and Kiss Kiss Bang Bang, Kilmer died of pneumonia last April at the age of 65. The upcoming film As Deep as the Grave has used generative AI to create a digital puppet in Kilmer’s likeness, which portrays a character who appears in “a significant part” of the historical film.

As Deep as the Grave follows married archaeologists Ann Axtell Morris (Abigail Lawrie) and Earl H. Morris (Tom Felton), who conducted fieldwork in the U.S. southwest during the 1920s. Kilmer’s AI-generated likeness will be used to depict Father Fintan, a Catholic priest who is also a Native American spiritualist. The film also features Abigail Breslin, Wes Studi, and Finn Jones.

Though Kilmer was cast in As Deep as the Grave before his death, production delays and his declining health meant he never shot any scenes. Kilmer had previously given a tech-assisted performance in Top Gun: Maverick, which digitally altered his real voice, and he worked with the UK company Sonantic to create an AI speaking voice based on his old recordings. As Deep as the Grave, however, marks the first time his likeness and voice have been completely AI-generated in a film.

“Very fitting that this trailer includes a scene where a corpse is unceremoniously yanked out of the ground,” read one of the top comments on As Deep as the Grave‘s trailer at time of writing.

CGI likenesses of deceased actors have been used in feature films before. In 2016, Rogue One: A Star Wars Story gained attention for using CGI and motion capture to resurrect Peter Cushing and to portray a younger Carrie Fisher for a few minutes of the film. In 2015, Furious 7 used similar techniques to insert Paul Walker into the remainder of the film after he died mid-shoot. Though Furious 7 largely received a pass due to the circumstances, Rogue One drew criticism over the ethics of its CGI Cushing. Using generative AI to create a performance entirely from scratch goes a step further still, removing actors from the process altogether.

Writer and director Coerte Voorhees told Variety that he chose to use AI rather than recast the role due to budget constraints, and that Kilmer’s children gave the project their blessing. Even so, online commenters have labelled it disgusting and disrespectful, not only for digitally reanimating Kilmer but also for the damaging precedent As Deep as the Grave‘s use of AI could set for the film industry as a whole.
