OnlyFans considering selling majority stake to Architect Capital | TechCrunch

OnlyFans — the massive adult creator network where performers and influencers sell subscription-based content directly to fans — is considering selling a majority stake of its business to investment firm Architect Capital, a source close to the deal told TechCrunch. The deal would value the platform at $5.5 billion.

The source said that of that $5.5 billion, $3.5 billion would be equity and $2 billion would be debt. Under those terms, Architect would assume a 60% stake in the business. The two parties are in exclusivity, meaning that OnlyFans is barred from negotiating with other potential buyers for a set period of time. It’s unclear what the timeline for completing the deal might be. The negotiations were previously reported by The Wall Street Journal.

TechCrunch reached out to Architect Capital for comment.

This isn’t the first time in recent memory that OnlyFans has been in talks to sell off its business. Last year, the New York Post reported that Leonid Radvinsky, the billionaire owner of the site, was looking to “cash out,” and was courting potential buyers. Subsequent reporting showed that the platform’s parent company, Fenix International Ltd., was in talks with a U.S.-based investor group led by the Los Angeles-based investment firm Forest Road Company. It’s unclear what happened to those discussions, although the source told TechCrunch that there had been a number of interested parties since OnlyFans announced its desire to sell a majority stake.

The potential business partner in this particular deal, Architect, launched in 2021 as an asset-based lender — a firm that provides loans secured by company assets — that looks to partner with early-stage startups.

OnlyFans maintains that it’s not a pornography website, despite the fact that a majority of its creators produce adult content. The British company was founded in 2016 by Tim Stokely, who initially served as its CEO. Stokely sold a majority stake in the site’s parent company, Fenix International, to Radvinsky in 2018. Over the years, the platform has faced a variety of legal controversies, including lawsuits accusing it of profiting off of abusive videos.


Snap, YouTube, and TikTok settle suit over harm to students

Snap, YouTube, and TikTok have settled the first lawsuit of its kind, which alleged that social media addiction has cost public schools massive amounts of money, according to Bloomberg. The suit, filed by the Breathitt County School District in Kentucky, claims that social media has disrupted learning and created a mental health crisis, straining budgets. The terms of the settlement have not yet been revealed, and Meta is still facing a trial in the same suit, which is viewed as a bellwether for over 1,000 similar lawsuits across the country.

This follows an earlier case, settled by Snap and TikTok, in which a 19-year-old plaintiff claimed significant personal injury due to addictive social media apps. Google and Meta did not agree to a settlement in that suit, and it eventually went to trial, where a jury awarded the plaintiff $6 million. Meta also recently lost a suit brought by New Mexico’s Attorney General, to the tune of $375 million.

Beyond monetary awards, many, including New Mexico, are pushing for significant changes to social media apps to limit their harm to minors. And this is just the start of what’s shaping up to be a busy year for social media lawsuits. According to Bloomberg, lawyers representing school districts said their “focus remains on pursuing justice for the remaining 1,200 school districts who have filed cases.”

Research repository ArXiv will ban authors for a year if they let AI do all the work | TechCrunch

ArXiv, a widely used open repository for preprint research, is doing more to crack down on the careless use of large language models in scientific papers.

Although papers are posted to the site before they are peer-reviewed, arXiv (pronounced “archive”) has become one of the main ways that research circulates in fields like computer science and math, and the site itself has become a source of data on trends in scientific research.

ArXiv has already taken steps to combat a growing number of low-quality, AI-generated papers, for example by requiring first-time posters to get an endorsement from an established author. And after being hosted by Cornell for more than 20 years, the organization is becoming an independent nonprofit, which should allow it to raise more money to address issues like AI slop.

In its latest move, Thomas Dietterich — the chair of arXiv’s computer science section — posted Thursday that “if a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can’t trust anything in the paper.” 

That incontrovertible evidence could include things like “hallucinated references” and comments to or from the LLM, Dietterich said. If such evidence is found, a paper’s authors will face “a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted by a reputable peer-reviewed venue.”

Note that this isn’t an outright prohibition on using LLMs, but rather an insistence that, as Dietterich put it, authors take “full responsibility” for the content, “irrespective of how the contents are generated.” So if researchers copy-paste “inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content” directly from an LLM, then they’re still responsible for it. 

Dietterich told 404 Media that this will be a “one-strike” rule, but moderators must flag the issue and section chairs must confirm the evidence before imposing the penalty. Authors will also be able to appeal the decision.

Recent peer-reviewed research has found that fabricated citations are on the rise in biomedical research, likely due to LLMs — though to be fair, scientists aren’t the only ones getting caught using citations that were made up by AI.

