Apple’s head of UI design is leaving for Meta

Alan Dye, who has led Apple’s UI design team since 2015, is leaving to join Meta as its chief design officer, Bloomberg reports. He’ll join on December 31st; the company is opening a design studio and giving Dye oversight of design for “hardware, software and AI integration for its interfaces,” according to Bloomberg. Dye will report to Meta CTO Andrew Bosworth.

Designer Steve Lemay will replace Dye at Apple, the company confirmed to Bloomberg. “Steve Lemay has played a key role in the design of every major Apple interface since 1999,” CEO Tim Cook told Bloomberg in a statement. “He has always set an extraordinarily high bar for excellence and embodies Apple’s culture of collaboration and creativity.” Following the recent retirement of former COO Jeff Williams, Apple’s design team now reports to Tim Cook.

Apple and Meta didn’t immediately reply to a request for comment.

Dye’s departure is the latest leadership change at Apple, and he is one of many top designers who have left the company in the last several years. Williams officially left Apple in November, and the company announced earlier this week that AI chief John Giannandrea would be stepping down. Bloomberg has also reported that chips lead Johnny Srouji is evaluating his future at the company.

Snap, YouTube, and TikTok settle suit over harm to students

Snap, YouTube, and TikTok have settled the first lawsuit of its kind, alleging that social media addiction has cost public schools massive amounts of money, according to Bloomberg. The suit, filed by the Breathitt County School District in Kentucky, claims that social media has disrupted learning and created a mental health crisis, straining budgets. The terms of the settlement have not been revealed yet, and Meta is still facing a trial in the same suit, which is viewed as a bellwether for over 1,000 similar lawsuits across the country.

This follows an earlier case, settled by Snap and TikTok, in which a 19-year-old plaintiff claimed significant personal injury due to addictive social media apps. Google and Meta did not agree to a settlement in that suit, and it eventually went to trial, where a jury awarded the plaintiff $6 million. Meta also recently lost a suit brought by New Mexico’s Attorney General, to the tune of $375 million.

Beyond monetary awards, many, including New Mexico, are pushing for significant changes to social media apps to limit their harm to minors. And this is just the start of what’s shaping up to be a busy year for social media lawsuits. According to Bloomberg, lawyers representing school districts said their “focus remains on pursuing justice for the remaining 1,200 school districts who have filed cases.”

Research repository ArXiv will ban authors for a year if they let AI do all the work | TechCrunch

ArXiv, a widely used open repository for preprint research, is doing more to crack down on the careless use of large language models in scientific papers.

Although papers are posted to the site before they are peer-reviewed, arXiv (pronounced “archive”) has become one of the main ways that research circulates in fields like computer science and math, and the site itself has become a source of data on trends in scientific research.

ArXiv has already taken steps to combat a growing number of low-quality, AI-generated papers, for example by requiring first-time posters to get an endorsement from an established author. And after being hosted by Cornell for more than 20 years, the organization is becoming an independent nonprofit, which should allow it to raise more money to address issues like AI slop.

In its latest move, Thomas Dietterich — the chair of arXiv’s computer science section — posted Thursday that “if a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can’t trust anything in the paper.” 

That incontrovertible evidence could include things like “hallucinated references” and comments to or from the LLM, Dietterich said. If such evidence is found, a paper’s authors will face “a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted by a reputable peer-reviewed venue.”

Note that this isn’t an outright prohibition on using LLMs, but rather an insistence that, as Dietterich put it, authors take “full responsibility” for the content, “irrespective of how the contents are generated.” So if researchers copy-paste “inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content” directly from an LLM, then they’re still responsible for it. 

Dietterich told 404 Media that this will be a “one-strike” rule, but moderators must flag the issue and section chairs must confirm the evidence before imposing the penalty. Authors will also be able to appeal the decision.

Recent peer-reviewed research has found that fabricated citations are on the rise in biomedical research, likely due to LLMs — though to be fair, scientists aren’t the only ones getting caught using citations that were made up by AI.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
