Apple is going high-end with new ‘Ultra’ products next

Fresh off launching the low-cost MacBook Neo, Apple is reportedly preparing at least three new products for its highest-end “Ultra” lineup. According to Bloomberg’s Mark Gurman, the next batch of releases may not all bear the “Ultra” name the way its Watch does, but each will command a price premium over its mainline counterpart.

There’s the oft-rumored foldable iPhone, which is expected to cost around $2,000, and a touchscreen MacBook Pro is supposedly slated for the fall. Those are pretty straightforward plays for the higher end of the market. More interesting are the next-gen AirPods, which are rumored to include cameras to feed visual context to Siri. Since AirPods already use the Pro and Max branding, similar to Apple Silicon, a set of AirPods Ultra could very well be on the docket.

Between the Neo and multiple foldables in the works, it seems that Apple is simultaneously trying to go further up- and down-market.


Snap, YouTube, and TikTok settle suit over harm to students

Snap, YouTube, and TikTok have settled the first lawsuit of its kind, which alleged that social media addiction has cost public schools massive amounts of money, according to Bloomberg. The suit, filed by the Breathitt County School District in Kentucky, claims that social media has disrupted learning and created a mental health crisis, straining budgets. The terms of the settlement have not yet been revealed, and Meta still faces trial in the same suit, which is viewed as a bellwether for over 1,000 similar lawsuits across the country.

This follows an earlier case, settled by Snap and TikTok, in which a 19-year-old plaintiff claimed significant personal injury due to addictive social media apps. Google and Meta did not agree to a settlement in that suit, and it eventually went to trial, where a jury awarded the plaintiff $6 million. Meta also recently lost a suit brought by New Mexico’s Attorney General, to the tune of $375 million.

Beyond monetary awards, many, including New Mexico, are pushing for significant changes to social media apps to limit their harm to minors. And this is just the start of what’s shaping up to be a busy year for social media lawsuits. According to Bloomberg, lawyers representing school districts said their “focus remains on pursuing justice for the remaining 1,200 school districts who have filed cases.”

Research repository ArXiv will ban authors for a year if they let AI do all the work

ArXiv, a widely used open repository for preprint research, is doing more to crack down on the careless use of large language models in scientific papers.

Although papers are posted to the site before they are peer-reviewed, arXiv (pronounced “archive”) has become one of the main ways that research circulates in fields like computer science and math, and the site itself has become a source of data on trends in scientific research.

ArXiv has already taken steps to combat a growing number of low-quality, AI-generated papers, for example by requiring first-time posters to get an endorsement from an established author. And after being hosted by Cornell for more than 20 years, the organization is becoming an independent nonprofit, which should allow it to raise more money to address issues like AI slop.

In its latest move, Thomas Dietterich — the chair of arXiv’s computer science section — posted Thursday that “if a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can’t trust anything in the paper.” 

That incontrovertible evidence could include things like “hallucinated references” and comments to or from the LLM, Dietterich said. If such evidence is found, a paper’s authors will face “a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted by a reputable peer-reviewed venue.”

Note that this isn’t an outright prohibition on using LLMs, but rather an insistence that, as Dietterich put it, authors take “full responsibility” for the content, “irrespective of how the contents are generated.” So if researchers copy-paste “inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content” directly from an LLM, then they’re still responsible for it. 

Dietterich told 404 Media that this will be a “one-strike” rule, but moderators must flag the issue and section chairs must confirm the evidence before imposing the penalty. Authors will also be able to appeal the decision.

Recent peer-reviewed research has found that fabricated citations are on the rise in biomedical research, likely due to LLMs — though to be fair, scientists aren’t the only ones getting caught using citations that were made up by AI.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.

