Report: Apple Is Testing Features That Will Put Siri All Over Your iPhone Experience

A new report says some version of the long-awaited, AI-powered Siri will debut on June 8 at Apple’s Worldwide Developers Conference (WWDC), alongside the public unveiling of iOS 27. One possible version detailed in the report suggests that phones with the new Siri features will offer an overall Siri-heavy experience.

It sounds like Apple is considering making this AI-powered Siri ubiquitous, like the Gemini features that show up across Google’s Search, Gmail, and Google Docs products.

The report comes from sources leaking anonymously to Bloomberg’s Mark Gurman, who is famous for his many Apple scoops. Sources have been conveying a story lately about Siri: it’s chronically disappointing, and key features are delayed until at least September. But Gurman’s latest article is so detailed in parts, it gives the distinct impression that he or someone close to him has physically tried out at least one in-development version of Apple’s new AI-powered personal assistant, and its integration into many areas of some version of iOS 27.

This (again: reportedly not finalized) version, Gurman says, has its functions divided across a Siri utility that pops up via the usual prompts, and a more feature-rich dedicated Siri app. The app, Gurman says, will “display prior conversations in either a list or a grid of rounded rectangles with text previews.”

Conversations with the new Siri sound like they’ll be much more like chatbot conversations—laid out in familiar, threaded back-and-forth bubbles that will remind users of the Messages app, though conversations with Siri can also be spoken, Gurman says.

Meanwhile, after triggering Siri the old-fashioned way, one possible design, which (once again) is recounted in vivid detail by Gurman, will place Siri in the tiny Dynamic Island at first (for higher-end iPhones that have one, presumably). After speaking to Siri initially, the user will see “a pill-shaped indicator labeled ‘Searching’” next to “a glowing Siri icon.” Once Siri cooks up your answer, Gurman says the interface will expand downward to display it in a Liquid Glass-styled panel.

But in addition to these two ways of accessing Siri, other entry points are reportedly in testing. There’s apparently a possible “Write with Siri” option that will materialize above the keyboard whenever users input text. Gurman says that built-in apps may also have an “Ask Siri” menu option that will allow users “to send selected content into a new Siri conversation.”

Meanwhile, multiple of Gurman’s sources apparently shared their belief that key personalization features for the new Siri, involving access to users’ data and “awareness” of what’s on the screen, should not be expected until later in the year.


Snap, YouTube, and TikTok settle suit over harm to students

Snap, YouTube, and TikTok have settled the first lawsuit of its kind, alleging that social media addiction has cost public schools massive amounts of money, according to Bloomberg. The suit, filed by the Breathitt County School District in Kentucky, claims that social media has disrupted learning and created a mental health crisis, straining budgets. The terms of the settlement have not been revealed yet, and Meta is still facing a trial in the same suit, which is viewed as a bellwether for over 1,000 similar lawsuits across the country.

This follows an earlier case, settled by Snap and TikTok, in which a 19-year-old plaintiff claimed significant personal injury due to addictive social media apps. Google and Meta did not agree to a settlement in that suit, and it eventually went to trial, where a jury awarded the plaintiff $6 million. Meta also recently lost a suit brought by New Mexico’s Attorney General, to the tune of $375 million.

Beyond monetary awards, many, including New Mexico, are pushing for significant changes to social media apps to limit their harm to minors. And this is just the start of what’s shaping up to be a busy year for social media lawsuits. According to Bloomberg, lawyers representing school districts said their “focus remains on pursuing justice for the remaining 1,200 school districts who have filed cases.”

Research repository ArXiv will ban authors for a year if they let AI do all the work

ArXiv, a widely used open repository for preprint research, is doing more to crack down on the careless use of large language models in scientific papers.

Although papers are posted to the site before they are peer-reviewed, arXiv (pronounced “archive”) has become one of the main ways that research circulates in fields like computer science and math, and the site itself has become a source of data on trends in scientific research.

ArXiv has already taken steps to combat a growing number of low-quality, AI-generated papers, for example by requiring first-time posters to get an endorsement from an established author. And after being hosted by Cornell for more than 20 years, the organization is becoming an independent nonprofit, which should allow it to raise more money to address issues like AI slop.

In its latest move, Thomas Dietterich — the chair of arXiv’s computer science section — posted Thursday that “if a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can’t trust anything in the paper.” 

That incontrovertible evidence could include things like “hallucinated references” and comments to or from the LLM, Dietterich said. If such evidence is found, a paper’s authors will face “a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted by a reputable peer-reviewed venue.”

Note that this isn’t an outright prohibition on using LLMs, but rather an insistence that, as Dietterich put it, authors take “full responsibility” for the content, “irrespective of how the contents are generated.” So if researchers copy-paste “inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content” directly from an LLM, then they’re still responsible for it. 

Dietterich told 404 Media that this will be a “one-strike” rule, but moderators must flag the issue and section chairs must confirm the evidence before imposing the penalty. Authors will also be able to appeal the decision.

Recent peer-reviewed research has found that fabricated citations are on the rise in biomedical research, likely due to LLMs — though to be fair, scientists aren’t the only ones getting caught using citations that were made up by AI.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
