WWDC 2025: What to expect from this year’s conference


WWDC 2025, Apple’s annual developers conference, starts Monday at 10 a.m. PT / 1 p.m. ET. Last year’s event was notable for its focus on AI, and this year there is considerable pressure on the company to build on those promises and to change the narrative after months of largely negative headlines.

As in previous years, the company will focus on software updates and new technologies, including the next version of iOS, which is rumored to have the most significant design changes since the introduction of iOS 7. But iOS 19 (or 26, if other rumors about the new naming system are true) isn’t the only thing the company will announce at WWDC 2025.

Here’s how you can watch the keynote livestream.

iOS is getting the most dramatic design change in over a decade

When Apple introduced a major overhaul to iOS back in 2013 with the launch of iOS 7, the change felt jarring to many users: the prior skeuomorphic design, with its gradients and real-world textures, gave way to a more colorful but flat style that reflected the minimalist taste of Jony Ive, Apple’s then chief design officer.

Now, new reports suggest that an upcoming redesign could provoke a similar level of reaction.

Reports suggest the new design may have elements referencing visionOS, the software powering Apple’s spatial computing headset, the Apple Vision Pro. If true, that means the new OS could feature a transparent interface and more circular app icons that break away from today’s traditional square format.

This visual redesign could be implemented across all of Apple’s ecosystem (including even CarPlay), according to Bloomberg, providing a more seamless experience for consumers moving between their different devices.

iOS will change its naming system

According to Bloomberg, Apple will announce a change in the naming system for iOS at this year’s WWDC. Instead of announcing the next version of iOS as iOS 19, Apple’s operating systems will shift to being named by year. That means we could be set to see the launch of iOS 26 instead, alongside the OSes for other products, including iPadOS 26, macOS 26, watchOS 26, tvOS 26, and visionOS 26.

Apple may keep the AI news light this year

While it might be challenging to top the news related to Apple Intelligence at WWDC 2024, the company is expected to share a few updates on the AI front.

The company has seemingly been caught flat-footed in the AI race, making announcements about AI capabilities that had yet to ship, leading even some Apple pundits to accuse the company of touting vaporware. While Apple has launched several AI tools like Image Playground, Genmoji, Writing Tools, Photos Clean Up, and more, its promise of an improved Siri, personalized to the end user and able to take action across your apps, has been delayed.

Meanwhile, Apple has turned to outside companies like OpenAI to give its iPhone a boost in terms of its AI capabilities. At WWDC, it may announce support for other AI chatbots, as well. With Jony Ive now working with Sam Altman on an AI hardware device, Apple is under pressure to catch up on AI progress.


In addition, reports suggest that Apple’s Health app could soon incorporate AI technology, which could include a health chatbot and generative AI insights that provide personalized health-related suggestions based on user data. Additionally, other apps, such as Messages, may receive enhancements with AI capabilities, including a translation feature and polls that offer AI-generated suggestions, per 9to5Mac.

Apple will likely make the most of a number of smaller OS updates that involve AI, given its underwhelming progress. Reports suggest that these updates could include AI-powered battery management features and an AI-powered Shortcuts app, for instance.

iPhone users may get a dedicated gaming app

Bloomberg confirmed a 9to5Mac report that said Apple is developing a dedicated gaming app that will replace the aging Game Center app. The app could include access to Apple Arcade’s subscription-based game store, plus other gaming features like leaderboards, recommendations, and ways to challenge your friends. It could also integrate with iMessage or FaceTime for remote gaming.


Updates to Mac, Watch, TV, and more

Along with the new design, reports suggest that Apple’s other operating systems will get some polish, too. For instance, macOS may also see the new gaming app and benefit from the new AirPods features. It’s also expected to be named macOS Tahoe, in keeping with Apple’s naming convention that references California landmarks.

Apple TV may get not only a visual overhaul but also changes to its user interface, the new gaming app, and other features.

AirPods to get new features

In addition to Messages getting a translation feature, Bloomberg reported that Apple could also bring a live-translate language feature to its AirPods wireless Bluetooth earbuds, allowing real-time translation during conversations. The iPhone will translate spoken words from another language for the user and will also translate the user’s response back into that language.

A new report from 9to5Mac also suggests that AirPods may get new head gestures to complement today’s ability to either nod or shake your head to respond to incoming calls or messages. Plus, AirPods may get features to auto-pause music after you fall asleep, a way to trigger the camera via Camera Control with a touch, a studio-quality mic mode, and an improved pairing experience in shared AirPods.


Apple Pencil upgrade

According to reports, the Apple Pencil is also receiving a new update, one that will benefit users who wish to write in Arabic script. In an effort to cater to customers in the UAE, Saudi Arabia, and India, Apple is reportedly launching a new virtual calligraphy feature in iPadOS 19. The company may also introduce a bidirectional keyboard so users can switch between Arabic and English on iPhones and iPads.

No hardware announcements?

There haven’t been any rumors regarding new devices, because no hardware is ready for release yet, according to Bloomberg. Although it’s always possible that the company will surprise us with a new Mac Pro announcement, most reports are saying this is highly unlikely at this point.

Some reports indicate that Apple may also announce support for a new input device for its Vision Pro: spatial controllers. The devices would be motion-aware and designed with interaction in a 3D environment in mind, 9to5Mac says. In addition, Vision Pro could get eye-scrolling support, enabling users to scroll through documents on both native and third-party apps.

Bloomberg had reported in November that Apple was expected to announce a smart home tablet in March 2025, featuring a 6-inch touchscreen and voice-activated controls. The device was said to include support for Home Control, Siri, and video calls, but has yet to launch. Following the discovery of a filing for “HomeOS” by PMC’s Parker Ortolani, speculation has arisen that Apple may unveil the software for the device at WWDC.

In Harvard study, AI offered more accurate emergency room diagnoses than two human doctors

A new study examines how large language models perform in a variety of medical contexts, including real emergency room cases — where at least one model seemed to be more accurate than human doctors.

The study was published this week in Science and comes from a research team led by physicians and computer scientists at Harvard Medical School and Beth Israel Deaconess Medical Center. The researchers said they conducted a variety of experiments to measure how OpenAI’s models compared to human physicians.

In one experiment, researchers focused on 76 patients who came into the Beth Israel emergency room, comparing the diagnoses offered by two internal medicine attending physicians to those generated by OpenAI’s o1 and 4o models. These diagnoses were assessed by two other attending physicians, who did not know which ones came from humans and which came from AI.

“At each diagnostic touchpoint, o1 either performed nominally better than or on par with the two attending physicians and 4o,” the study said, adding that the differences “were especially pronounced at the first diagnostic touchpoint (initial ER triage), where there is the least information available about the patient and the most urgency to make the correct decision.”

In Harvard Medical School’s press release about the study, the researchers emphasized that they did not “pre-process the data at all” — the AI models were presented with the same information that was available in the electronic medical records at the time of each diagnosis. 

With that information, the o1 model managed to offer “the exact or very close diagnosis” in 67% of triage cases, compared to one physician who had the exact or close diagnosis 55% of the time, and to the other who hit the mark 50% of the time.

“We tested the AI model against virtually every benchmark, and it eclipsed both prior models and our physician baselines,” said Arjun Manrai, who heads an AI lab at Harvard Medical School and is one of the study’s lead authors, in the press release.


To be clear, the study didn’t claim that AI is ready to make real life-or-death decisions in the emergency room. Instead, it said the findings show an “urgent need for prospective trials to evaluate these technologies in real-world patient care settings.”

The researchers also noted that they only studied how models performed when provided with text-based information, and that “existing studies suggest that current foundation models are more limited in reasoning over nontext inputs.”

Adam Rodman, a Beth Israel doctor who’s also one of the study’s lead authors, warned the Guardian that there’s “no formal framework right now for accountability” around AI diagnoses, and that patients still “want humans to guide them through life or death decisions [and] to guide them through challenging treatment decisions.”

In a post about the study, Kristen Panthagani, an emergency physician, called it “an interesting AI study that has led to some very overhyped headlines,” especially since it was comparing AI diagnoses to those from internal medicine physicians, not ER physicians.

“If we’re going to compare AI tools to physicians’ clinical ability, we should start by comparing to physicians who actually practice that specialty,” Panthagani said. “I would not be surprised if an LLM could beat a dermatologist at a neurosurgery board exam, [but] that’s not a particularly helpful thing to know.”

She also argued, “As an ER doctor seeing a patient for a first time, my primary goal is not to guess your ultimate diagnosis. My primary goal is to determine if you have a condition that could kill you.”

This post and headline have been updated to reflect the fact that the diagnoses in the study came from internal medicine attending physicians, and to include commentary from Kristen Panthagani.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
