Apple AirPods Pro 3 review: I can only say ‘Holy cow!’

I wrote my initial AirPods Pro 3 review in paradise. Physically, I was sitting in a resort in Maui, attending a conference and pounding away at my keyboard between meetings and presentations. When I put in my new AirPods, however, I entered a personal cone of silence. Gone were the conversations around me, the sound of the waves crashing against the shoreline. All I had was my music, my thoughts, and my fingers on the keyboard.

It’s not that the AirPods Pro 3 do anything remarkably innovative compared to other wireless earbuds, which are often less expensive — you have your quality sound, active noise cancellation (ANC), transparency mode, and more. It’s that they exceeded expectations at every turn.

Standing in the middle of a tech conference social mixer, surrounded by the drone of people talking specs, all I had to do was pop in the AirPods, put on some music, and nothing else in the world existed. It’s a phenomenal feeling. So let’s get into it. I’ve been using the AirPods Pro 3 for about two weeks, both in paradise and out, and this is my full review.

Easy connectivity


Credit: Adam Doud / Mashable

After unboxing the AirPods Pro 3, the first thing you’ll need to do is pair them to your iPhone. That involves the complicated process of… opening the AirPods case next to your iPhone. That’s it. That’s the process. Oh, and by the way, once you do that, any device that you’re signed into with your Apple ID will also be able to connect to them — whether it be an iPhone, Mac, or iPad. This is not new; it’s been that way since the first generation of AirPods. But it doesn’t change the fact that it’s lovely.

Once connected, there’s no app to operate the AirPods — the settings are built into your phone’s operating system, but not necessarily in an ideal way. In the Bluetooth settings area, you can adjust hearing modes, hearing protection and assistance (which we’ll discuss later), controls, and a variety of other settings — there are too many to list here, to be frank. What is not there is the equalizer (EQ) for the AirPods, which is actually elsewhere in settings. There’s also no custom EQ, which is annoying.

Maybe the EQ is optional?

While testing, I primarily listened to podcasts — that’s my usual use case — but I also spent a good amount of time listening to music from artists such as Metallica, Scorpions, Evanescence, and more. Then there’s also my go-to, Lindsey Stirling, for examining the entire spectrum of frequencies from the lowest dub-step bass to the highest violin notes. 

The AirPods Pro 3 have a nice, even sound profile, with no particular emphasis on any frequencies. It’s a very flat profile, which is how it should be. You can set different EQs — also in the phone’s system settings, but again, there are no custom profiles, which is not ideal. Fortunately, I enjoy the default flat profile, so I don’t have any personal complaints, but it’s not unreasonable to want to set your own sound profile.

ANC and Transparency are impressive

airpods pro 3 and charging case on desk

Credit: Adam Doud / Mashable

airpods pro 3 with ear tip removed

Credit: Adam Doud / Mashable

As I described above, the ANC on these AirPods Pro 3 is seriously impressive. The purpose of ANC is to reduce outside noise. The key benefit is to hearing health — the lower you can comfortably listen to your music and media, the less likely you are to damage your hearing. But Apple goes further by reducing all sound around you to a near whisper. You can still hear some things, but the volume is reduced to the point that it no longer matters.

ANC used to be great at removing constant noises, but fell short with sudden noises like a person talking or a car horn. That’s still the case with lower-end ANC earbuds, but Apple does a remarkable job at eliminating sudden noises, too. When you put on any music or media, everything around you is simply gone. 

Transparency is what you get when you decide to let noise in from the outside. The AirPods Pro 3 are very good at that, but are also a small step back from the AirPods Pro 2. That’s because transparency mode on the AirPods Pro 3 comes with a hint of sibilance in what you hear — it’s a faint ring of higher frequencies, like talking in a bathroom, or a hissing sound on particular letters. It’s not bad, but it’s noticeable (and likely fixable with a software update down the road). 

Heart rate monitor

iphone clipped to bike handlebars showing Apple Fitness app and heart rate


Credit: Adam Doud / Mashable

Now that I’ve had more than a day on my home turf with the AirPods Pro 3, I’ve been able to take them out on the road to test out the HRM features. And it works very well! The easiest way to test this sensor is to simply take off my Apple Watch and go for a bike ride around my neighborhood, so that’s what I did.

In order for the AirPods to track your heart rate, you need to be actively tracking a workout through the Fitness app. There's nothing to do beyond popping in your AirPods and starting the activity; the Fitness app recognizes that you're wearing AirPods Pro 3 and pulls heart data straight from the sensors.

I tested the heart rate on two different bike rides — both shorter rides because I’m old and fat — and it worked perfectly. I checked the heart rate against another device (which you’ll hear about from Mashable soon), and in every instance, the heart rate from the AirPods Pro 3 was within a beat or two per minute of the other device, so that speaks very well for its accuracy.

How do they work as hearing aids?

My wife would strenuously disagree with what I’m about to tell you, but according to Apple, I do not have much hearing loss. Personally, I agree with my wife’s assessment. A misspent youth of playing too-loud music in too small a space left me with lasting tinnitus problems. Put me in a noisy room, and I will smile and nod without having heard a word a person three feet away is saying. Even Nuance glasses didn’t help. Maybe AirPods will?

But according to Apple, I only have 8 dB of hearing loss in one ear and 12 dB in the other. I tried using Apple’s hearing aid feature, and I basically just got transparency mode, so there wasn’t much help there. I tried it in a noisy bar, and I had a similar experience to the Nuance glasses — if it helped, it was only by the barest margin.

What it boils down to is this: I spend a lot of time in noisy environments with people who are seemingly unaffected by the volume of the place. As a traveling journalist, I often find myself in cocktail mixers or bars filled with, at the very least, a cacophony of people talking and chatting away, and my ears simply cannot separate the person I’m trying to talk to from the rest of the room. I was hopeful that the AirPods could help drown that noise out and let me focus on my conversational counterpart, but so far that simply hasn’t been the case.

apple airpods pro 3 next to apple watch


Credit: Celso Bulgatti/CNET

Does Live Translation really work?

As for Live Translation, it seems to work pretty decently. It’s available in English, French, German, Spanish, Portuguese, and UK English (which, by the way, is not the same as American English). Frankly, I’m happy that Apple recognizes that.

I tested this feature by watching a movie dubbed into a different language — which the AirPods ironically would dub back into English, closing the loop. Not being a native speaker of a foreign language, I can’t comment on the accuracy, but I was able to compare the subtitles to what Apple showed, and it was close enough.

What I noticed was a definite one- to two-second lag between what was on the screen and what came through on the AirPods, which made it a little difficult to judge how accurate one was over the other. Still, I think it’s fair to say this could be a real help if you’re traveling in a foreign country that speaks a supported language. That said, having traveled to Germany, Spain, France, and South Korea, I’ve found that speaking English is the ultimate superpower, because a lot of people in foreign countries already speak it.

Are the AirPods Pro 3 worth it?

Overall, the AirPods Pro 3 are a remarkable upgrade, even over the AirPods Pro 2, which were already very good. What absolutely takes my breath away is the noise cancellation. I thought I knew what great ANC was, and it turns out I was not shooting high enough.

When you add up all that these buds bring to the table, including how easy they are to use, the long-term comfort, the excellent battery life, and how easily they connect to an iPhone, this is truly a premium package well worth the $249.99 retail price. They are the very definition of premium, yet they cost surprisingly little compared to other buds on the market. It isn’t hyperbole to say these earbuds are a steal.

In Harvard study, AI offered more accurate emergency room diagnoses than two human doctors | TechCrunch

A new study examines how large language models perform in a variety of medical contexts, including real emergency room cases — where at least one model seemed to be more accurate than human doctors.

The study was published this week in Science and comes from a research team led by physicians and computer scientists at Harvard Medical School and Beth Israel Deaconess Medical Center. The researchers said they conducted a variety of experiments to measure how OpenAI’s models compared to human physicians.

In one experiment, researchers focused on 76 patients who came into the Beth Israel emergency room, comparing the diagnoses offered by two internal medicine attending physicians to those generated by OpenAI’s o1 and 4o models. These diagnoses were assessed by two other attending physicians, who did not know which ones came from humans and which came from AI.

“At each diagnostic touchpoint, o1 either performed nominally better than or on par with the two attending physicians and 4o,” the study said, adding that the differences “were especially pronounced at the first diagnostic touchpoint (initial ER triage), where there is the least information available about the patient and the most urgency to make the correct decision.”

In Harvard Medical School’s press release about the study, the researchers emphasized that they did not “pre-process the data at all” — the AI models were presented with the same information that was available in the electronic medical records at the time of each diagnosis. 

With that information, the o1 model managed to offer “the exact or very close diagnosis” in 67% of triage cases, compared to one physician who had the exact or close diagnosis 55% of the time, and to the other who hit the mark 50% of the time.
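For a rough sense of scale, those percentages can be converted back into approximate case counts. This is a quick sketch, assuming each reported rate applies across all 76 triage cases (the labels "physician A" and "physician B" are mine, not the study's):

```python
# Approximate number of the 76 triage cases each diagnostician
# got "exact or very close," based on the reported rates.
cases = 76
rates = {"o1 model": 0.67, "physician A": 0.55, "physician B": 0.50}

for name, rate in rates.items():
    print(f"{name}: ~{round(cases * rate)} of {cases} cases")
```

That works out to roughly 51, 42, and 38 of the 76 cases, respectively — a gap of about nine cases between the model and the better-performing physician.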

“We tested the AI model against virtually every benchmark, and it eclipsed both prior models and our physician baselines,” said Arjun Manrai, who heads an AI lab at Harvard Medical School and is one of the study’s lead authors, in the press release.


To be clear, the study didn’t claim that AI is ready to make real life-or-death decisions in the emergency room. Instead, it said the findings show an “urgent need for prospective trials to evaluate these technologies in real-world patient care settings.”

The researchers also noted that they only studied how models performed when provided with text-based information, and that “existing studies suggest that current foundation models are more limited in reasoning over nontext inputs.”

Adam Rodman, a Beth Israel doctor who’s also one of the study’s lead authors, warned the Guardian that there’s “no formal framework right now for accountability” around AI diagnoses, and that patients still “want humans to guide them through life or death decisions [and] to guide them through challenging treatment decisions.”

In a post about the study, Kristen Panthagani, an emergency physician, said this is “an interesting AI study that has led to some very overhyped headlines,” especially since it was comparing AI diagnoses to those from internal medicine physicians, not ER physicians.

“If we’re going to compare AI tools to physicians’ clinical ability, we should start by comparing to physicians who actually practice that specialty,” Panthagani said. “I would not be surprised if an LLM could beat a dermatologist at a neurosurgery board exam, [but] that’s not a particularly helpful thing to know.”

She also argued, “As an ER doctor seeing a patient for a first time, my primary goal is not to guess your ultimate diagnosis. My primary goal is to determine if you have a condition that could kill you.”

This post and headline have been updated to reflect the fact that the diagnoses in the study came from internal medicine attending physicians, and to include commentary from Kristen Panthagani.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.
