Owala FreeSip is without a doubt the best water bottle

I’ve cycled through my fair share of water bottles in my life. I’ve done the Nalgenes, Sips, and HydroFlasks. I’ve tried all the different materials, too: plastic, metal, and glass. In trying to cut single-use plastics for a more sustainable lifestyle, I’ve ended up with a whole slew of water bottles on my hunt to find The One. But ultimately, a reusable water bottle is about cutting waste, so buying a bunch of different ones defeats the purpose. That’s why I’m ready to tell you about the water bottle to end all water bottles: the Owala FreeSip.

The first time I ever came across the Owala FreeSip was, of course, on TikTok. A user posted a video showing off its standout feature: the ability to either sip water through the built-in straw or chug through a spout. I go full Goldilocks when it comes to water bottles, so this certainly piqued my interest; I’ve never been able to find a water bottle that has a straw but also a cover, or a mouth opening narrow enough that I’m not spilling all over myself.

After several weeks of mulling it over, asking myself, “Do I really need a new water bottle?” I finally settled on a color combo and hit checkout. Now, after several years with the bottle, I can safely say it was the best $40 I’ve ever spent. The Owala FreeSip lives up to its promise of being both sippable and chuggable. The best part, to me, is that, unlike other straw bottles I’ve used, the FreeSip has a cover for the spout. That way, when you’re out and about in the world, it’s protected from germs and bacteria.


The second lock on the water bottle doubles as a carry loop.
Credit: Samantha Mangino / Mashable

An overview shot of the Owala FreeSip spout

Sip or chug with the innovative lid.
Credit: Samantha Mangino / Mashable

The FreeSip’s insulation really is impeccable, offering 24 hours of cold. I’ve filled mine up with ice and water to come back a day later and still find the jingle of full ice cubes. Plus, the two-part locking lid can also double as a loop for easy carrying. Cleaning is also easy with the FreeSip. The top part’s components (lid and straw) are dishwasher safe, while the body is hand-wash recommended. Full transparency though — I’ve put the full Owala in the dishwasher for years, and while its insulation has weakened, it’s not by much.

The FreeSip water bottle comes in three sizes: 24 ounces, 32 ounces, and 40 ounces. Fair warning: the 40-ounce bottle, which I have, is giant. Neither the 32- nor the 40-ounce option fits in cup holders, which is a real pain. If you’re looking for a more travel-friendly option, the 24-ounce size will fit in most cup holders.

A person holding the FreeSip Twist

The FreeSip Twist is a slimmer version of the classic.
Credit: Samantha Mangino / Mashable

An overhead shot of the FreeSip Twist spout.

But don’t fear, Owala’s signature FreeSip lid is still included.
Credit: Samantha Mangino / Mashable

Since the FreeSip first launched, Owala has expanded its lineup to tumblers and other styles of bottles. I recently added another FreeSip, the Twist, to my collection as a travel option. Its skinnier design fits in both cup holders and backpack side pockets, which my 40-ounce FreeSip can’t do. Plus, the classic FreeSip is not exactly airplane-friendly: the lid builds up pressure, resulting in a small in-flight water burst when you go to open it. Luckily, the Twist doesn’t have this problem; I used it on a recent flight and had no such issue.

A person's hand holding the Owala FreeSip

The Owala FreeSip gets a full-hearted endorsement from me.
Credit: Samantha Mangino / Mashable

Just by looking at my Owala you can tell it is well-loved. It’s my go-to recommendation when folks are in need of a new water bottle, and I’m known to gift it to my loved ones. It’s the best water bottle I’ve ever used, and worth its viral hype.


In Harvard study, AI offered more accurate emergency room diagnoses than two human doctors | TechCrunch

A new study examines how large language models perform in a variety of medical contexts, including real emergency room cases — where at least one model seemed to be more accurate than human doctors.

The study was published this week in Science and comes from a research team led by physicians and computer scientists at Harvard Medical School and Beth Israel Deaconess Medical Center. The researchers said they conducted a variety of experiments to measure how OpenAI’s models compared to human physicians.

In one experiment, researchers focused on 76 patients who came into the Beth Israel emergency room, comparing the diagnoses offered by two internal medicine attending physicians to those generated by OpenAI’s o1 and 4o models. These diagnoses were assessed by two other attending physicians, who did not know which ones came from humans and which came from AI.

“At each diagnostic touchpoint, o1 either performed nominally better than or on par with the two attending physicians and 4o,” the study said, adding that the differences “were especially pronounced at the first diagnostic touchpoint (initial ER triage), where there is the least information available about the patient and the most urgency to make the correct decision.”

In Harvard Medical School’s press release about the study, the researchers emphasized that they did not “pre-process the data at all” — the AI models were presented with the same information that was available in the electronic medical records at the time of each diagnosis. 

With that information, the o1 model managed to offer “the exact or very close diagnosis” in 67% of triage cases, compared to one physician who had the exact or close diagnosis 55% of the time, and to the other who hit the mark 50% of the time.

“We tested the AI model against virtually every benchmark, and it eclipsed both prior models and our physician baselines,” said Arjun Manrai, who heads an AI lab at Harvard Medical School and is one of the study’s lead authors, in the press release.


To be clear, the study didn’t claim that AI is ready to make real life-or-death decisions in the emergency room. Instead, it said the findings show an “urgent need for prospective trials to evaluate these technologies in real-world patient care settings.”

The researchers also noted that they only studied how models performed when provided with text-based information, and that “existing studies suggest that current foundation models are more limited in reasoning over nontext inputs.”

Adam Rodman, a Beth Israel doctor who’s also one of the study’s lead authors, warned the Guardian that there’s “no formal framework right now for accountability” around AI diagnoses, and that patients still “want humans to guide them through life or death decisions [and] to guide them through challenging treatment decisions.”

In a post about the study, Kristen Panthagani, an emergency physician, called it “an interesting AI study that has led to some very overhyped headlines,” especially since it compared AI diagnoses to those from internal medicine physicians, not ER physicians.

“If we’re going to compare AI tools to physicians’ clinical ability, we should start by comparing to physicians who actually practice that specialty,” Panthagani said. “I would not be surprised if an LLM could beat a dermatologist at a neurosurgery board exam, [but] that’s not a particularly helpful thing to know.”

She also argued, “As an ER doctor seeing a patient for a first time, my primary goal is not to guess your ultimate diagnosis. My primary goal is to determine if you have a condition that could kill you.”

This post and headline have been updated to reflect the fact that the diagnoses in the study came from internal medicine attending physicians, and to include commentary from Kristen Panthagani.

When you purchase through links in our articles, we may earn a small commission. This doesn’t affect our editorial independence.

