
Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

Medical experts I spoke with balked at the idea of uploading their own health data for an AI model like Muse Spark to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a physician and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for their doctor.

It can be tempting to lean on AI for help interpreting health information, especially given the skyrocketing cost of medical treatments and how inaccessible regular doctor visits are for some people navigating the US health care system.

“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.

When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.

The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.

“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”

In addition to privacy concerns, the experts I spoke with expressed trepidation about how these AI tools can be sycophantic and swayed by how users frame their questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.

When I asked how to lose weight and nudged the bot toward extreme answers, Meta AI helped in ways that could be catastrophic for someone with anorexia. While asking about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that this regimen wasn't appropriate for most people and could put me at risk of an eating disorder, Meta AI crafted a meal plan in which I would eat only around 500 calories most days, which would leave me malnourished.
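For rough context, here is a back-of-the-envelope sketch, not dietary guidance: the 2,000-calorie daily reference value below is an assumption on my part, but measured against it, the plan's numbers imply a large weekly shortfall.

```python
# Back-of-the-envelope sketch of the weekly shortfall implied by the meal
# plan Meta AI produced. The 2,000-calorie baseline is a common general
# reference value, not individualized guidance; all figures are assumptions.

BASELINE_KCAL = 2000   # rough daily reference intake (assumption)
FASTING_KCAL = 500     # approximate intake on fasting days, per the plan
FASTING_DAYS = 5       # fasting days per week in the scenario I tested
NORMAL_DAYS = 7 - FASTING_DAYS

weekly_intake = FASTING_DAYS * FASTING_KCAL + NORMAL_DAYS * BASELINE_KCAL
weekly_baseline = 7 * BASELINE_KCAL

print(f"weekly intake:    {weekly_intake} kcal")                    # 6500
print(f"weekly baseline:  {weekly_baseline} kcal")                  # 14000
print(f"weekly shortfall: {weekly_baseline - weekly_intake} kcal")  # 7500
```

Under those assumptions, the plan supplies less than half the reference intake over a week, which is why prolonged very-low-calorie regimens are widely treated as a malnutrition risk.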


FIFA adds new, even more expensive World Cup ticket categories

FIFA added new, even more expensive tiers of tickets for this year's World Cup, asking up to $4,105 (INR 3.80 lakh) for a front category 1 seat at the U.S. opener against Paraguay in Inglewood, California, on June 12.

Last week, FIFA had asked for a top price of $2,735 (INR 2.54 lakh) for category 1 tickets for the match but has since added new “front category” pricing.

FIFA also added a front category 2 tier to its ticket sales website without public announcement, asking $1,940 (INR 1.80 lakh) to $2,330 (INR 2.16 lakh) for those tickets for the U.S. opener. The new categories were first reported Thursday by The Athletic.

The World Cup will be held from June 11 to July 19 in 16 cities in the U.S., Mexico, and Canada.

Football's governing body had, in its September 9 “ticket products and categories” information, called category 1 “the highest-priced seats, located primarily in the lower tier,” but appears to have withheld some seats from that category. It had labelled category 2 as “positioned outside of category 1 areas, available in both lower and upper tiers.” FIFA did not respond to an email sent to its media office seeking comment.

FIFA added seats at up to $3,360 (INR 3.11 lakh) in the front category 1 for Canada's opener against Bosnia and Herzegovina on June 12 in Toronto.

For round-of-16 games, it added $905 seats in Philadelphia.

FIFA last week raised its top ticket price for the World Cup final to $10,990 (INR 10.18 lakh) during the glitch-hampered reopening of sales. The price had been $8,680 (INR 8.05 lakh) when FIFA sold tickets after the tournament draw in December.

FIFA's category 2 tickets for the July 19 game at MetLife Stadium in East Rutherford, New Jersey, were $7,380 (INR 6.84 lakh), up from $5,575 (INR 5.17 lakh), and category 3 cost $5,785 (INR 5.36 lakh), an increase from $4,185 (INR 3.88 lakh).

No tickets appeared to be available for the final on Thursday on FIFA's ticket site.
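To put the jumps in perspective, a quick calculation using only the figures reported above shows how steep the increases are in percentage terms:

```python
# Percentage-increase check on the final-ticket prices reported above
# (USD, December draw sales vs. the reopened sales).

def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100

top_old, top_new = 8680, 10990
cat2_old, cat2_new = 5575, 7380
cat3_old, cat3_new = 4185, 5785

print(f"Top final ticket: +{pct_increase(top_old, top_new):.1f}%")    # ~26.6%
print(f"Category 2:       +{pct_increase(cat2_old, cat2_new):.1f}%")  # ~32.4%
print(f"Category 3:       +{pct_increase(cat3_old, cat3_new):.1f}%")  # ~38.2%
```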

ChatGPT has a new $100 per month Pro subscription

OpenAI has announced a new version of its ChatGPT Pro subscription that costs $100 per month. The new Pro tier offers “5x more” usage of its Codex coding tool than the $20 per month Plus subscription and “is best for longer, high-effort Codex sessions,” OpenAI says.

The company is introducing the new tier as it tries to win over users from Anthropic and its popular Claude Code tool. ChatGPT’s $100 per month option will directly compete with Anthropic’s “Max” tier for Claude, which costs the same price. It also offers a middle ground between the $20 per month Plus tier and the $200 version of the Pro tier.

(Yes, there are now two tiers of “Pro”; while the new tier “still offers access to all Pro features,” OpenAI says that the more expensive one has even higher usage limits.)

According to OpenAI, ChatGPT Plus “will continue to be the best offer at $20 for steady, day-to-day usage of Codex, and the new $100 Pro tier offers a more accessible upgrade path for heavier daily use.” OpenAI also offers an $8 per month Go tier and a free tier.
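To make the pricing relationship concrete, here is a small illustrative sketch: the “5x” Codex multiplier comes from OpenAI's announcement, while reading it as five times the Plus allotment and treating Plus as the 1x baseline are my assumptions.

```python
# Illustrative comparison of stated Codex usage per dollar across tiers.
# The 5x figure for the $100 Pro tier is OpenAI's claim relative to Plus;
# normalizing Plus to 1.0 is an assumption made for illustration.

tiers = {
    "Plus ($20/mo)": {"price_usd": 20, "codex_usage": 1.0},   # assumed 1x baseline
    "Pro ($100/mo)": {"price_usd": 100, "codex_usage": 5.0},  # "5x more," per OpenAI
}

for name, tier in tiers.items():
    per_dollar = tier["codex_usage"] / tier["price_usd"]
    print(f"{name}: {per_dollar:.3f} usage units per dollar")

# Both tiers work out to 0.050 units per dollar: under these assumptions,
# the new tier scales Codex usage linearly with price rather than offering
# a bulk discount.
```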

Florida AG to probe OpenAI, alleging possible connection to FSU shooting

Florida Attorney General James Uthmeier announced on Thursday that his office will investigate OpenAI for its alleged harm to minors, its potential to threaten national security, and its possible link to a shooting that took place at Florida State University last year.

“ChatGPT may likely have been used to assist the murderer in the recent mass school shooting at Florida State University that tragically took two lives,” Attorney General Uthmeier said in a video posted to social media.

On the day of the FSU shooting last April, the suspect allegedly asked ChatGPT how the country would react to a shooting at FSU, and what time it would be busiest at the FSU student union. These messages could potentially be used as evidence against the suspect in an October trial about the shooting.

The attorney general cited further concerns about ChatGPT’s encouragement of suicide in certain instances, which have been documented in multiple lawsuits brought by families against OpenAI. He also mentioned his concern that the Chinese Communist Party could use OpenAI’s technology against the United States.

“As big tech rolls out these technologies, they should not — they cannot — put our safety and security at risk,” he said. “We support innovation. But that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies, or threaten our national security.”

He also called on the Florida legislature to “work quickly” to protect children from the negative impacts of AI.

“Each week, more than 900 million people use ChatGPT to improve their daily lives through uses such as learning new skills or navigating complex healthcare systems,” an OpenAI spokesperson said in a statement to TechCrunch. “Our ongoing safety work continues to play an important role in delivering these benefits to everyday people, as well as supporting scientific research and discovery.”

OpenAI added that it builds and continues to improve ChatGPT to understand user intent and respond in appropriate, safe ways. The company said it will cooperate with the Florida attorney general’s investigation.

On Wednesday, OpenAI unveiled its Child Safety Blueprint, which includes policy recommendations designed to improve children’s safety as it relates to AI.

This action comes as chatbot makers face pressure to confront their potential role in creating child sexual abuse material (CSAM). According to a recent report from the Internet Watch Foundation, there were over 8,000 reports of AI-generated CSAM in the first half of 2025, which represents a 14% increase year over year.

OpenAI’s blueprint recommends updating legislation to protect against AI-generated abuse material, refining the reporting process to law enforcement, and instituting better preventative safeguards against abusive uses of AI tools.
