All the New York Comic-Con TV and movie announcements you need to know

Following San Diego Comic-Con this summer, New York Comic-Con brought even more exciting announcements. Here’s a running list of TV and movie trailer drops and sneak peeks you need to know about.

A Knight of the Seven Kingdoms

A Knight of the Seven Kingdoms is a prequel to Game of Thrones, taking place a century before the events of the original series. Based on novellas by George R.R. Martin, A Knight of the Seven Kingdoms will follow hedge knight Ser Duncan the Tall (Peter Claffey) and his squire, Egg (Dexter Sol Ansell).

How to watch: A Knight of the Seven Kingdoms premieres January 18 on HBO Max.

Marvel’s Wonder Man miniseries teaser

Marvel dropped a teaser for Wonder Man, an eight-episode miniseries. The trailer is pretty meta, with talk of an in-universe reboot of a Wonder Man movie. “I know your question is, ‘Why one more superhero film?'” a reclusive director character says in an interview. “Everyone is tired of superheroes” — saying what many fans watching the teaser must be thinking. According to Entertainment Weekly, the series itself will be pretty meta as well, with the actual hero auditioning to be the movie version of Wonder Man.

“Have you given any thought about casting?” the interviewer asks, with the final shot of the teaser being Wonder Man (Yahya Abdul-Mateen II) staring at his phone, watching.

Marvel’s website states that Wonder Man premieres in December, but Entertainment Weekly reported that the release has shifted to January.

How to watch: Wonder Man premieres on Disney+ in January 2026.

Invincible Season 4 gets a release date

At SDCC in July, creator Robert Kirkman updated fans about Invincible Season 4, revealing that Matthew Rhys will voice Dr. David Anders/Dinosaurus. Kirkman also said that Grand Regent Thragg, ruler of the Viltrum Empire, would debut — and as of NYCC, we know who will voice him: Lee Pace, Variety reported. A full trailer dropped as well, along with a March release date.

How to watch: Invincible Season 4 premieres on Prime Video in March 2026.

Mercy trailer drops

Chris Pratt and Rebecca Ferguson star in Mercy, a new sci-fi movie from Amazon MGM releasing in January. “Mercy” is an advanced AI who acts as judge, jury, and executioner, according to the trailer. (Not totally terrifying or topical at all!) Pratt plays Detective Raven, who helped develop the program and is now on trial for murdering his wife. If he can’t prove his innocence to Mercy (Ferguson) within 90 minutes, the AI will kill him.

How to watch: Mercy will be released in theaters on January 23.

The Vampire Lestat extended first look

The Vampire Lestat, Season 3 of the AMC+ Interview With The Vampire adaptation, takes a bite out of Anne Rice’s second book in her Vampire Chronicles series. While Interview focused on Louis de Pointe du Lac’s (Jacob Anderson) journey from human to vampire, The Vampire Lestat stars his (former) vampiric lover, Lestat de Lioncourt (Sam Reid) — in case the title didn’t make that obvious. The first trailer dropped last year at SDCC, but fans will have to wait until next year for the full season, so AMC+ showed mercy and gave them an extended look at NYCC 2025.

How to watch: The Vampire Lestat premieres in 2026 on AMC+.

Check back for more developments as NYCC continues.

In Harvard study, AI offered more accurate emergency room diagnoses than two human doctors | TechCrunch

A new study examines how large language models perform in a variety of medical contexts, including real emergency room cases — where at least one model seemed to be more accurate than human doctors.

The study was published this week in Science and comes from a research team led by physicians and computer scientists at Harvard Medical School and Beth Israel Deaconess Medical Center. The researchers said they conducted a variety of experiments to measure how OpenAI’s models compared to human physicians.

In one experiment, researchers focused on 76 patients who came into the Beth Israel emergency room, comparing the diagnoses offered by two internal medicine attending physicians to those generated by OpenAI’s o1 and 4o models. These diagnoses were assessed by two other attending physicians, who did not know which ones came from humans and which came from AI.

“At each diagnostic touchpoint, o1 either performed nominally better than or on par with the two attending physicians and 4o,” the study said, adding that the differences “were especially pronounced at the first diagnostic touchpoint (initial ER triage), where there is the least information available about the patient and the most urgency to make the correct decision.”

In Harvard Medical School’s press release about the study, the researchers emphasized that they did not “pre-process the data at all” — the AI models were presented with the same information that was available in the electronic medical records at the time of each diagnosis. 

With that information, the o1 model managed to offer “the exact or very close diagnosis” in 67% of triage cases, compared to one physician who had the exact or close diagnosis 55% of the time, and to the other who hit the mark 50% of the time.

“We tested the AI model against virtually every benchmark, and it eclipsed both prior models and our physician baselines,” said Arjun Manrai, who heads an AI lab at Harvard Medical School and is one of the study’s lead authors, in the press release.

To be clear, the study didn’t claim that AI is ready to make real life-or-death decisions in the emergency room. Instead, it said the findings show an “urgent need for prospective trials to evaluate these technologies in real-world patient care settings.”

The researchers also noted that they only studied how models performed when provided with text-based information, and that “existing studies suggest that current foundation models are more limited in reasoning over nontext inputs.”

Adam Rodman, a Beth Israel doctor who’s also one of the study’s lead authors, warned the Guardian that there’s “no formal framework right now for accountability” around AI diagnoses, and that patients still “want humans to guide them through life or death decisions [and] to guide them through challenging treatment decisions.”

In a post about the study, Kristen Panthagani, an emergency physician, said this is “an interesting AI study that has led to some very overhyped headlines,” especially since it compared AI diagnoses to those from internal medicine physicians, not ER physicians.

“If we’re going to compare AI tools to physicians’ clinical ability, we should start by comparing to physicians who actually practice that specialty,” Panthagani said. “I would not be surprised if an LLM could beat a dermatologist at a neurosurgery board exam, [but] that’s not a particularly helpful thing to know.”

She also argued, “As an ER doctor seeing a patient for the first time, my primary goal is not to guess your ultimate diagnosis. My primary goal is to determine if you have a condition that could kill you.”

This post and headline have been updated to reflect the fact that the diagnoses in the study came from internal medicine attending physicians, and to include commentary from Kristen Panthagani.
