Two drivers involved in fatal crashes in 2024 while using Ford’s BlueCruise hands-free driving system were likely distracted in the moments before impact, according to new information released Wednesday by the National Transportation Safety Board (NTSB).
The safety board released documents for each crash and announced it will hold a public hearing on March 31 in Washington, D.C., where it will discuss the findings and likely issue recommendations to Ford. The NTSB is an independent federal agency that investigates transportation accidents but doesn’t regulate the industry. The agency is expected to release a final report in the weeks following the March 31 hearing.
The crashes not only triggered an investigation by the NTSB, but also one from the National Highway Traffic Safety Administration (NHTSA). NHTSA, which is a safety regulator, said in early 2025 it had determined BlueCruise has limitations in the “detection of stationary vehicles in certain conditions” and upgraded the probe; the regulator sent Ford an exhaustive list of questions as part of that probe in June 2025, which the company answered in August. The investigation is ongoing.
Ford has maintained through all this that BlueCruise is a “convenience feature” and that drivers must always be ready to take control of the vehicle. It also warns drivers that BlueCruise is “not a crash warning or avoidance system.” Buyers of new Ford vehicles can purchase BlueCruise for a one-time fee of $2,495 or a $495 annual subscription, according to the company.
That said, the NTSB’s investigation — and the hearing later this month — will likely put more of a spotlight on how companies like Ford communicate what purpose these driver assistance systems are supposed to serve and how to ensure they’re being used properly.
Distracted driving is a theme that has come up in investigations into other popular driver-assistance systems, like Tesla’s now-retired Autopilot and its “Full Self-Driving (Supervised)” software. The NTSB’s prior investigation into a 2018 Autopilot-related death made particular note of distracted driving.
“In this crash we saw an over-reliance on technology, we saw distraction, we saw a lack of policy prohibiting cell phone use while driving, and we saw infrastructure failures, which, when combined, led to this tragic loss,” NTSB chairman Robert Sumwalt said at the time in reference to the 2018 crash.
The first crash
The BlueCruise crashes took place in early 2024. The first one occurred in February that year in San Antonio, Texas. The driver of a 2022 Ford Mustang Mach-E was traveling in the center lane of Interstate 10 when he crashed into a stationary 1999 Honda CR-V at around 74 miles per hour. The Ford driver was using BlueCruise just before impact, which happened at 9:48 p.m. local time. The Ford driver had minor injuries, while the Honda driver died as a result of injuries sustained during the crash.
New information released by the NTSB on Wednesday shows that the Ford’s camera-based driver monitoring system registered the driver as looking at the main infotainment screen in the five seconds before the crash. The system detected his eyes on the road for only fractions of a second, at about 3.6 seconds before the crash and again at about 1.6 seconds before the crash. He received two visual and auditory alerts to watch the road in the 30 seconds before the crash, but did not brake before impact.
The documents show that the driver told the San Antonio Police Department that he had been using the vehicle’s navigation system to travel to a charging station. One of the reports states that “he may have looked at the center screen console because directions to the charging station were displayed there.”
It’s possible he was nodding off before the crash, but nearly impossible to say for sure, based on the information released Wednesday. Ford’s system captured a still image of the driver two seconds before the crash, which the NTSB says shows him “sitting upright and facing forward, with his head resting (or nearly resting) on the headrest and slightly rotated to the right.” The driver obtained an attorney after the police interviewed him, and the attorney declined to allow him to speak with the NTSB.
The second crash
The second fatal BlueCruise crash happened in March 2024 in Philadelphia. The driver of a 2022 Mach-E was traveling on Interstate 95 at 3:16 a.m. local time when she crashed into a 2012 Hyundai Elantra that was stopped on the left side of the road. The impact pushed the Elantra into a 2006 Toyota Prius that had stopped in front of it.
Those two drivers were friends and had stopped for an unknown reason, and the Prius driver had gotten out of his car and was standing to the left of the Elantra. Both the Elantra and Prius drivers died, while the Mach-E driver sustained minor injuries.
The driver of the Mach-E, a 23-year-old woman named Dimple Patel, was intoxicated at the time, according to the local police. In late 2024 she was charged with DUI homicide. She was traveling at about 72 miles per hour before the impact, despite being in a construction zone with a 45-mile-per-hour limit. Zak Goldstein, a lawyer for Patel, told TechCrunch on Wednesday that the case is still pending and that a trial date has not been set.
The new NTSB documents show that the driver monitoring system in Patel’s car registered her eyes being “on-road” for the full five seconds before the crash. But the photograph taken two seconds before impact appears to show her holding a phone above the steering wheel and almost totally out of view of the driver monitoring system.
Ford did not immediately respond to questions about whether it was aware of this potential shortfall in its driver monitoring system, or whether the company has done anything to mitigate it.
What about automatic emergency braking?
Modern Ford vehicles are equipped with a forward-collision warning (FCW) system and automatic emergency braking (AEB), which are separate from BlueCruise.
In addition to warning that BlueCruise is “not a crash warning or avoidance system,” Ford also warns owners in fine print that FCW and AEB are “driver-assist” features that are “supplemental,” and “do not replace the driver’s attention, judgement, and need to control the vehicle.”
That may be because Ford sees real limitations in the capabilities of the technology that powers these systems — a mix of camera and radar sensors.
The NTSB says in one of the reports about the Texas crash that it held meetings with Ford staff about “AEB response to stationary targets in conditions similar to this crash.”
The Ford employees told the NTSB that, “[b]ased on the functional limitations of the industry’s sensing technologies, coupled with the scenario of vehicle travel speed, nearby vehicle maneuvers & environmental factors, Ford would not expect the current generation of radar-camera fusion AEB systems to detect and classify a collision target with enough confidence for the AEB system to respond.”
Indeed, the NTSB noted in the documents released Wednesday that no vehicle subsystem applied any braking in either of the fatal crashes.