GM’s Final EV Battery Strategy Copies China’s Playbook: Super Cheap Cells

General Motors has just announced its latest and likely final piece in what now appears to be a three-pronged cell-chemistry strategy to power GM’s lineup of a dozen EVs through the end of the decade and beyond.

GM said today it will build low-cost lithium iron phosphate (LFP) battery cells in Spring Hill, Tennessee, starting in late 2027. Conversion of cell lines to produce that chemistry will begin later this year. The cell plant at the Spring Hill complex is owned and operated by Ultium Cells, GM’s joint-venture battery company with LG Energy Solution. A GM assembly plant in the same complex builds the Cadillac Lyriq and Acura ZDX electric SUVs.

Under Kurt Kelty, GM vice president of battery, propulsion, and sustainability, the company has diversified from its previous strategy of “one cell for all EVs.” Kelty was hired in February 2024 after stints at Tesla and Panasonic, and is widely respected in the industry.

The LFP cells made by Ultium are expected to be used in the updated 2026 Chevrolet Bolt EV, which GM should reveal within two to three months. It will go into production at a Kansas plant before the end of this year. For its first two years, the Bolt will have to use LFP cells imported from another LG plant, potentially one in South Korea. Those imports let GM get inexpensive iron-phosphate batteries onto US roads a full three years before its next cell chemistry, called LMR, which it says costs no more than LFP but has higher energy density.

Still, converting a plant—at an unspecified cost—to build LFP cells suggests they will be used in the lineup for a while.

LMR’s Future Promise

Thus far, all GM EVs after the 2017-2023 Chevrolet Bolt EV have used nickel-manganese-cobalt-aluminum (NMCA) cells. Those hold the most energy in a given volume, but are also priciest due to their nickel and cobalt content. Delays in production of the Ultium modules holding those cells pushed out deliveries of GM’s EV lineup by 12 to 18 months, from late 2022 to early 2024. (GM EV sales have risen steadily for three quarters, suggesting those troubles might be in the past.)

This May, Ultium announced a second cell chemistry, which it calls “lithium manganese-rich,” or LMR. It claims LMR provides one-third greater energy density than the same volume of LFP cells, at a comparable cell cost, and will cut the cost of its largest EV trucks and SUVs. Those vehicles from Cadillac, Chevrolet, and GMC use gargantuan battery packs of 109 to 205 kilowatt-hours.

The first LMR cells will come off a pilot line in 2027; full volume production is slated for 2028 at a plant Ultium hasn’t disclosed. With Spring Hill now set to produce LFP cells, it seems likely LMR cells will come from the other Ultium Cells plant now in production—in Warren, Ohio.

Compact Chemistry

Adding lithium iron phosphate rounds out the suite of chemistries GM is likely to use in its EVs from this year through the early 2030s. That applies, at least, to EVs produced outside China; the various models GM builds in China have long used LFP, the dominant chemistry in that country.

Much of the intellectual property around LFP chemistries is owned by Chinese firms, which has caused trouble for Ford as it tries to add LFP cells for future EV models. A GM spokesperson told WIRED that no intellectual property for the LFP cells it will produce with partner LG Energy Solution is owned by any Chinese entity.



Canvas is down as ShinyHunters threatens to leak schools’ data

The Instructure-owned learning management platform, Canvas, is down after recently confirming a massive data breach that impacted student names, email addresses, ID numbers, and messages. Students attempting to access the system on Thursday saw a message from the hacking group ShinyHunters, which claimed responsibility for the attack:

ShinyHunters has breached Instructure (again). Instead of contacting us to resolve it they ignored us and did some “security patches.” If any of the schools in the affected list are interested in preventing the release of their data, please consult with a cyber advisory firm and contact us privately at TOX to negotiate a settlement. You have till the end of the day by 12 May 2026 before everything is leaked.

The message included a link to a list of schools ShinyHunters claims to have breached through Canvas. The platform’s status page says Canvas, Canvas Beta, and Canvas Test are currently unavailable and that it is investigating the outage.

Instructure said last week that it “deployed patches to enhance system security” following the breach. ShinyHunters — which has claimed responsibility for attacks on Ticketmaster, AT&T, Rockstar Games, ADT, and Vercel — said its data leak site contains 9,000 schools, including data belonging to 275 million students, teachers, and other staff, according to Bleeping Computer.


OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm | TechCrunch
On Thursday OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party if mentions of self-harm are expressed within a conversation. The feature allows an adult ChatGPT user to designate another person as a trusted contact within their account, such as a friend or family member. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user.

OpenAI has faced a wave of lawsuits from the families of people who have committed suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves—or even helped them plan it out.

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers alert the company’s system to suicidal ideation, which then relays the information to a human safety team. The company claims that every time it receives this kind of notification, the incident is reviewed by a human. “We strive to review these safety notifications in under one hour,” the company says.

If OpenAI’s internal team decides that the situation represents a serious safety risk, ChatGPT proceeds to send the trusted contact an alert—either by email, text message, or an in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. It does not include detailed information about what was being discussed, as a means of protecting the user’s privacy, the company says.  

The Trusted Contact feature follows the safeguards the company introduced last September, which gave parents some oversight of their teens’ accounts, including safety notifications designed to alert the parent if OpenAI’s system believes their child is facing a “serious safety risk.” For some time now, ChatGPT has also shown automated prompts encouraging users to seek professional help should a conversation trend toward self-harm.

Crucially, Trusted Contact is optional, and even if the protection is activated on a particular account, any user can have multiple ChatGPT accounts. OpenAI’s parental controls are also optional, presenting a similar limitation.

“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments,” the company wrote in the announcement post. “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress.”
