Some Cities in China Are Advertising Exclusive Subsidies for Huawei-Powered Cars

WIRED contacted Huawei to ask about its potential role in the subsidies. Huawei did not comment in time for publication.

One of the earliest subsidies appeared online in March, when the Commerce Bureau of Shenzhen Longgang District—the district where Huawei’s headquarters are located—posted that local car buyers can get up to 4,000 RMB (about $560) for buying a car that runs on Huawei’s driver-assistance system. The subsidies will be given out on a first-come, first-served basis until the total budget of 14,000,000 RMB is exhausted, meaning up to 3,500 Shenzhen residents could benefit from it.
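
That 3,500-person ceiling is simple arithmetic. Below is a minimal back-of-the-envelope sketch in Python; it assumes every buyer claims the full 4,000 RMB, so smaller average payouts would stretch the budget across more buyers.

```python
# Back-of-the-envelope check of the Longgang District subsidy cap.
# Assumption: every buyer claims the maximum 4,000 RMB payout; smaller
# average payouts would let the budget cover more buyers.

TOTAL_BUDGET_RMB = 14_000_000
MAX_SUBSIDY_PER_CAR_RMB = 4_000

max_beneficiaries = TOTAL_BUDGET_RMB // MAX_SUBSIDY_PER_CAR_RMB
print(max_beneficiaries)  # 3500
```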

Starting in May, commerce bureaus in other provinces and municipalities posted many announcements in similar language. In China, these commerce bureaus function as consumer regulators and are in charge of distributing government subsidies, including a massive program launched last year to encourage trading in old electronics and cars to help stimulate the economy. Because the Huawei subsidies are being announced through the commerce bureaus, they are almost indistinguishable from the official government welfare program.

In some cases, as in Henan and Anhui provinces, the subsidies were instead published by provincial auto industry associations. While these are technically private trade groups, the announcements were printed on official-looking letterhead and bore red stamps, lending them a sense of authority.

After American trade restrictions devastated Huawei’s global smartphone business and essentially forced it to exit markets outside of China, the tech giant has been trying to reinvent itself. Along with creating the Harmony operating system for smartphones, smart appliances, and cars, it’s also increasingly working on large language models and autonomous driving technologies amid the AI boom.

The company has famously vowed to never make a car itself—unlike its smartphone peer and competitor Xiaomi—but it has partnered with a slew of Chinese auto companies. Huawei’s autonomous driving technology is particularly appealing to Chinese manufacturers that don’t have the capacity to develop self-driving on their own. It’s “technically brand-agnostic, which is attractive for the brands that are struggling to keep up with progress in the intelligent driving space,” says Tu. “Effectively, if you’re desperate and you can’t keep up, you should partner with Huawei in the China market.”

The subsidies have stirred up controversy in China, as they seem to give certain brands a leg up in what has become a brutally competitive EV landscape. As the domestic market saturates, Chinese EV brands have been forced to slash prices and give consumers free tech upgrades or interest-free financing options to stay afloat.

Earlier this year, Beijing signaled that carmakers should avoid using extreme pricing tactics. “The central government ultimately wants to see stable, profitable companies and not a super fragmented industry where nobody’s making any money,” says Ilaria Mazzocco, a senior fellow at the Center for Strategic and International Studies who has closely studied China’s industrial policy for EVs. “For consumers, this is fantastic right now, but it just isn’t sustainable in the long term.”

Pressure from the central government to avoid fueling price wars may be driving companies to come up with more creative ways to make their cars more affordable. At the same time, Mazzocco says, local governments may view Huawei’s self-driving technology favorably because it fits with another policy goal to develop high-tech manufacturing and self-sufficient AI technologies in China.

Before this year, WIRED could identify only one other similar Huawei car subsidy, from 2022. That year, Shenzhen, Huawei’s hometown, was giving out $1,400 per car to people who bought vehicles equipped with HarmonyOS. Huawei didn’t answer questions from WIRED about whether it was paying for those subsidies either.

Canvas is down as ShinyHunters threatens to leak schools’ data

Canvas, the Instructure-owned learning management platform, is down after the company recently confirmed a massive data breach that exposed student names, email addresses, ID numbers, and messages. Students attempting to access the system on Thursday saw a message from the hacking group ShinyHunters, which claimed responsibility for the attack:

ShinyHunters has breached Instructure (again). Instead of contacting us to resolve it they ignored us and did some “security patches.” If any of the schools in the affected list are interested in preventing the release of their data, please consult with a cyber advisory firm and contact us privately at TOX to negotiate a settlement. You have till the end of the day by 12 May 2026 before everything is leaked.

The message included a link to a list of schools ShinyHunters claims to have breached through Canvas. The platform’s status page says Canvas, Canvas Beta, and Canvas Test are currently unavailable and that the company is investigating the outage.

Instructure said last week that it “deployed patches to enhance system security” following the breach. ShinyHunters — which has claimed responsibility for attacks on Ticketmaster, AT&T, Rockstar Games, ADT, and Vercel — said its data leak site lists 9,000 schools, with data belonging to 275 million students, teachers, and other staff, according to Bleeping Computer.

OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm | TechCrunch

On Thursday, OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party when a user mentions self-harm in a conversation. The feature allows an adult ChatGPT user to designate another person, such as a friend or family member, as a trusted contact within their account. When a conversation appears to turn toward self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends the contact an automated alert encouraging them to check in with the user.

OpenAI has faced a wave of lawsuits from the families of people who have committed suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves—or even helped them plan it out.

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers alert the company’s system to suicidal ideation, and the system then relays the information to a human safety team. The company says that every time it receives this kind of notification, the incident is reviewed by a human. “We strive to review these safety notifications in under one hour,” the company adds.

If OpenAI’s internal team decides that the situation represents a serious safety risk, ChatGPT sends the trusted contact an alert—either by email, text message, or an in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. To protect the user’s privacy, the company says, the alert does not include detailed information about what was discussed.
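
Taken together, the article describes a three-stage pipeline: automated triggers flag a conversation, a human reviews the flag, and only a confirmed serious risk produces a brief, content-free alert. As a rough illustration only, here is a minimal Python sketch of such a flow; every name, type, and stubbed decision below is an assumption drawn from the article’s description, not OpenAI’s actual implementation.

```python
"""Illustrative sketch only: all names and logic here are assumptions
based on the article's description, not OpenAI's implementation."""

from dataclasses import dataclass
from enum import Enum


class Channel(Enum):
    EMAIL = "email"
    SMS = "text message"
    IN_APP = "in-app notification"


@dataclass
class SafetyFlag:
    user_id: str
    trusted_contact: str  # address of the user-designated contact
    channel: Channel      # how that contact should be reached


def human_review_confirms_risk(flag: SafetyFlag) -> bool:
    """Stand-in for the human safety team's triage step.

    Per the article, every automated flag is reviewed by a person,
    with a target turnaround of under one hour. A real system would
    queue the flag for a reviewer; here the decision is stubbed.
    """
    return True  # stubbed outcome for the sake of the example


def send_trusted_contact_alert(flag: SafetyFlag) -> None:
    # The alert is brief and deliberately omits any detail of the
    # conversation, which the article says protects the user's privacy.
    message = (
        "Someone who listed you as a trusted contact may be going "
        "through a difficult moment. Please consider checking in."
    )
    print(f"[{flag.channel.value}] to {flag.trusted_contact}: {message}")


def handle_flagged_conversation(flag: SafetyFlag) -> None:
    # Automated triggers raise the flag; a human decides whether
    # to escalate to the trusted contact.
    if human_review_confirms_risk(flag):
        send_trusted_contact_alert(flag)


if __name__ == "__main__":
    handle_flagged_conversation(
        SafetyFlag("user-123", "friend@example.com", Channel.EMAIL)
    )
```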

The Trusted Contact feature follows safeguards the company introduced last September, which gave parents some oversight of their teens’ accounts, including safety notifications designed to alert a parent if OpenAI’s system believes their child is facing a “serious safety risk.” For some time now, ChatGPT has also shown automated prompts encouraging users to seek professional help should a conversation trend toward self-harm.

Crucially, Trusted Contact is optional, and even if the protection is activated on a particular account, a user can simply switch to another ChatGPT account. OpenAI’s parental controls are also optional, presenting a similar limitation.

“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments,” the company wrote in the announcement post. “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress.”