
The Canvas Hack Is a New Kind of Ransomware Debacle

Higher education has long been a target of ransomware gangs and data extortion attacks. But never before, perhaps, has a cyberattack against a single software platform so thoroughly disrupted the daily operations of thousands of schools across the United States.

The widely used digital learning platform Canvas was put into “maintenance mode” on Thursday after its maker, the education tech giant Instructure, suffered a data breach and faced an extortion attempt by attackers using the recognizable moniker “ShinyHunters.” Though the hackers have been advertising the breach and attempting to extract a ransom payment from Instructure since May 1, the situation took on additional immediacy for regular people across the US and beyond on Thursday because the Canvas downtime caused chaos at schools, including those in the midst of finals and end-of-year assignments.

Universities like Harvard, Columbia, Rutgers, and Georgetown sent alerts to students about the situation in recent days; other institutions, including school districts in at least a dozen states, also appear to have been affected. In a list published on their ransom-focused dark web site, the hackers behind the attack claim the breach affected more than 8,800 schools. The exact scale and reach of the breach are currently unclear, though. And the fact that Canvas was down throughout Thursday afternoon and evening further complicated the picture.

In a running incident update log that began on May 1, Steve Proud, Instructure’s chief information security officer, said that the company had “recently experienced a cybersecurity incident perpetrated by a criminal threat actor.” He added on May 2 that “the information involved” for “users at affected institutions” included names, email addresses, student ID numbers, and messages exchanged by users on the platform.

The situation was ultimately marked as “Resolved” on Wednesday, with Proud writing that “Canvas is fully operational, and we are not seeing any ongoing unauthorized activity.” At midday on Thursday, though, the Instructure status page registered an “issue” where “some users are having difficulties logging into Student ePortfolios.” Within a few hours, the company had added another status update: “Instructure has placed Canvas, Canvas Beta and Canvas Test in maintenance mode.” Late Thursday evening, the company said that Canvas was available again “for most users.”

TechCrunch reported on Thursday that the hackers launched a secondary wave of attacks, defacing some schools’ Canvas portals by injecting an HTML file to display their own message on the schools’ Canvas login pages. According to The Harvard Crimson, attackers modified the Harvard Canvas login page to show a message that included a list of schools that the hackers claim were impacted by the breach.

The message from attackers “urged schools included on the affected list to consult with a cyber advisory firm and contact the group privately to negotiate a settlement before the end of the day on May 12—or else risk their data being leaked,” The Crimson reported. “It is unclear what information tied to Harvard affiliates was included in the alleged breach.”

Instructure did not immediately respond to a request for comment about Thursday’s outages and how they fit into the bigger picture of the breach. But the situation is significant given that a massive trove of student information has potentially been exposed, and the incident’s visibility across the country makes it a key example of a longstanding yet endlessly escalating problem of data extortion and ransomware attacks.

The ShinyHunters name is associated with massive data dumps and has been linked to the infamous hacker collective known as the Com. But as the constellation of actors has shifted over the years, numerous attackers have taken up the most prominent Com-related monikers. A number of recent attacks have invoked other names, such as Lapsus$, with little or no connection to the original group that operated under the name.



Canvas is down as ShinyHunters threatens to leak schools’ data

The Instructure-owned learning management platform Canvas is down after the company recently confirmed a massive data breach that impacted student names, email addresses, ID numbers, and messages. Students attempting to access the system on Thursday saw a message from the hacking group ShinyHunters, which claimed responsibility for the attack:

ShinyHunters has breached Instructure (again). Instead of contacting us to resolve it they ignored us and did some “security patches.” If any of the schools in the affected list are interested in preventing the release of their data, please consult with a cyber advisory firm and contact us privately at TOX to negotiate a settlement. You have till the end of the day by 12 May 2026 before everything is leaked.

The message included a link to a list of schools ShinyHunters claims to have breached through Canvas. The platform’s status page says Canvas, Canvas Beta, and Canvas Test are currently unavailable and that Instructure is investigating the outage.

Instructure said last week that it “deployed patches to enhance system security” following the breach. ShinyHunters — which has claimed responsibility for attacks on Ticketmaster, AT&T, Rockstar Games, ADT, and Vercel — said the list on its data leak site includes 9,000 schools, with data belonging to 275 million students, teachers, and other staff, according to Bleeping Computer.

OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm

On Thursday OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party if self-harm is mentioned within a conversation. The feature allows an adult ChatGPT user to designate another person, such as a friend or family member, as a trusted contact within their account. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user.

OpenAI has faced a wave of lawsuits from the families of people who have committed suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves—or even helped them plan it out.

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers alert the company’s system to possible suicidal ideation, and the system then relays the information to a human safety team. The company claims that every time it receives this kind of notification, the incident is reviewed by a human. “We strive to review these safety notifications in under one hour,” the company says.

If OpenAI’s internal team decides that the situation represents a serious safety risk, ChatGPT proceeds to send the trusted contact an alert—either by email, text message, or an in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. It does not include detailed information about what was being discussed, as a means of protecting the user’s privacy, the company says.
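To make the reported flow easier to follow, here is a minimal, purely hypothetical sketch of a pipeline shaped like the one described above: an automated trigger, a human review step, and a deliberately sparse alert to the trusted contact. None of the names below (flag_conversation, TrustedContact, Channel, and so on) come from OpenAI; the company has not published its triggers, review tooling, or notification format, so everything here is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Hypothetical sketch only: every name, trigger, and channel below is invented
# for illustration and is not taken from OpenAI's actual implementation.

class Channel(Enum):
    EMAIL = "email"
    SMS = "sms"
    IN_APP = "in_app"

@dataclass
class TrustedContact:
    name: str
    channel: Channel
    address: str  # email address, phone number, or in-app user ID

@dataclass
class SafetyNotification:
    conversation_id: str
    reason: str

def flag_conversation(message: str, conversation_id: str) -> Optional[SafetyNotification]:
    """Automated stage: flag a message that may indicate self-harm.
    A real system would use trained classifiers, not a keyword check."""
    if "self-harm" in message.lower():  # placeholder trigger
        return SafetyNotification(conversation_id, "possible self-harm indicators")
    return None

def human_review(notification: SafetyNotification) -> bool:
    """Stand-in for the human safety team's judgment call; returns True
    if the incident is judged to be a serious safety risk."""
    print(f"Reviewing {notification.conversation_id}: {notification.reason}")
    return True  # placeholder decision

def alert_trusted_contact(contact: TrustedContact) -> str:
    """Compose a brief check-in prompt. It deliberately omits conversation
    details, mirroring the privacy behavior described in the article."""
    return (f"[{contact.channel.value} -> {contact.address}] Please consider "
            f"checking in with the person who listed you as a trusted contact.")

# Wiring the three stages together: automated flag -> human review -> alert.
contact = TrustedContact("Alex", Channel.SMS, "+1-555-0100")
note = flag_conversation("I've been thinking about self-harm", "conv-123")
if note and human_review(note):
    print(alert_trusted_contact(contact))
```

The only details the sketch carries over from the article are the shape of the flow and the choice to keep the outgoing alert free of any conversation content.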


The Trusted Contact feature follows the safeguards the company introduced last September, which gave parents some oversight of their teens’ accounts, including safety notifications designed to alert the parent if OpenAI’s system believes their child is facing a “serious safety risk.” For some time now, ChatGPT has also included automated prompts encouraging users to seek professional help, should a conversation trend toward the topic of self-harm.

Crucially, Trusted Contact is optional, and even if the protection is activated on a particular account, any user can have multiple ChatGPT accounts. OpenAI’s parental controls are also optional, presenting a similar limitation.

“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments,” the company wrote in the announcement post. “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress.”