RidePods is the first iPhone game you control with AirPods

Developer Ali Tanis has released the first game for iPhones and iPads that’s played using Apple’s AirPods as a wearable motion controller. The gameplay in RidePods – Race with Head is relatively basic – you’re just steering a motorcycle through oncoming traffic at high speed – but instead of swiping the screen or tilting your phone, you control the bike’s movements by tilting your head from side to side while wearing AirPods.

The game only works with Apple’s wireless headphones that support Spatial Audio, including the AirPods Pro, the AirPods Max, and the third- and fourth-generation AirPods. Spatial Audio relies on the accelerometer and gyroscope in those AirPods models to track the movements of your head and adjust the position of the audio accordingly. Tanis announced the game’s release on Y Combinator’s Hacker News forum, and while they said they had to reverse engineer the Spatial Audio feature to make their game work, Apple does give developers access to headphone motion data so they can build features like fitness tracking into their apps.
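Apple exposes that head-tracking stream through the public CMHeadphoneMotionManager API in CoreMotion. As a rough sketch of how a game could read the wearer’s head tilt – the roll-to-steering mapping here is my assumption, not anything from Tanis’s actual code – it might look like this:

```swift
import CoreMotion

// Sketch of reading AirPods head motion with Apple's public
// CMHeadphoneMotionManager API (iOS 14+). The roll-to-steering mapping
// below is a guess at how a game like RidePods might use the data; the
// app also needs an NSMotionUsageDescription entry in its Info.plist.
final class HeadSteering {
    private let manager = CMHeadphoneMotionManager()

    func start() {
        guard manager.isDeviceMotionAvailable else {
            print("Connected headphones don't expose motion data")
            return
        }
        manager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let motion = motion else { return }
            // attitude.roll is the side-to-side head tilt in radians;
            // normalize about ±45° of tilt to a -1...1 steering input.
            let steering = max(-1.0, min(1.0, motion.attitude.roll / (.pi / 4)))
            // Feed `steering` into the game's update loop here.
            _ = steering
        }
    }

    func stop() { manager.stopDeviceMotionUpdates() }
}
```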

RidePods isn’t the most polished iOS or iPadOS game I’ve ever played. There are random graphical glitches, such as the road occasionally disappearing, and from the limited time I’ve spent dodging traffic, it appears you’re racing on a perfect straightaway that never curves. It feels more like a tech demo than a fully realized game, but the controls are surprisingly nuanced. I tested it with the second-generation AirPods Pro and the AirPods Max, and using my head to steer felt more natural and responsive than I expected.

The bike responds well to subtle movements, even when you’re wearing just a single AirPod. If you turn off Automatic Head or Ear Detection in your AirPods’ settings, you can even use your headphones or a single earbud as a handheld controller, though that requires a lot more finesse and definitely ups the challenge.

The app includes a setting for controlling the motorcycle’s braking and acceleration by tilting your head forward and back, but I couldn’t detect any discernible effect on the bike’s speed. You can also toggle between a first-person and third-person view of the riderless motorcycle, and for those wanting to share their high scores with the world, the app includes a record function that captures both the gameplay and a selfie video of you playing in a single clip.
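Assuming that setting reads the same motion stream, a hypothetical mapping from head pitch to a throttle value – with a dead zone so ordinary posture shifts don’t register – might look something like this; the dead zone, scale, and sign convention are all assumptions:

```swift
// Hypothetical mapping from head pitch (radians) to a throttle value in
// -1...1; nothing here comes from the actual app.
func throttle(fromPitch pitch: Double) -> Double {
    let deadZone = 0.09 // ~5 degrees of pitch treated as neutral posture
    guard abs(pitch) > deadZone else { return 0 }
    // Subtract the dead zone, then scale so ~30 degrees gives full input.
    let effective = pitch - (pitch > 0 ? deadZone : -deadZone)
    return max(-1.0, min(1.0, effective / 0.5))
}
```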

RidePods – Race with Head isn’t a game I’m going to return to frequently, but I can see the potential of headphone motion controls for mobile gaming. It’s completely free, but I would definitely pay a tidy sum for a hands-free version of Solitaire that lets me move stacks of cards around using nothing but subtle head gestures.


Canvas is down as ShinyHunters threatens to leak schools’ data

Canvas, the Instructure-owned learning management platform, is down after the company recently confirmed a massive data breach that exposed student names, email addresses, ID numbers, and messages. Students attempting to access the system on Thursday saw a message from the hacking group ShinyHunters, which claimed responsibility for the attack:

ShinyHunters has breached Instructure (again). Instead of contacting us to resolve it they ignored us and did some “security patches.” If any of the schools in the affected list are interested in preventing the release of their data, please consult with a cyber advisory firm and contact us privately at TOX to negotiate a settlement. You have till the end of the day by 12 May 2026 before everything is leaked.

The message included a link to a list of schools ShinyHunters claims to have breached through Canvas. The platform’s status page says Canvas, Canvas Beta, and Canvas Test are currently unavailable and that Instructure is investigating the outage.

Instructure said last week that it “deployed patches to enhance system security” following the breach. ShinyHunters, which has claimed responsibility for attacks on Ticketmaster, AT&T, Rockstar Games, ADT, and Vercel, said its data leak site lists 9,000 schools and includes data belonging to 275 million students, teachers, and other staff, according to Bleeping Computer.

OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm

On Thursday, OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party when self-harm comes up in a conversation. The feature lets an adult ChatGPT user designate another person, such as a friend or family member, as a trusted contact within their account. In cases where a conversation may turn to self-harm, OpenAI will now encourage the user to reach out to that contact. It also sends an automated alert to the contact, encouraging them to check in with the user.

OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking with its chatbot. In a number of cases, the families say ChatGPT encouraged their loved one to kill themselves, or even helped them plan it out.

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers flag possible suicidal ideation to the company’s system, which then relays the information to a human safety team. The company says every notification of this kind is reviewed by a human. “We strive to review these safety notifications in under one hour,” the company says.

If OpenAI’s internal team decides the situation represents a serious safety risk, ChatGPT sends the trusted contact an alert by email, text message, or in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question. To protect the user’s privacy, the company says, it does not include details about what was discussed.
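None of this is a public API, but the flow the company describes (a triggered notification, human review, then a minimal alert over one of three channels) can be sketched like this; every type and name below is hypothetical:

```swift
import Foundation

// Hypothetical model of the escalation flow described above; none of
// these types or functions correspond to a real OpenAI API.
enum AlertChannel { case email, textMessage, inAppNotification }

struct SafetyNotification {
    let accountID: String
    let flaggedAt: Date
}

struct TrustedContact {
    let name: String
    let preferredChannel: AlertChannel
}

// Only after a human reviewer confirms a serious safety risk does the
// trusted contact receive an alert, and the alert stays deliberately
// brief, omitting conversation details to protect the user's privacy.
func escalate(_ notification: SafetyNotification,
              contact: TrustedContact,
              humanReviewFindsSeriousRisk: Bool) {
    guard humanReviewFindsSeriousRisk else { return }
    let message = "Someone you know may need support. Please check in with them."
    send(message, to: contact, via: contact.preferredChannel)
}

func send(_ message: String, to contact: TrustedContact, via channel: AlertChannel) {
    // Stand-in for email/SMS/push delivery.
    print("(\(channel)) -> \(contact.name): \(message)")
}
```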

The Trusted Contact feature follows the safeguards the company introduced last September, which gave parents some oversight of their teens’ accounts, including safety notifications designed to alert a parent if OpenAI’s system believes their child is facing a “serious safety risk.” For some time now, ChatGPT has also shown automated prompts encouraging users to seek professional help when a conversation trends toward self-harm.

Crucially, Trusted Contact is optional, and even if the protection is activated on a particular account, any user can have multiple ChatGPT accounts. OpenAI’s parental controls are also optional, which presents a similar limitation.

“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments,” the company wrote in the announcement post. “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress.”
