Russian strikes on Kyiv kill three as Ukraine envoys travel to US for talks


Two people were killed in the strikes on the capital, and a woman died in a combined missile and drone attack on the broader Kyiv region, officials said.

Russian drone and missile strikes in and around Ukraine’s capital, Kyiv, have killed at least three people and wounded dozens of others, officials said, as Ukrainian representatives travelled to the United States for talks on a renewed push to end the war.

“Russia shot dozens of cruise and ballistic missiles and over 500 drones at ordinary homes, the energy grid, and critical infrastructure,” Foreign Minister Andrii Sybiha wrote on X on Saturday.


“While everyone is discussing points of peace plans, Russia continues to pursue its ‘war plan’ of two points: to kill and destroy,” he added.

The Kyiv City Military Administration said two people were killed in the strikes on the capital. A woman died, and eight people were wounded in a combined missile and drone attack on the broader Kyiv region, according to the regional police.

Vehicles burn after being damaged during a Russian missile and drone attack on Kyiv, amid Russia’s attack on Ukraine, November 29, 2025 [Valentyn Ogirenko/Reuters]

Mayor Vitali Klitschko said 29 people were wounded in Kyiv, noting that falling debris from intercepted Russian drones hit residential buildings. He also said the western part of Kyiv had lost power.

Kyiv’s military administration head, Tymur Tkachenko, said in a social media post that a 42-year-old man was killed by a drone, while the man’s 10-year-old son was taken to hospital with “burns and other injuries”.

“The world should know that Russia is targeting entire families,” Tkachenko said, adding that the son was the only child recorded among the injured so far.

Following the attacks on Kyiv, EU Ambassador Katarina Mathernova cast doubt on Russia’s stated interest in a peace deal.

“While the world discusses a possible peace deal, Moscow answers with missiles, not diplomacy,” Mathernova said in a post on X.

Ukraine team heads to US

On the diplomatic front, Ukrainian President Volodymyr Zelenskyy said that his negotiators had left for Washington to seek a “dignified peace” and a rapid end to the war begun by Russia in 2022.

Zelenskyy is under growing pressure from Washington to agree to a US proposal to end the war that critics say heavily favours Moscow.

The Ukrainian team is being led by former defence chief Rustem Umerov, following the resignation on Friday of his chief of staff Andriy Yermak amid a corruption probe.

“The task is clear: to swiftly and substantively work out the steps needed to end the war,” Zelenskyy posted on X.

“Ukraine continues to work with the United States in the most constructive way possible, and we expect that the results of the meetings in Geneva will now be hammered out in the United States.”

At Kyiv’s insistence, US President Donald Trump’s initial 28-point plan to end the war was revised during talks in Geneva with European and US officials. However, many contentious issues remain unresolved.

Black Sea attacks

Separately on Saturday, an official from the SBU security service said that Ukraine had used marine drones in the Black Sea to hit two tankers that Russia uses to export oil while skirting Western sanctions.

The joint operation to hit the so-called “shadow fleet” vessels was run by the SBU and Ukraine’s navy, the official told the Reuters news agency on condition of anonymity.

Turkish authorities have said that blasts rocked two shadow fleet tankers near Turkiye’s Bosphorus Strait on Friday, causing fires on the vessels, and rescue operations were launched for those on board.

This video grab taken from images released by the Security Service of Ukraine (SBU) shows smoke rising from a cargo ship on fire in the Black Sea off the Turkish coast, amid the ongoing Russian-Ukrainian conflict [AFP]

The SBU official said both tankers – identified as the Kairos and Virat – were empty and on their way to the port of Novorossiysk, a major Russian oil terminal.

“Video [footage] shows that after being hit, both tankers sustained critical damage and were effectively taken out of service. This will deal a significant blow to Russian oil transportation,” the official said. They did not say when the strikes took place.

Ukraine has consistently called for tougher international measures against Russia’s “shadow fleet”, which it says is helping Moscow export vast quantities of oil and fund its war in Ukraine despite Western sanctions.


Iwate: Otsuchi wildfire closes in on homes as 1,000 personnel fight the blaze | NHK News. The forest fire in the town of Otsuchi, Iwate Prefecture, entered its fourth day on the 25th and continues to spread. According to the town, the flames have approached homes in several districts, and an intensive firefighting operation involving more than 1,000 personnel is under way.

OpenAI apologizes for not reporting Canada mass shooter

OpenAI CEO Sam Altman has apologized to the Canadian town of Tumbler Ridge following a February mass shooting that left eight dead.

Altman said he was “deeply sorry” the company didn’t alert the police about the shooter’s troubling ChatGPT accounts.

British Columbia Premier David Eby called the apology “necessary, and yet grossly insufficient”.

How did OpenAI fail to act?

An 18-year-old transgender woman killed her mother and stepbrother at home on February 10, before going to a local secondary school and opening fire. She killed five children and a teacher, then took her own life.

After the attack, OpenAI said it had identified the suspect’s account through its abuse detection systems and banned it in June, eight months before the shooting.

The ChatGPT developer said it did not report the account to Canadian police at the time, as the activity did not meet its threshold for referral to law enforcement.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” 

How does ChatGPT report suspected violence?

OpenAI says it uses automated moderation systems that scan content in real time. Accounts can be restricted or banned for violating the rules. Violations include sexual exploitation, support of self-harm and suicide, and promotion of violence and harm.

In serious cases, systems are designed to flag high-risk behavior for human review. If a credible threat is identified, the company may share relevant account data with law enforcement.

Following the attack, Canadian officials summoned OpenAI’s safety team and warned of regulatory action if changes were not made. The company said it would tighten its safety measures and had created a direct contact channel with police.

In the letter, Altman said the company is committed to finding ways to prevent similar tragedies. “Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again,” he said.

The family of a girl who was seriously injured in the shooting has filed a negligence lawsuit against the US tech giant.

Is your AI private? OpenAI and the Canadian school shooting

Edited by: Wesley Dockery 

