UN official says Israel expanding Gaza operations would risk ‘catastrophic consequences’


A top UN official has warned there would be “catastrophic consequences” if Israel expands its military operations in Gaza, after reports Israeli Prime Minister Benjamin Netanyahu is pushing for total reoccupation.

Assistant Secretary General Miroslav Jenča told the UN Security Council such a move would be “deeply alarming”, and could endanger the lives of more Palestinians, as well as Israeli hostages held by Hamas.

Israeli media reported that Netanyahu plans to meet with his security cabinet this week.

“The die has been cast. We’re going for the full conquest of the Gaza Strip – and defeating Hamas,” a senior Israeli official was quoted as saying.

The security cabinet, which is due to meet on Thursday, would need to approve such an action.

It has been suggested that the plan could be a negotiating tactic to pressure Hamas after a recent breakdown of ceasefire talks, or an attempt to shore up support from Netanyahu’s far-right coalition partners.

Israel has been facing mounting international pressure over the war in Gaza, where experts say that famine is unfolding.

In his remarks, Jenča warned against any expansion of Israel’s military operations.

“This would risk catastrophic consequences for millions of Palestinians and could further endanger the lives of the remaining hostages in Gaza,” he said.

He added that under international law, Gaza “is and must remain an integral part of a future Palestinian state”.

Israel’s military said it already had operational control of 75% of Gaza, but the new plan would reportedly propose occupying the entire region – including areas where more than two million Palestinians now live.

The proposals have proved divisive in Israel, with reports that the army chief and other military leaders oppose the strategy.

The unnamed Israeli official responded by saying: “If that doesn’t work for the chief of staff, he should resign.”

The families of hostages have expressed their fear that such a decision could endanger their loved ones.

Israel says 49 hostages are still being held in Gaza, of whom 27 are believed to be dead.

Jenča reiterated to the UN Security Council the call for a ceasefire and the immediate and unconditional release of all hostages.

Citing the “squalid” and “inhumane” conditions faced by Palestinians, he urged Israel to immediately allow the unimpeded passage of sufficient aid.

“Israel continues to severely restrict humanitarian assistance entering Gaza, and the aid that is permitted to enter is grossly inadequate,” Jenča said.

He also condemned the ongoing violence at food distribution sites, saying more than 1,200 Palestinians have been killed since the end of May while trying to access food and supplies.

Last week, Gaza’s Hamas-run health ministry said 154 people including 89 children had died from a lack of food since October 2023.

UN agencies have warned there is man-made, mass starvation in Gaza, and reported at least 63 malnutrition-related deaths this month.

Israel has previously insisted there are no restrictions on aid deliveries and that there is “no starvation” in Gaza.

Israel launched its military offensive in Gaza in response to Hamas’s attack on southern Israel on 7 October 2023, in which about 1,200 people were killed and 251 others taken to Gaza as hostages.

More than 60,000 Palestinians have been killed as a result of Israel’s military campaign, according to the territory’s health ministry.


Iwate: Otsuchi forest fire closes in on homes; 1,000-strong firefighting effort under way | NHK News

The forest fire in the town of Otsuchi, Iwate Prefecture, entered its fourth day on the 25th and continues to spread. According to the town, the flames have drawn close to homes in several districts, and an intensive firefighting operation involving more than 1,000 personnel is under way.

OpenAI apologizes for not reporting Canada mass shooter
OpenAI CEO Sam Altman has apologized to the Canadian town of Tumbler Ridge following a February mass shooting that left eight dead. 

Altman said he was “deeply sorry” the company didn’t alert the police about the shooter’s troubling ChatGPT accounts.

British Columbia Premier David Eby called the apology “necessary, and yet grossly insufficient.”

How did OpenAI fail to act?

An 18-year-old transgender woman killed her mother and stepbrother at home on February 10, before going to a local secondary school and opening fire. She killed five children and a teacher, then took her own life.

After the attack, OpenAI said it had identified the suspect’s account through its abuse detection systems and banned it in June, eight months before the shooting.

The ChatGPT developer said it did not report the account to Canadian police at the time, as the activity did not meet its threshold for referral to law enforcement.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” 

How does ChatGPT report suspected violence?

OpenAI says it uses automated moderation systems that scan content in real time. Accounts can be restricted or banned for violating the rules. Violations include sexual exploitation, support of self-harm and suicide, and promotion of violence and harm.

In serious cases, systems are designed to flag high-risk behavior for human review. If a credible threat is identified, the company may share relevant account data with law enforcement.

Following the attack, Canadian officials summoned OpenAI’s safety team and warned of regulatory action if changes were not made. The company said it would tighten its safety measures and had created a direct contact channel with police.

In the letter, Altman said the company is committed to finding ways to prevent similar tragedies. “Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again,” he said.

The family of a girl who was seriously injured in the shooting has filed a negligence lawsuit against the US tech giant.


Edited by: Wesley Dockery 
