US envoy suggests it would be ‘fine’ if Israel expands across Middle East

Mike Huckabee, the United States ambassador to Israel, has suggested that he would not object if Israel were to take most of the Middle East, stressing what he described as the Jewish people’s right to the land.

In an interview with conservative commentator Tucker Carlson that aired on Friday, Huckabee was pressed about the geographical borders of Israel, which he argues are rooted in the Bible.


Carlson told Huckabee that the Bible had promised the land to the descendants of Abraham, including the area between the Euphrates River in Iraq and the Nile River in Egypt.

Such a swath would encompass modern-day Lebanon, Syria, Jordan and parts of Saudi Arabia.

“It would be fine if they took it all,” said Huckabee, who was appointed by President Donald Trump last year.

Carlson, who appeared taken aback by the statement, asked Huckabee whether he would indeed approve of Israel expanding over the entire region.

“They don’t want to take it over. They’re not asking to take it over,” the ambassador replied.

The US envoy, an avowed Christian Zionist and staunch defender of Israel, later appeared to walk back his assertion, saying that it “was somewhat of a hyperbolic statement”.

Still, he left the door open for Israeli expansionism based on his religious interpretation.

“If they end up getting attacked by all these places, and they win that war, and they take that land, OK, that’s a whole other discussion,” Huckabee said.

The Department of State did not respond to Al Jazeera’s request for comment on whether Secretary of State Marco Rubio shares Huckabee’s views on Israel’s right to expand.

The principle of territorial integrity and the prohibition against the acquisition of land by force have been bedrocks of international law since World War II.

In 2024, the International Court of Justice (ICJ) ruled that Israel’s occupation of the Palestinian territories is illegal and must cease immediately.

But Israeli law does not clearly demarcate the country’s borders. Israel also occupies the Golan Heights in Syria, which it illegally annexed in 1981.

The US is the only country that recognises Israel’s claimed sovereignty over the Syrian territory.

After the 2024 war with Hezbollah, Israel also set up military outposts in five points inside Lebanon.

Some Israeli politicians, including Prime Minister Benjamin Netanyahu, have openly promoted the idea of a “Greater Israel” with expanded borders.

Israel’s Finance Minister Bezalel Smotrich stirred international outrage in 2023 when he spoke at an event featuring a map that included the Palestinian territories and portions of Lebanon, Syria and Jordan as part of Israel, set against the colours of the Israeli flag.

In his interview with Carlson, Huckabee tried to argue that Israel’s right to exist is rooted in international law, but he also attacked the legal institutions that oversee international law for their opposition to Israeli abuses.

“One of the reasons I’m so grateful President Trump and Secretary Rubio are pushing hard, trying to get rid of the ICC [International Criminal Court] and the ICJ is because they have become rogue organisations that are no longer really about an equal application of law,” he said.

Beyond his professed religious devotion to Israel, Huckabee has faced criticism for failing to speak up for the rights of US citizens who have been killed or imprisoned by Israeli forces during his ambassadorship.

Last year, Huckabee even sparked anger from some conservatives in the US when he met with convicted spy Jonathan Pollard, who sold US intelligence secrets to the Israeli government, details of which later made it to the Soviet Union at the height of the Cold War.

Pollard, a former civilian analyst in the US Navy, served 30 years in jail and moved to Israel in 2020 after his release. He never expressed regret for his crimes, and in 2021, he called on Jewish employees in US security agencies to spy for Israel.

Huckabee said he does not agree with Pollard’s views, but he denied hosting him, arguing that he simply held a meeting with him at the US embassy in Jerusalem.

Asked if anyone can walk into the embassy to meet the envoy, Huckabee acknowledged that such a meeting requires a pre-approved appointment.

“He was able to come to the US embassy to have a meeting at his request. I did, and frankly, I don’t regret it,” Huckabee said.

“I met with a lot of people over the course of the time I’ve been here and will meet with a lot more.”


Iwate: Otsuchi forest fire closes in on homes; 1,000 firefighters mobilised | NHK News

The forest fire in the town of Otsuchi, Iwate Prefecture, entered its fourth day on the 25th and continues to spread. According to the town, the flames have drawn close to homes in several districts, and an intensive firefighting operation involving more than 1,000 personnel is under way.

OpenAI CEO Sam Altman has apologized to the Canadian town of Tumbler Ridge following a February mass shooting that left eight dead. 

Altman said he was “deeply sorry” the company didn’t alert the police about the shooter’s troubling ChatGPT accounts.

British Columbia Premier David Eby called the apology “necessary, and yet grossly insufficient.”

How did OpenAI fail to act?

An 18-year-old transgender woman killed her mother and stepbrother at home on February 10, before going to a local secondary school and opening fire. She killed five children and a teacher, then took her own life.

After the attack, OpenAI said it had identified the suspect’s account through its abuse detection systems and banned it in June, eight months before the shooting.

The ChatGPT developer said it did not report the account to Canadian police at the time, as the activity did not meet its threshold for referral to law enforcement.

“I am deeply sorry that we did not alert law enforcement to the account that was banned in June,” Altman said. “While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.” 

How does ChatGPT report suspected violence?

OpenAI says it uses automated moderation systems that scan content in real time. Accounts can be restricted or banned for violating the rules. Violations include sexual exploitation, support of self-harm and suicide, and promotion of violence and harm.

In serious cases, systems are designed to flag high-risk behavior for human review. If a credible threat is identified, the company may share relevant account data with law enforcement.

Following the attack, Canadian officials summoned OpenAI’s safety team and warned of regulatory action if changes were not made. The company said it would tighten its safety measures and had created a direct contact channel with police.

In the letter, Altman said the company is committed to finding ways to prevent similar tragedies. “Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again,” he said.

The family of a girl who was seriously injured in the shooting has filed a negligence lawsuit against the US tech giant.

Is your AI private? OpenAI and the Canadian school shooting

Edited by: Wesley Dockery 
