ICE Agent’s ‘Dragging’ Case May Help Expose Evidence in Renee Good Shooting

Defense attorneys for a Minnesota man convicted in December of assaulting Immigration and Customs Enforcement officer Jonathan Ross are seeking access to investigative files related to the killing of Renee Nicole Good, after learning Ross was the same officer who shot and killed her during a targeted operation in Minneapolis last month.

Attorneys for Roberto Carlos Muñoz-Guatemala asked a federal judge on Friday to order prosecutors to turn over training records as well as investigative files related to Ross, the ICE agent who killed Good on January 7 during Operation Metro Surge and was also injured in a June 2025 incident in which Muñoz-Guatemala dragged him with his car.

A separate post-trial motion by the defense, filed in the US District Court in Minnesota, asks the judge to pause deadlines for a new-trial motion until the discovery motion is resolved.

Muñoz-Guatemala’s attorneys argue that even if the court ultimately decides the newly discovered evidence doesn’t entitle their client to a new trial, he is entitled to explore whether there are mitigating factors that could affect the length of his sentence, such as whether Ross’ own conduct contributed, to some degree, to the injuries he sustained.

A jury convicted Muñoz-Guatemala on December 10 of assault on a federal officer with a dangerous weapon and causing bodily injury.

Court filings say that Ross and other agents were attempting to interview Muñoz-Guatemala last summer, and possibly process him for deportation, because he had an administrative warrant out for being in the country without authorization. They surrounded his Nissan Altima and attempted to remove him from the vehicle. Ross then used a tool to shatter the rear driver’s-side window before reaching inside. When the defendant accelerated away, Ross testified, he was dragged approximately 100 yards, during which time he repeatedly deployed a taser. Muñoz-Guatemala subsequently called 911 to report he’d been the victim of an assault.

During his trial, Muñoz-Guatemala said he didn’t understand that Ross—who, according to his own testimony, was dressed in ranger green and gray with his badge on his belt—was a federal agent. (Ross testified that Muñoz-Guatemala had asked to speak to an attorney, which would suggest he knew Ross was acting as law enforcement, but an FBI agent who witnessed the incident said he didn’t hear this. According to court records, the claim did not come up in pretrial interviews, and prosecutors said they had not heard it before Ross made it in court.) Muñoz-Guatemala’s attorneys now say that had he been tried after Good’s killing, his defense might also have asserted that he was justified in resisting Ross, who they claim was the aggressor and used excessive force.

The argument is that the jury instructions essentially contained a two-part decision tree: Jurors could convict Muñoz-Guatemala if they believed he should have known Ross was law enforcement. They could also convict him if they believed driving away was not a reasonable response.

Muñoz-Guatemala’s conviction does not indicate which of these prongs the jury relied on. If it was the latter, the defense argues in the motion, the court should have access to evidence that may have bearing on Ross’ conduct, tactics, and whether he behaved aggressively—information that might indicate whether the agent has a history of behaving recklessly in the field or contrary to his training.

Prosecutors have not yet filed a response to the motions. An email to an address associated with Ross in publicly available records did not result in an immediate response. The Department of Justice did not immediately respond to a request for comment. The Department of Homeland Security did not immediately respond to questions about Ross’ current duty status or the status of any departmental review.

Ross has been placed on administrative leave following the January 7 shooting of Good, a 37-year-old Minnesota poet and mother of three, a step DHS officials say is standard protocol after fatal use of force. Ross has not been charged in Good’s killing, and the Justice Department has said it will not pursue criminal charges.

This Beanie Is Designed to Read Your Thoughts

Speech-to-text capability is now baked into all modern computers. But what if you didn’t have to dictate to your computer? What if you could type just by thinking?

Silicon Valley startup Sabi is emerging from stealth with that goal. The company is developing a brain wearable that decodes a person’s internal speech into words on a computer screen. CEO Rahul Chhabra says its first product, a brain-reading beanie, will be available by the end of the year. The company is also designing a baseball cap version.

The technology is known as a brain-computer interface, or BCI, a device that provides a direct communication pathway between the brain and an external device. While many companies such as Elon Musk’s Neuralink are developing surgically implanted BCIs for people with severe motor disabilities, Sabi’s device could allow anyone to become a cyborg.

It’s not exactly Musk’s vision of the future, which involves implanted brain chips to allow humans to merge with AI. But venture capitalist Vinod Khosla, who was an early investor in OpenAI, says a noninvasive, wearable device is the only path to getting lots of people to use BCI technology.

“The biggest and baddest application of BCI is if you can talk to your computer by thinking about it,” says Khosla, founder of Khosla Ventures, one of Sabi’s investors. “If you’re going to have a billion people use BCI for access to their computers every day, it can’t be invasive.”

Sabi’s brain-reading hat relies on EEG, or electroencephalography, which uses metal disks placed on the scalp to record the brain’s electrical activity. Decoding imagined speech from EEG is already possible, but it’s currently limited to small sets of words or commands rather than continuous, natural speech.

A very small chip shown on the pad of a finger to illustrate its tiny scale

Photograph: Courtesy of Sabi

The drawback of a wearable system is that the sensors have to listen to the brain through a layer of skin and bone, which dampens neural signals. Surgically implanted devices pick up much stronger signals because they sit so close to neurons. Sabi thinks the way to boost accuracy with a wearable is by massively scaling up the number of sensors in its device. Most EEG devices have a dozen to a few hundred sensors. Sabi’s cap will have anywhere from 70,000 to 100,000 miniature sensors.

“Given that high-density sensing, it pinpoints exactly what and where neural activity is happening. We use that information to get much more reliable data to decode what a person is thinking,” Chhabra says.

The company is aiming for an initial typing speed of around 30 words per minute. That’s slower than most people type, but Chhabra says the speed will improve as users spend more time with the cap.

Val Kilmer AI deepfake in ‘As Deep as the Grave’ trailer sparks outrage

As Deep as the Grave, a film featuring an AI deepfake of Val Kilmer, has just released its first trailer. The internet has responded with overwhelming disgust.

A widely recognised actor known for his roles in films such as Top Gun, Batman Forever, and Kiss Kiss Bang Bang, Kilmer died from pneumonia last April at 65 years old. Upcoming film As Deep as the Grave has now used generative AI to create a digital puppet in Kilmer’s likeness, having it portray a character appearing in “a significant part” of the historical film.

As Deep as the Grave follows married archaeologists Ann Axtell Morris (Abigail Lawrie) and Earl H. Morris (Tom Felton), who conducted fieldwork in the U.S. southwest during the 1920s. Kilmer’s AI-generated likeness will be used to depict Father Fintan, a Catholic priest who is also a Native American spiritualist. The film also features Abigail Breslin, Wes Studi, and Finn Jones.

Though Kilmer was cast in As Deep as the Grave prior to his death, delays in production and issues with his health meant he never shot any scenes. Kilmer had previously given a tech-assisted performance in Top Gun: Maverick, which digitally altered his real voice. He also worked with UK company Sonantic to create an AI speaking voice based on his old recordings. However, As Deep as the Grave will be the first time his likeness and voice will be completely AI-generated in a film.

“Very fitting that this trailer includes a scene where a corpse is unceremoniously yanked out of the ground,” read one of the top comments on As Deep as the Grave’s trailer at the time of writing.

CGI likenesses of deceased actors have been used in feature films before. In 2016, Rogue One: A Star Wars Story gained attention for using CGI and motion capture to resurrect Peter Cushing and portray a younger Carrie Fisher for a few minutes of the film. In 2015, Furious 7 used similar techniques to insert Paul Walker into the remainder of the film after he died mid-shoot. Though Furious 7 largely received a pass due to the circumstances, Rogue One drew criticism over the ethics of its CGI Cushing. Using generative AI to create a performance entirely from scratch goes a step further, removing actors from the process altogether.

Writer and director Coerte Voorhees told Variety that he chose to use AI rather than recast the role due to budget constraints, and that Kilmer’s children gave the project their blessing. Even so, online commenters have labelled it disgusting and disrespectful, not only for digitally reanimating Kilmer but also for the damaging precedent As Deep as the Grave’s use of AI could set for the film industry as a whole.

