vivo X300 Ultra India Launch Expected in May: Specs, Price, Features
After launching in China, the Vivo X300 Ultra is now expected to go global on April 24. Although Vivo hasn't officially confirmed the timeline, signs point toward an imminent rollout, with Europe expected to get the phone first. India may not have to wait much longer, as its launch is expected soon after, and the phone aims to stand out mainly for its camera capabilities.



Key Specifications of Vivo X300 Ultra







From a specifications standpoint, the Vivo X300 Ultra will take the X300 Pro's features to the next level. It will come with a 6.82-inch 2K LTPO OLED display with a 144Hz refresh rate. It may also pack a 7,000mAh battery and use the Snapdragon 8 Elite Gen 5 chipset. Solid as these features are, they are not the primary reason this phone stands out.



One of the standout features of the Vivo X300 Ultra is its camera system. The device is said to be equipped with two 200MP cameras, with one serving as the main camera and the other as the periscope telephoto lens. In addition, the device will have a 50MP ultrawide camera and a 50MP front-facing selfie camera. A separate teleconverter module will be available for this device, enabling users to capture high-zoom images.



India Launch Timeline and Availability






The Vivo X300 Ultra is expected to launch in India around May, with sources suggesting a May 7 release date. However, Vivo has yet to confirm an official date. The phone has already been showcased in India, with a recent picture showing it in the hands of Indian cricketer Shreyas Iyer.



Expected Price and Market Positioning



The upcoming Vivo X300 Ultra will likely carry a high price, particularly in international markets. In China, Vivo sells the 16GB + 1TB variant for about Rs 1.3 lakh, but in Europe the company could price it at around EUR 1,900 (roughly Rs 2.08 lakh). At that price, it would cost more than devices like the iPhone 17 Pro Max and the Samsung Galaxy S26 Ultra.
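The euro-to-rupee figure above can be sanity-checked with a quick sketch. The exchange rate of about Rs 109.5 per euro is an assumption implied by the article's own numbers, not an official rate:

```python
# Sanity-check the article's EUR-to-INR conversion.
# Assumed rate: ~Rs 109.5 per euro (implied by the article's figures).
EUR_TO_INR = 109.5

price_eur = 1900
price_inr = price_eur * EUR_TO_INR
price_lakh = price_inr / 100_000  # 1 lakh = 100,000 rupees

print(f"EUR {price_eur} is roughly Rs {price_lakh:.2f} lakh")  # ~Rs 2.08 lakh
```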








Deadspin | Capitals star Alex Ovechkin 'pretty sure' he will play again

[Image: Washington Capitals left wing Alex Ovechkin speaks with the media after the Capitals' game against the Pittsburgh Penguins at Capital One Arena, Apr 12, 2026. Credit: Geoff Burke-Imagn Images]

Alex Ovechkin told reporters Thursday morning that he's "pretty sure" he will return for a 22nd NHL season.

The NHL's all-time leading goal scorer addressed his playing future as his Washington Capitals teammates cleaned out their lockers. The team failed to qualify for the postseason.

Ovechkin, who will turn 41 before the start of next season, concluded the final season of a five-year, $47.5 million contract following Washington's 2-1 victory over the Columbus Blue Jackets on Tuesday.

"I'm pretty sure it's not my last game. I hope it's not my last game against Columbus. How I said, I have to make a decision to see where we're at," the Capitals captain said. "Team, family. Obviously my family is going to support me. Kids have already asked me, 'Dad, are you staying or no?' I told them we'll see.

"They're excited. They want me to come back because they love the city and the team."

With Wayne Gretzky in attendance, Ovechkin surpassed the Hall of Famer when he broke his NHL goal record with career goal No. 895 in a game against the New York Islanders on April 6, 2025. He has since raised his total to 929 after scoring a team-leading 32 goals this season.

Ovechkin has a club-best 64 points this season, pushing his career total to 1,687, which ranks 10th all-time in NHL history.

He has won the Maurice "Rocket" Richard Trophy for leading the NHL in goals a league-record nine times since being selected by Washington with the top overall pick in the 2004 NHL Draft.

Ovechkin guided the Capitals to a Stanley Cup title in 2018 and is a three-time Hart Trophy recipient as NHL MVP.

–Field Level Media

This Beanie Is Designed to Read Your Thoughts

Speech-to-text capability is now baked into all modern computers. But what if you didn't have to dictate to your computer? What if you could type just by thinking?

Silicon Valley startup Sabi is emerging from stealth with that goal. The company is developing a brain wearable that decodes a person’s internal speech into words on a computer screen. CEO Rahul Chhabra says its first product, a brain-reading beanie, will be available by the end of the year. The company is also designing a baseball cap version.

The technology is known as a brain-computer interface, or BCI, a device that provides a direct communication pathway between the brain and an external device. While many companies such as Elon Musk’s Neuralink are developing surgically implanted BCIs for people with severe motor disabilities, Sabi’s device could allow anyone to become a cyborg.

It’s not exactly Musk’s vision of the future, which involves implanted brain chips to allow humans to merge with AI. But venture capitalist Vinod Khosla, who was an early investor in OpenAI, says a noninvasive, wearable device is the only path to getting lots of people to use BCI technology.

“The biggest and baddest application of BCI is if you can talk to your computer by thinking about it,” says Khosla, founder of Khosla Ventures, one of Sabi’s investors. “If you’re going to have a billion people use BCI for access to their computers every day, it can’t be invasive.”

Sabi’s brain-reading hat relies on EEG, or electroencephalography, which uses metal disks placed on the scalp to record the brain’s electrical activity. Decoding imagined speech from EEG is already possible, but it’s currently limited to small sets of words or commands rather than continuous, natural speech.

[Image: A very small chip shown on the pad of a finger to illustrate its tiny scale. Photograph: Courtesy of Sabi]

The drawback of a wearable system is that the sensors have to listen to the brain through a layer of skin and bone, which dampens neural signals. Surgically implanted devices pick up much stronger signals because they sit so close to neurons. Sabi thinks the way to boost accuracy with a wearable is by massively scaling up the number of sensors in its device. Most EEG devices have a dozen to a few hundred sensors. Sabi’s cap will have anywhere from 70,000 to 100,000 miniature sensors.
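A standard signal-processing rule of thumb gives a rough sense of why sensor count could help. This is an illustration, not Sabi's actual decoding pipeline: averaging N sensors whose noise is independent improves the signal-to-noise ratio by roughly the square root of N, and real EEG noise is partly correlated across sensors, so treat the result as a best case:

```python
import math

# Rule-of-thumb illustration, not Sabi's actual pipeline: averaging N
# sensors with independent noise improves SNR by ~sqrt(N). Real EEG
# noise is partly correlated across the scalp, so this is a best case.
def snr_gain(n_sensors: int) -> float:
    return math.sqrt(n_sensors)

conventional = snr_gain(256)   # a typical high-density EEG cap
sabi = snr_gain(100_000)       # upper end of Sabi's claimed sensor count

print(f"~{sabi / conventional:.0f}x best-case relative SNR gain")  # ~20x
```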

“Given that high-density sensing, it pinpoints exactly what and where neural activity is happening. We use that information to get much more reliable data to decode what a person is thinking,” Chhabra says.

The company is aiming for an initial typing speed of around 30 words per minute. That's slower than most people type, but Chhabra says the speed will improve as users spend more time with the cap.
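For context, the gap can be expressed in characters per second. Both conventions below are assumptions, not figures from the article: five characters per "word", and roughly 40 wpm as a commonly cited average typing speed:

```python
# Convert words-per-minute into characters per second, using the common
# convention of 5 characters per "word". Both figures here are
# illustrative assumptions, not numbers quoted by Sabi.
CHARS_PER_WORD = 5

def wpm_to_cps(wpm: float) -> float:
    return wpm * CHARS_PER_WORD / 60

initial = wpm_to_cps(30)  # Sabi's stated initial target
typical = wpm_to_cps(40)  # a commonly cited average typing speed

print(f"{initial:.1f} vs {typical:.1f} chars/sec")  # 2.5 vs 3.3 chars/sec
```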



Val Kilmer AI deepfake in ‘As Deep as the Grave’ trailer sparks outrage

As Deep as the Grave, a film featuring an AI deepfake of the late Val Kilmer, has just released its first trailer. The internet has responded with overwhelming disgust.

A widely recognised actor known for his roles in films such as Top Gun, Batman Forever, and Kiss Kiss Bang Bang, Kilmer died from pneumonia last April at 65 years old. Upcoming film As Deep as the Grave has now used generative AI to create a digital puppet in Kilmer’s likeness, having it portray a character appearing in “a significant part” of the historical film.

As Deep as the Grave follows married archaeologists Ann Axtell Morris (Abigail Lawrie) and Earl H. Morris (Tom Felton), who conducted fieldwork in the U.S. southwest during the 1920s. Kilmer’s AI-generated likeness will be used to depict Father Fintan, a Catholic priest who is also a Native American spiritualist. The film also features Abigail Breslin, Wes Studi, and Finn Jones.

Though Kilmer was cast in As Deep as the Grave prior to his death, delays in production and issues with his health meant he never shot any scenes. Kilmer had previously given a tech-assisted performance in Top Gun: Maverick, which digitally altered his real voice. He also worked with UK company Sonantic to create an AI speaking voice based on his old recordings. However, As Deep as the Grave will be the first time his likeness and voice will be completely AI-generated in a film.

“Very fitting that this trailer includes a scene where a corpse is unceremoniously yanked out of the ground,” read one of the top comments on As Deep as the Grave’s trailer at the time of writing.

CGI likenesses of deceased actors have been used in feature films before. In 2016, Rogue One: A Star Wars Story drew attention for using CGI and motion capture to resurrect Peter Cushing and to portray a younger Carrie Fisher for a few minutes of the film. In 2015, Furious 7 used similar techniques to insert Paul Walker into the remainder of the film after he died mid-shoot. Though Furious 7 largely received a pass given the circumstances, Rogue One was criticised over the ethics of its CGI Cushing. Using generative AI to create a performance from nothing goes a step further, removing the actor from the process entirely.

Writer and director Coerte Voorhees told Variety that he chose to use AI rather than recast the role due to budget constraints, and that Kilmer's children gave the project their blessing. Even so, online commenters have labelled it disgusting and disrespectful, not only for digitally reanimating Kilmer but also for the damaging precedent As Deep as the Grave’s use of AI could set for the film industry as a whole.
