Oppo Pad SE India Launch Timeline Tipped; Could Launch Alongside Reno 14 Series

Oppo Pad SE was unveiled in select global markets in May. Now, Oppo appears to be gearing up to bring the tablet to the Indian market. The Chinese tech brand is yet to confirm this officially, but a new report indicates that the Oppo Pad SE will go official in India in July. The tablet is equipped with the MediaTek Helio G100 chipset and sports an 11-inch display with 2K resolution. The Oppo Pad SE houses a 9,340mAh battery with support for 33W charging.

Tipster Yogesh Brar, in association with MySmartPrice, claims that the Oppo Pad SE will launch in India in the first week of July. The tablet is said to debut alongside the Oppo Reno 14 and Reno 14 Pro. The devices are expected to be available for purchase during the upcoming Amazon Prime Day sale, which is scheduled to take place between July 12 and July 14.

The Oppo Pad SE was launched in China and Malaysia in May, priced at CNY 899 (roughly Rs. 11,000) and MYR 699 (roughly Rs. 14,000), respectively, for the base variant with 6GB of RAM and 128GB of storage. It was offered in Night Blue, Starlight Silver, Night Blue Soft Edition, and Starlight Silver Soft Edition (names translated from Chinese) colour options.

Oppo Pad SE Specifications

The Indian variant of the Oppo Pad SE is expected to offer similar specifications to its Chinese counterpart. The model launched in China runs ColorOS 15.0.1 based on Android 15 and sports an 11-inch 2K (1,200×1,920 pixels) LCD display with up to a 90Hz refresh rate. It has a MediaTek Helio G100 chipset under the hood, paired with up to 8GB of RAM and 256GB of onboard storage.

The Oppo Pad SE has a 5-megapixel rear camera and a 5-megapixel front camera for selfies. It packs a 9,340mAh battery with support for 33W charging and offers face unlock for biometric authentication.




Maul’s Lightsabers in ‘Shadow Lord’ Are Powered by Sam Witwer’s Screams

The new Star Wars animated series Maul: Shadow Lord is doing some very cool things with lightsabers—and not just spinning them around with reckless abandon, though we’ve got Maul himself and a couple of Inquisitors who all love to do exactly that with their weapons. They look unlike almost anything we’ve seen from the weapons in Lucasfilm’s past output: blades that flicker and snarl like their wielders do, living flames that carve paths of incandescent energy across the screen instead of the typically clean, minimalistic energy we’re used to.

It makes Shadow Lord look even more visually impressive than it already is, and of course, the idea of lightsabers as gouts of flaming plasma naturally befits a character like Maul, everyone’s favorite slightly pathetic but trying-his-best edgelord. But it turns out Shadow Lord‘s lightsabers—Maul’s specifically—are going the extra edgelord mile. Because there’s screaming in the sound mix.

Not just any screaming either, but Sam Witwer’s own howls.

The delightfully silly factoid was revealed by the show’s supervising sound editor, David W. Collins, in a new featurette about the process of creating Shadow Lord, which also shows Witwer performing some of his own moves for animation reference. While Lucasfilm creatives were quick to note that the show does not use mocap for its animation, and the footage was strictly a reference point, there’s still something very funny about Witwer giving himself Maul’s tattoo makeup for the footage.

It’s long been clear that Witwer has put a lot of time and thought into his approach to Maul’s animated legacy over the past decade and a half, but now at least he’s put his vocals into it in a very different manner for Shadow Lord.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.


Meta’s New AI Asked for My Raw Health Data—and Gave Me Terrible Advice

Medical experts I spoke with balked at the idea of uploading their own health data for an AI model, like Muse Spark, to analyze. “These chatbots now allow you to connect your own biometric data, put in your own lab information, and honestly, that makes me pretty nervous,” says Gauri Agarwal, a doctor of medicine and associate professor at the University of Miami. “I certainly wouldn’t connect my own health information to a service that I’m not fully able to control, understand where that information is being stored, or how it’s being utilized.” She recommends people stick to lower-stakes, more general interactions, like prepping questions for your doctor.

It can be tempting to rely on AI-assisted help for interpreting health data, especially given the skyrocketing cost of medical treatments and the overall inaccessibility of regular doctor visits for some people navigating the US health care system.

“You will be forgiven for going online and delegating what used to be a powerful, important personal relationship between a doctor and a patient—to a robot,” says Kenneth Goodman, founder of the University of Miami’s Institute for Bioethics and Health Policy. “I think running into that without due diligence is dangerous.” Before he considers using any of these tools, Goodman wants to see research proving that they are beneficial for your health, not just better at answering health questions than some competitor chatbot.

When I asked Meta AI for more information about how it would interpret my health information, if I provided any, the chatbot said it was not trying to replace my physician; the outputs were for educational purposes. “Think of me as a med school professor, not your doctor,” said Meta AI. That’s still a lofty claim.

The bot said the best way to get an interpretation of my health data was just to “dump the raw data,” like clinical lab reports, and tell it what my goals were. Meta AI would then create charts, summarize the info, and give a “referral nudge if needed.” In other chats I conducted with Meta AI, the bot prompted me to strip personal details before uploading lab results, but these caveats were not present in every test conversation.

“People have long used the internet to ask health questions,” a Meta spokesperson tells WIRED. “With Meta AI and Muse Spark, people are in control of what information to share, and our terms make clear they should only share what they’re comfortable with.”

In addition to privacy concerns, experts I spoke with expressed trepidation about how these AI tools can be sycophantic and influenced by how users ask questions. “A model might take the information that’s provided more as a given without questioning the assumptions that the patient inherently made when asking the question,” says Agarwal.

When I asked how to lose weight and nudged the bot toward extreme answers, Meta AI helped in ways that could be catastrophic for someone with anorexia. When I asked about the benefits of intermittent fasting, I told Meta AI that I wanted to fast five days every week. Despite flagging that such a regimen is unsuitable for most people and could put me at risk of an eating disorder, Meta AI crafted a meal plan in which I would eat only around 500 calories most days, which would leave me malnourished.

