Google is bringing more AI muscle to India’s fight against digital fraud, rolling out on-device scam detection for Pixel 9 devices and new screen-sharing alerts for financial apps.
Digital fraud continues to rise in India as more people come online for the first time and increasingly rely on smartphones for payments, shopping, and accessing government services. Fraud involving digital transactions accounted for more than half of all reported bank fraud in 2024 — 13,516 cases resulting in losses of ₹5.2 billion (about $58.61 million), according to the Reserve Bank of India (RBI). Online scams caused an estimated ₹70 billion (roughly $789 million) in losses in the first five months of 2025, the Ministry of Home Affairs said. Many incidents likely go unreported, either because victims are unsure how to file a complaint or wish to avoid additional scrutiny.
On Thursday, Google announced the expansion of its real-time scam-detection feature, which uses Gemini Nano to analyze calls on-device and flag potential fraud without recording audio or sending data to Google’s servers. The feature is off by default, applies only to calls from unknown numbers, and plays a beep during the conversation to notify participants. It debuted in the U.S. in March as a beta for English-speaking Pixel 9 users.
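The conditions above amount to a simple gate on when analysis may start. Here is a minimal sketch of that gating logic; the function and field names are hypothetical illustrations, not Google’s implementation:

```python
from dataclasses import dataclass


@dataclass
class CallContext:
    user_opted_in: bool        # the feature is off by default
    caller_in_contacts: bool   # known numbers are never analyzed


def should_run_scam_detection(ctx: CallContext) -> bool:
    """Return True only when on-device analysis is allowed to start.

    Mirrors the constraints described in the article: opt-in only, and
    only calls from unknown numbers. Processing stays on-device, so no
    network or upload condition appears here.
    """
    return ctx.user_opted_in and not ctx.caller_in_contacts
```

When detection does run, the audible beep disclosed to both participants would be triggered alongside the analysis, not by this check.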
Google confirmed to TechCrunch that its on-device scam detection will initially work only on Pixel 9 and later models in India and will be limited to English-speaking users, with warnings displayed in English only. That restricts its reach in a market where Android accounts for nearly 96% of smartphones, per Statcounter, but Pixel devices held less than 1% share in 2024. The language limitation is also notable in a country where most users primarily rely on non-English languages, an audience that Google and others like Amazon have acknowledged by adding support for Indian languages across their services in recent years.
The tech giant did say it was working to bring scam detection to non-Pixel Android phones as well, though it did not offer a timeline.
Google also announced a pilot in India with financial apps Navi, Paytm, and Google Pay aimed at limiting screen-sharing scams, in which fraudsters persuade victims to share their screens to obtain one-time passwords, PINs, and other credentials during a call. The feature was first announced at Google I/O in May and initially tested in the U.K.
Users with devices running Android 11 or later will be able to access the alerts, which include a one-tap option to end the call and stop screen sharing. Google confirmed to TechCrunch that it plans to add more app partners and that the feature will display alerts in Indian languages as well, but it did not provide further details.
For several months, Google has also been using its Play Protect service to restrict predatory loan apps in India by blocking the sideloading of third-party apps that request sensitive permissions often exploited for fraud. The company said the service blocked more than 115 million such installation attempts this year. Google Pay, meanwhile, surfaces more than a million warnings each week for transactions flagged as potentially fraudulent, according to the company.
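As a rough illustration of the kind of check described above, the sketch below blocks a sideloaded install that requests permissions commonly abused by predatory loan apps. The permission subset and the block-on-any-match policy are assumptions for illustration, not Google’s actual Play Protect logic:

```python
# Android permissions frequently abused in loan-app fraud (illustrative subset;
# these permission strings are real, but the selection here is an assumption).
SENSITIVE_PERMISSIONS = {
    "android.permission.READ_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.BIND_ACCESSIBILITY_SERVICE",
}


def block_sideloaded_install(from_play_store: bool, requested_permissions: set[str]) -> bool:
    """Return True if the installation attempt should be blocked.

    Only sideloaded (non-Play Store) installs are subject to the check;
    requesting any sensitive permission triggers a block.
    """
    if from_play_store:
        return False
    return bool(requested_permissions & SENSITIVE_PERMISSIONS)
```

Play Store installs bypass the check here because they go through Google’s own review pipeline instead.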
Google is also running its DigiKavach awareness campaign on digital fraud, which it said has reached more than 250 million people. The company has worked with the Reserve Bank of India to publish a public list of authorized digital lending apps and their associated non-banking financial companies to help limit malicious actors.
Earlier this year, Google launched a Safety Charter in India to expand its AI-driven fraud detection and security efforts, part of a broader plan to deploy more AI tools in the country to address rising fraud.
Yet Google still faces significant gaps in curbing digital fraud in India. The company, like Apple, has faced questions over allowing fake and misleading apps onto the Play Store despite review processes meant to block fraudulent submissions.
In recent years, police and security researchers have flagged investment and loan apps used in scams that remained available on the Play Store until intervention. These cases underscore the challenges Google faces in policing an ecosystem that dominates the country’s smartphone market.
