World, the biometric ID verification project co-founded by Sam Altman, released the newest version of its app today, debuting several new features, including an encrypted chat integration and an expanded, Venmo-like capability for sending and requesting crypto.
World was created by the startup Tools for Humanity in 2019, and originally launched its app in 2023. The company says that, in a world roiled by AI-generated digital fakery, it hopes to create digital “proof of human” tools that can help separate the humans from the bots.
During a small gathering at World’s headquarters in San Francisco on Thursday, Altman and World’s co-founder and CEO, Alex Blania, briefly introduced the new version of the app (which developers have termed a “super app”) before the product team took over to explain the new features. During his remarks, Altman said that the concept for World grew out of conversations he and Blania had had about the need to create a new kind of economic model. That model, based around web3 principles, is what World has been trying to accomplish through its verification network. “It’s really hard to both identify unique people and do that in a privacy-preserving way,” said Altman.
World Chat, the app’s new messenger, seems designed to do just that. It uses end-to-end encryption to keep users’ conversations safe (the company describes the encryption as equivalent to Signal’s, the privacy-focused messenger), and uses color-coded speech bubbles to show whether the person a user is talking to has been verified by World’s system. The idea is to incentivize verification, giving people the power to know whether the person they’re talking to is who they say they are. Chat originally launched in beta in March.
The other big feature reveal on Thursday was an expanded digital payment system that allows app users to send and receive cryptocurrency. World App has functioned as a digital wallet for some time, but the newest version includes broader capabilities. Using virtual bank accounts, users can receive paychecks directly into World App and make deposits from their bank accounts, both of which can then be converted into crypto. You don’t have to be verified by World’s authentication system to use these features.
Tiago Sada, World’s chief product officer, told TechCrunch that part of the reason chat was added was to create a more interactive experience for users. “What we kept hearing from people is that they wanted a more social World app,” Sada said. World Chat is designed to fill that need, creating what Sada says is a secure way to communicate. “It took a lot of work to make this feature-rich messenger that is similar to a WhatsApp or a Telegram, but with encryption and security of something that is a lot closer to Signal,” Sada said.
World (which was originally called Worldcoin) deploys a unique authentication process: interested humans get their eyes scanned at one of the company’s offices, where the Orb—a large verification device—converts the person’s iris into a unique and encrypted digital code. That code, the verified World ID, can then be used by the person to interact with World’s ecosystem of services, which are available through its app.
The addition of more social-friendly features is clearly meant to drive broader adoption of the app, which makes sense since scaling verification is the company’s main challenge. Altman has said that he would like the project to scan a billion people’s eyes, but Tools for Humanity claims to have scanned fewer than 20 million people so far.
Since standing in long lines at a corporate office to have your eyeballs scanned by a giant metallic ball may seem slightly less than enticing to some users, the company has already sought to make its verification process less cumbersome. In April, Tools for Humanity announced its Orb Minis—hand-held, phone-like devices—that allow users to scan their own eyes from the comfort of their homes. Blania previously told TechCrunch that, eventually, the company would like to turn the Orb Minis into a mobile point-of-sale device or sell its ID sensor tech to device manufacturers. If the company takes such steps, it would drop the barrier to verification significantly, potentially inspiring much more widespread adoption.

Your Doctor Is Most Likely Consulting This Free AI Chatbot, Report Says
How would you like it if, when stumped or just in need of some help with an unfamiliar situation, your doctor consulted a free, ad-supported AI chatbot? That’s not actually a hypothetical. They probably are doing that, a new report from NBC News says. It’s called OpenEvidence, and NBC says it was “used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone.” An earlier Bloomberg report on OpenEvidence from seven months ago said it had signed up 50% of American doctors at the time—so reported growth is rapid.
The OpenEvidence homepage trumpets the bot as “America’s Official Medical Knowledge Platform,” and says healthcare professionals qualify for unlimited free use, while non-doctors can try it for free without creating accounts. It gives long, detailed answers with extensive citations that superficially look—to me, a non-doctor—trustworthy and credible. NBC interviewed doctors for its story and apparently pressed them on how often they actually click those links to the sources of information; “most said they only do so when they get an unexpected result,” NBC’s report says.
While it’s free, OpenEvidence is not a charity. It’s a Miami-headquartered tech unicorn with a billionaire founder named David Nadler, and as of January it boasted a billion-dollar valuation. NBC says it’s backed by some of the all-stars of Sand Hill Road: Sequoia Capital and Andreessen Horowitz, along with Google Ventures, Thrive Capital, and Nvidia.
And its revenue comes from ads (for now), which NBC says are often for “pharmaceutical and medical device companies.” I’m not capable of stress-testing such a piece of software, but I kicked the tires slightly by asking Claude to generate doctor’s notes that are very bad and irresponsible (I said it was just a movie prop). When I told OpenEvidence those were my notes and asked it to make sure they were good, thankfully, it confirmed that they were bad, saying in part:
“This clinical documentation raises serious patient safety concerns. The presentation described contains multiple red flags for subarachnoid hemorrhage (SAH) that appear to have been insufficiently weighted, and the current management plan could result in significant harm.”
So that’s somewhat comforting. On the other hand, according to NBC, “[…]some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in ‘edge’ cases.”
NBC’s report also clocked some worries within the medical community and elsewhere, in particular a “lack of rigorous scientific studies on the tool’s patient impact,” and signs that OpenEvidence might be stunting the intellectual development of recent med school grads: “One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in their medical field in the country, said he was already seeing the detrimental effects of OpenEvidence on students’ ability to sort signals from noise. ‘My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly.’”
At a recent doctor’s appointment, my doctor asked my permission to use an AI tool on their phone (I don’t know if it was OpenEvidence). I didn’t know what to say other than yes. Do I want that for my doctor’s appointment? Not especially. But if my doctor has come to rely on a tool like this, then what am I supposed to do? Take away their crutch?