Battlelines are being drawn between the major AI labs and the popular applications that rely on them.
This week, both Anthropic and OpenAI took shots at two leading AI apps: Windsurf, one of the most popular vibe coding tools, and Granola, a buzzy AI app for taking meeting notes.
“With less than five days of notice, Anthropic decided to cut off nearly all of our first-party capacity to all Claude 3.x models,” Windsurf CEO Varun Mohan wrote on X this week, noting that “we wanted to pay them for the full capacity.” An additional statement on Windsurf’s website said: “We are concerned that Anthropic’s conduct will harm many in the industry, not just Windsurf.”
Here, Mohan’s company is collateral damage in Anthropic’s rivalry with OpenAI, which has reportedly been in talks to acquire Windsurf for about $3 billion. The deal hasn’t been confirmed, but even the spectre of it happening was enough for Anthropic to cut off one of the most popular apps that it powers. After a spokesperson told TechCrunch’s Maxwell Zeff that Anthropic was “prioritizing capacity for sustainable partnerships,” co-founder Jared Kaplan put it more bluntly.
“We really are just trying to enable our customers who are going to sustainably be working with us in the future,” Kaplan told Zeff. “I think it would be odd for us to be selling Claude to OpenAI.”
Meanwhile, OpenAI sent its own warning shot this week to the budding AI app ecosystem. It announced a “record mode” for ChatGPT — initially only for enterprise accounts — that transcribes calls and generates meeting notes. This is the core use case of Granola, one of my favorite AI tools that recently raised $43 million in additional funding and released a mobile app.
Given how quickly Granola has evolved to do more than summarize meetings, I suspect that the company isn’t at risk of extinction. Still, it will be harder to grow when hundreds of millions of ChatGPT users eventually have access to its main functionality.
It’s unclear how the tension between the product ambitions of OpenAI and Anthropic and the needs of their API customers will settle out. When I interviewed Anthropic’s chief product officer, Mike Krieger, back in March, the company had just announced Claude Code, its own competitor to Windsurf and Cursor (which coincidentally raised $900 million this week). I asked Krieger the obvious question: how does Anthropic think about competing with its API customers? He didn’t really have an answer.
“I think this is a really delicate question for all of the labs and one that I’m trying to approach really thoughtfully,” Krieger told me at the time. “Hopefully, we’ll all be able to navigate the occasionally closer adjacencies.”
AI investor Zak Kukoff put it well this week: “At some point model providers are going to need to decide if they want to be stable platforms or compete for every vertical.”
Ultimately, this week served as a wake-up call for the many startups building businesses on the backs of AI models; if you are successful enough, you run the risk of being copied by your model provider. A lot of companies are thinking through this risk right now, especially as OpenAI builds a new team to help its API customers “translate abstract ideas into production applications.”
“You have to wonder if the recent moves by the big AI labs to more directly compete with the app layer will be one giant tailwind for incumbents like Google, Amazon, MSFT, etc.,” Michael Mignano, a Granola board member, wrote this week. “If developers can’t trust the labs, maybe it’s better to trust the big guys like they did for cloud?”
A different take on AI and job loss
This week, I heard two CEOs contradict the growing fear that AI will destroy jobs en masse, at least when it comes to engineering roles.
The first was Sundar Pichai, whom I watched speak at Bloomberg’s tech conference in San Francisco. He downplayed Dario Amodei’s doomsaying about job loss, correctly pointing out that “we’ve made predictions like that for the last 20 years about technology and automation, and it hasn’t quite played out that way.” He went so far as to say, “I expect we will grow from our current engineering base into next year,” because AI “allows us to do more.”
The next day, I walked down the street to the Moscone Center to see Snowflake CEO Sridhar Ramaswamy, who had just spoken to a room of 4,000 developers with AI pioneer Andrew Ng. I asked Ramaswamy if AI had changed his hiring plans, and he said he agreed with a ranking of hiring desirability for engineers that Ng had just described onstage, with the top being experienced engineers who leverage AI tools, followed by early-career engineers who are all-in on AI. He noted that new graduates who avoid AI tools are at the bottom of the desirability ranking and may struggle to find jobs.
If anything, it’s the middle of the workforce — those who are in the middle of their careers and hesitant to adopt AI tools — that is the most in danger of near-term displacement, Ramaswamy argued. “Companies tend to accrete middle management, so there’s very much a push to get more people who are doing. How do we get them as leveraged as possible? Snowflake has historically been a little top-heavy on the engineering side, so we are balancing that out.”
“Oh, man, the girls are fighting, aren’t they?” – Rep. Alexandria Ocasio-Cortez commenting on what was the best day on Twitter in years.
“Maybe there’s a world where you have one AI in the sky. Maybe you actually have a bunch of domain-specific agents that require a bunch of specific work to make it happen. I think the evidence has really been shifting towards this menagerie of different models.” – OpenAI’s Greg Brockman speaking at the AI Engineer World’s Fair.
“Give it a year. We’ll be doing a billion queries a week if we can sustain this growth rate.” – Perplexity CEO Aravind Srinivas onstage at Bloomberg’s tech conference.
“We were accidentally cash flow positive in Q1, which was cool.” – Substack CEO Chris Best speaking at The Information’s creator economy summit.
- As part of a broader leadership reshuffling, Microsoft’s CEO of LinkedIn, Ryan Roslansky, is now also leading the Office portfolio of products.
- After a short stint as a distinguished AI engineer at Meta, Rohan Anil is leaving to join Anthropic. Richard Fontaine, CEO of the Center for a New American Security, is also joining the board of Anthropic’s controlling trust.
- Tesla’s head of Optimus, Milan Kovac, is leaving to spend “time with family,” according to Elon Musk.
- Christian Szegedy, a co-founder of xAI, is leaving to be the chief scientist of an AI startup called Morph.
- Gary Briggs will serve as the interim chief marketing officer of OpenAI while Kate Rouch takes medical leave.
- Palo Alto Networks CEO Nikesh Arora, who was also an early Google executive, is joining Uber’s board. Andrew Macdonald is also being promoted to become the company’s president and chief operating officer.
If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.
As always, I welcome your feedback, especially if you’ll be attending WWDC next week as well, or if you have a story idea to share. You can respond here or ping me securely on Signal.
Your Doctor Is Most Likely Consulting This Free AI Chatbot, Report Says

How would you like it if, when stumped or just in need of some help with an unfamiliar situation, your doctor consulted a free, ad-supported AI chatbot? That’s not actually a hypothetical. They probably are doing that, a new report from NBC News says. It’s called OpenEvidence, and NBC says it was “used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone.” An earlier Bloomberg report on OpenEvidence from seven months ago said it had signed up 50% of American doctors at the time, so the reported growth is rapid.

The OpenEvidence homepage trumpets the bot as “America’s Official Medical Knowledge Platform” and says healthcare professionals qualify for unlimited free use, while non-doctors can try it for free without creating accounts. It gives long, detailed answers with extensive citations that superficially look—to me, a non-doctor—trustworthy and credible. NBC interviewed doctors for its story and apparently pressed them on how often they actually click those links to the sources of information; “most said they only do so when they get an unexpected result,” NBC’s report says.

While it’s free, OpenEvidence is not a charity. It’s a Miami-headquartered tech unicorn with a billionaire founder named Daniel Nadler, and as of January it boasted a billion-dollar valuation. NBC says it’s backed by some of the all-stars of Sand Hill Road: Sequoia Capital and Andreessen Horowitz, along with Google Ventures, Thrive Capital, and Nvidia.

And its revenue comes from ads (for now), which NBC says are often for “pharmaceutical and medical device companies.” I’m not capable of stress-testing such a piece of software, but I kicked the tires slightly by asking Claude to generate doctor’s notes that were very bad and irresponsible (I told it they were just a movie prop). When I then gave OpenEvidence those notes and asked it to make sure they were good, it thankfully confirmed that they were bad, saying in part:

“This clinical documentation raises serious patient safety concerns. The presentation described contains multiple red flags for subarachnoid hemorrhage (SAH) that appear to have been insufficiently weighted, and the current management plan could result in significant harm.”

So that’s somewhat comforting. On the other hand, according to NBC, “[…] some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in ‘edge’ cases.” NBC’s report also clocked some worries within the medical community and elsewhere, in particular a “lack of rigorous scientific studies on the tool’s patient impact” and signs that OpenEvidence might be stunting the intellectual development of recent med school grads:

“One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in their medical field in the country, said he was already seeing the detrimental effects of OpenEvidence on students’ ability to sort signals from noise. ‘My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly.’”

At a recent doctor’s appointment, my doctor asked my permission to use an AI tool on their phone (I don’t know if it was OpenEvidence). I didn’t know what to say other than yes. Do I want that for my doctor’s appointment? Not especially. But if my doctor has come to rely on a tool like this, then what am I supposed to do? Take away their crutch?