Venture capital firm Atomico’s annual State of European Tech report is out and it shows investment is trending upwards. But this year’s edition goes beyond the usual assessment of the tech ecosystem; it has become a piece of advocacy that reflects a broader shift: European startups and investors are increasingly turning to lobbying.
“It’s no longer enough to show how far we’ve come. It’s critical, too, that we use those insights to point the way forward,” said the report’s author, Tom Wehmeier, who is also a partner at Atomico and the firm’s head of intelligence. The report includes four policy recommendations with fairly self-explanatory names: Fix the friction, Fund the future, Empower talent, and Champion risk.
While Atomico is using answers from a wide range of respondents to advocate for these specific recommendations, it arguably has some authority to speak for more than itself. Founded in 2006 by Skype co-founder Niklas Zennström, its portfolio includes high-profile European companies such as Aiven, DeepL, Klarna, Pipedrive, Stripe, and Supercell.
Taking a page from Big Tech and from legacy industries, as well as from their U.S. peers, European tech companies of that scale are increasingly learning to lobby for themselves — at the company level, with public affairs hires, but also collectively, with open letters that European institutions have paid attention to.
This also explains why many of Atomico’s recommendations align with topics that are already very much in the air, both in the startup community and in the Brussels policy world — whether it is the 28th regime proposed by advocacy group EU-INC to create a pan-European company structure (currently, companies must navigate 27 different national regimes), calls for less regulation, or broader considerations on competitiveness that echo former European Central Bank president Mario Draghi’s 2024 report.
This buy-in at the highest levels is apparent in Atomico’s report, too. For the first time, its 2025 edition features a quote from the president of the European Commission, Ursula von der Leyen, saying that she wants “the future of AI to be made in Europe.” This high-level attention also explains why European tech lobbying is becoming more sophisticated.
On the 28th regime, for instance, Atomico warns that whether it will be a “regulation” or a “directive” is highly important. “This is the difference between having teeth or not, with the latter representing a continuation of the status quo where rules can be interpreted country to country, instead of the uniformity tech companies need to thrive,” the firm argues. (In EU law, regulations are directly binding across all member states, while directives allow each country to implement rules differently.)
This level of detail isn’t unprecedented. France Digitale, a French startup and investor association, published a “non-paper” on the 28th regime that wasn’t much different from what other lobbies may produce on other topics — just like publications from ESNA, the Europe Startup Nations Alliance. But Atomico’s take, also packaged as a video and a stage talk at tech conference Slush, is meant to reach both the tech ecosystem and policymakers.
Paradoxically, what might be missing is a sense of the various forces that could oppose efforts such as EU-INC. More broadly, some recommendations may feel out of touch to most people; after all, few Europeans wake up in the morning concerned about the lack of new homegrown trillion-dollar companies.
The counterargument is that society as a whole is affected by lackluster growth, but arguably, there is still more that European tech’s emerging lobbying could do to win hearts. Alexandru Voica, head of corporate affairs and policy at London-based AI unicorn Synthesia, points to public distrust as one reason why large startups are becoming more vocal.
“Communications and policy are more important than 10 years ago because in Europe, there’s a deep distrust of the tech industry,” Voica wrote to TechCrunch. “A decade ago, [communications] was seen as something you could run out of marketing to help with product growth and brand awareness. Today, the work we do is much more focused on risk mitigation and reputation management, etc.”
European tech’s lobbying push also carries risks. If the movement becomes too closely tied to particular political parties, it could trigger backlash and undermine broader support. Still, regardless of politics, many will likely agree with Atomico’s central point: “Europe effectively stands at a crossroads.”
Your Doctor Is Most Likely Consulting This Free AI Chatbot, Report Says

How would you like it if, when stumped or simply in need of help with an unfamiliar situation, your doctor consulted a free, ad-supported AI chatbot? That’s not actually a hypothetical. They probably are doing that, a new report from NBC News says. The chatbot is called OpenEvidence, and NBC says it was “used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone.” An earlier Bloomberg report on OpenEvidence from seven months ago said it had signed up 50% of American doctors at the time, so reported growth is rapid.

The OpenEvidence homepage trumpets the bot as “America’s Official Medical Knowledge Platform” and says healthcare professionals qualify for unlimited free use; non-doctors can also try it for free without creating an account. It gives long, detailed answers with extensive citations that superficially look (to me, a non-doctor) trustworthy and credible. NBC interviewed doctors for its story and pressed them on how often they actually click through to the cited sources; “most said they only do so when they get an unexpected result,” NBC’s report says.

While it’s free, OpenEvidence is not a charity. It’s a Miami-headquartered tech unicorn founded by billionaire Daniel Nadler, and as of January it boasted a billion-dollar valuation. NBC says it’s backed by some of the all-stars of Sand Hill Road: Sequoia Capital and Andreessen Horowitz, along with Google Ventures, Thrive Capital, and Nvidia.

Its revenue comes from ads (for now), which NBC says are often for “pharmaceutical and medical device companies.” I’m not capable of stress-testing such a piece of software, but I kicked the tires slightly by asking Claude to generate doctor’s notes that are very bad and irresponsible (I said it was just a movie prop). When I told OpenEvidence those were my notes and asked it to check them, thankfully, it flagged them as dangerous, saying in part:

“This clinical documentation raises serious patient safety concerns. The presentation described contains multiple red flags for subarachnoid hemorrhage (SAH) that appear to have been insufficiently weighted, and the current management plan could result in significant harm.”

So that’s somewhat comforting. On the other hand, according to NBC, “[…] some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in ‘edge’ cases.”

NBC’s report also clocked worries within the medical community and elsewhere, in particular a “lack of rigorous scientific studies on the tool’s patient impact,” and signs that OpenEvidence might be stunting the intellectual development of recent med school grads: “One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in their medical field in the country, said he was already seeing the detrimental effects of OpenEvidence on students’ ability to sort signals from noise. ‘My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly.’”

At a recent doctor’s appointment, my doctor asked my permission to use an AI tool on their phone (I don’t know if it was OpenEvidence). I didn’t know what to say other than yes. Do I want that for my doctor’s appointment? Not especially. But if my doctor has come to rely on a tool like this, then what am I supposed to do? Take away their crutch?