For decades, Silicon Valley has valorized the college dropout. Founders like Bill Gates, Steve Jobs, and Mark Zuckerberg left school early to build companies and became billionaires.
That ethos was later institutionalized through initiatives like the Thiel Fellowship, which famously pays promising students $100,000 to leave college and start companies.
For many years, the famed accelerator Y Combinator also quietly reinforced that culture. While it never explicitly required students to drop out, many of its most successful alumni, including Dropbox’s Drew Houston, Reddit’s Steve Huffman, and Stripe’s John and Patrick Collison, joined the program young and left school behind to build their companies.
Now, YC is changing that narrative.
The accelerator has introduced a new application track called Early Decision, designed for students who want to start companies but don’t want to drop out. The program allows them to apply while still in school, get accepted and funded immediately, and defer their participation in YC until after they graduate. For example, a student applying in Fall 2025 could graduate in Spring 2026, then participate in YC’s Summer 2026 batch.
“It’s designed for graduating seniors who want to do a startup but also want to finish school first,” said YC managing partner Jared Friedman in the launch video.
Friedman added that the idea for Early Decision came from conversations with students. “Between AI Startup School last summer and the more than 20 university trips we’ve done over the past year, we’ve had a lot of opportunities to do that. One of YC’s most common pieces of advice is to ‘talk to your users,’ and we follow it ourselves,” he told TechCrunch over email.
In Silicon Valley culture, dropping out has been almost a rite of passage for aspiring founders. Programs like the Thiel Fellowship have turned it into a movement (though it’s worth noting that Peter Thiel himself did not drop out; he earned both undergraduate and law degrees from Stanford).
That’s why YC’s announcement is a meaningful break from the mythos that leaving school early is the optimal, or only, path to startup success. The timing is also notable: more young people are questioning both the cost of college and the tradeoffs of staying in school.
The new program also reflects a growing maturity in how YC thinks about long-term founder outcomes.
The accelerator has long been a magnet for college-aged builders. Founders of Loom, Instacart, Rappi, and Brex were in their teens or early twenties when they joined the program. But the decision to drop out was often implicit: do the program now or miss the opportunity.
Early Decision removes that pressure, offering a middle ground between finishing a degree and jumping straight into entrepreneurship. The move could broaden YC’s applicant pool to include more cautious, deliberate student founders who are committed to startup life but unwilling to sacrifice their education to get there.
In its announcement, YC highlights Sneha Sivakumar and Anushka Nijhawan, the co-founders of Spur, as a success story for this approach. Spur builds AI-powered quality-assurance testing tools, and the duo applied to YC through Early Decision in Fall 2023 while still in school. They graduated in May 2024, joined the Summer 2024 YC batch, and have since raised $4.5 million.
YC notes that the program is open to both graduating students and those earlier in their academic journey. It’s a bet that some of the best founders of the next decade won’t need to choose between college and startups. They’ll do both.
The move also helps YC secure talent early in an increasingly competitive accelerator and seed-funding landscape, giving students an option that competes with programs like the Thiel Fellowship, Neo Scholars, and Founders Inc, as well as with Big Tech internships and grad school pipelines.