Amazon Delivery Drones Involve a Perilous 10-Foot Drop. Users Are Posting the Apparent Results
It seemed like this might have been a rare week without a “robot doing dumb shit” story—but never fear, because Jeff Bezos’s squadron of airborne delivery drones is here to save the day. Prime Air, Amazon’s drone delivery service, has been rolling out in a number of US cities over the last few months and—surprise!—it looks like they kinda suck compared to their human equivalents. (And that’s really saying something considering those human equivalents’ propensity for running up to your front door, whacking a “Sorry we missed you!” sticker right below the note you left saying “BUZZER DOESN’T WORK—PLEASE CALL THIS NUMBER”, and then driving away.)*

Still, whatever else your local Amazon driver might do, one thing they won’t do is casually drop your precious package onto concrete from 10 feet in the air. Amazon’s drones, however…

Several videos have surfaced of late, apparently showing Amazon-branded drones hovering above customers’ driveways/stoops/etc. and then just dropping their cargo onto the ground below. In one video, YouTuber Tamara Hancock orders a plastic bottle of blue raspberry syrup—which is apparently a substance one can order—and watches as the drone dumps it unceremoniously on her driveway. She opens the package and, sure enough, the video shows a smashed and leaking screw top.

Given the unholy racket these things make, you can probably hear them approaching a mile off, so perhaps the best course of action is to just run outside and try to catch your package before it smashes into the ground. This isn’t the seamless service Amazon promised, but then again, it’s not all that different from waiting to hear the delivery driver approaching and then booking it outside to grab your package before he gets a chance to whack the dreaded “Sorry we missed you!” sticker on your door. Plus ça change, etc.

Anyway, it’s not easy to see how this issue might be mitigated. The obvious answer is “hover closer to the ground,” but given delivery robots’ record of failing to detect obstacles in their path, it feels like that strategy would eventually result in a headline like “Florida Grandmother Beheaded by Drone As She Tries to Collect Her Order of Trump Memorabilia.”

All jokes aside, the question of how drones actually avoid doing things like beheading grandmothers is, unsurprisingly, controversial. Last week Chad Butler, a former head of information security at Amazon’s commercial drone program, posted a video about the regulatory regime surrounding drones like the ones Amazon uses, which are referred to as “beyond visual line of sight,” or BVLOS, drones. As the name suggests, these drones fly autonomously, beyond the line of sight of a human operator—and without a human directing them, they need some way to ensure they don’t fly into a wall.

Butler explains that there are two competing schools of thought about how to do this. The first requires the use of a system called ADS-B, which maintains a constant broadcast of the drone’s altitude, heading, and airspeed, creating a sort of virtual environment that lets every drone know where every other drone is. The second, championed by Amazon, is more like the technology used on ground-based robots—it relies on onboard “detect and avoid” systems like cameras and radar, which allow drones to “see” what’s around them and navigate themselves around obstacles.

Amazon recently left the Commercial Drone Alliance, which advocates for the first system, and Butler actually endorses his former employer’s stance. He argues that if drones are constantly broadcasting an unencrypted record of their position, and they have no independent on-board methods to verify that position, then it becomes pretty easy for hackers to hijack them by simply spoofing a GPS signal. This scenario certainly sounds credible—and, frankly, kinda frightening. (Reassuringly, Butler says, “This is not a drone problem—it’s a design pattern problem, and I see it everywhere in AI and autonomous system design.” So that’s great.)
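Butler’s hijacking worry boils down to a missing authentication layer. As a toy illustration only—this is hypothetical Python, not the real ADS-B wire format, and every field name here is invented—consider what an unauthenticated position broadcast looks like:

```python
# Hypothetical sketch (NOT the real ADS-B message format): an ADS-B-style
# position report is broadcast in the clear, with no signature or other
# authentication, so a receiver has no way to distinguish a genuine
# report from a forged one.
from dataclasses import dataclass, asdict

@dataclass
class PositionReport:
    icao_address: str   # transponder ID, sent in plaintext
    lat: float
    lon: float
    altitude_ft: int
    heading_deg: float
    airspeed_kt: float

def broadcast(report: PositionReport) -> dict:
    """Serialize a report for transmission. Note what is missing:
    no signature, no MAC, nothing a receiver could use to verify
    that the sender really is `icao_address`."""
    return asdict(report)

# A genuine report from a (made-up) drone...
genuine = PositionReport("A1B2C3", 33.75, -84.39, 400, 270.0, 45.0)
# ...and a forged one claiming the same identity at a different altitude.
spoofed = PositionReport("A1B2C3", 33.75, -84.39, 40, 270.0, 45.0)

# Structurally indistinguishable: same fields, same claimed identity,
# and nothing to authenticate either message.
print(set(broadcast(genuine)) == set(broadcast(spoofed)))  # prints True
```

If a drone trusts such broadcasts (or an unauthenticated GPS fix) without any independent onboard way to verify them, an attacker who can transmit a stronger signal effectively gets to decide where the drone thinks everything is.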

Having said that, we’ve seen with ground-based robots that the use of on-board sensors alone is also, um, less than perfect—and if navigating a drone in two dimensions is hard, adding a whole other dimension seems to just increase the difficulty and the chance of things going wrong. And on that note, there does seem to be one straightforward way of avoiding the possibility of a hacked delivery drone delivering a bomb to the White House or something, which is just getting rid of the bloody things. However, capitalism will not abide such good sense, so I guess we’ll just see how this whole thing pans out.

*To be clear, we don’t necessarily blame drivers working to insane schedules for doing this; it is frustrating, though.



Sam Altman was winning on the stand, but it might not be enough

After two weeks of hearing from assorted witnesses that he was a lying snake, the jury finally heard from the lying snake himself: Sam Altman. At the end of his testimony, his lawyer William Savitt asked him how it felt to be accused of stealing a charity.

“We created, through a ton of hard work, this extremely large charity, and I agree you can’t steal it,” Altman said. “Mr. Musk did try to kill it, I guess. Twice.”

Altman was fully in “nice kid from St. Louis” mode, and did a passable impression of a man who was bewildered at what was happening to him. When he stepped down from the stand holding a stack of evidence binders, he even looked a little like a schoolboy. He seemed nervous at the beginning of his direct testimony, though he warmed up fairly quickly. Overall, he seemed to give credible testimony — and at times, it seemed like the jury liked him.

Throughout this trial I’ve had some difficulty imagining what the jury is making of all this because I am a little too familiar with the figures who are testifying. I have heard some audacious lies under oath, like when Elon Musk told us all he doesn’t lose his temper. (He then proceeded to lose his temper on cross-examination.) Or like when Shivon Zilis, the mother of several of his children, told us that she didn’t know Musk was starting xAI — which seemed to be directly contradicted by her text messages. Or when Greg “What will take me to $1B?” Brockman told us he was all about the mission. I certainly believe Altman isn’t trustworthy — I mean, The New Yorker published more than 17,000 words about how much he lies. But unlike with Musk, there are contemporaneous documents backing Altman’s version of the story. At least, mostly.

“My belief is he wanted to have long-term control”

After OpenAI’s Dota 2 win, discussions for a for-profit arm started in earnest. “Mr. Musk felt very strongly that if we were going to form a for-profit he needed to have total control over it initially,” Altman said. “He only trusted himself to make non-obvious decisions that were going to turn out to be correct.”

Altman testified that he was uncomfortable with Musk’s insistence on control, not just because Musk hadn’t been as involved as everyone else, but because OpenAI existed so no one person would control AGI. And at Y Combinator, the startup incubator where he was president, Altman had seen a lot of control fights; no one wanted to give up power when things were going well. With structures like supervoting shares, founders could retain control forever. Curiously, Altman’s example was not the most famous one (Mark Zuckerberg at Meta); it was Musk and SpaceX. When Altman asked Musk about succession plans for OpenAI, he got a particularly “hair-raising” answer: In the event of Musk’s death, Musk said, “I haven’t thought about it a ton, but maybe control should pass to my children.”

I don’t know about that. But I do know that I saw a 2017 email from Altman to Zilis in which he wrote, “I am worried about control. I don’t think any one person should have control of the world’s first AGI — in fact the whole reason we started OpenAI was so that wouldn’t happen.” He went on to say that he didn’t mind the idea of immediate control and was open to “creative structures” — which I understood to mean that, in order to placate Musk, Altman was willing to give him control up to specific milestones in company development.

“I read a vague, like, a lightweight threat in there”

“My belief is he wanted to have long-term control and that he would’ve had that had we agreed to the structure he wanted,” Altman said on the stand. This sounds basically right. In later video testimony from Sam Teller’s deposition, we heard that Musk no longer invests in anything he doesn’t control. This also fits with Musk’s long-term fixation on making sure he can’t get booted from his own company the way he got booted from PayPal.

Musk also tried to recruit Altman to Tesla. We saw texts between Altman and Teller, in which Teller told Altman that Musk was committed to beefing up Tesla’s AI no matter what, and that he hoped that Altman, Brockman, and Ilya Sutskever would want to join eventually. “I read a vague, like, a lightweight threat in there, that he’s gonna do this inside of Tesla with or without you,” Altman said. But he felt that Tesla was primarily a car company — allowing it to acquire OpenAI would betray OpenAI’s mission.

Later, in Teller’s testimony, we saw texts Teller sent to Zilis at 12:40AM on February 4th, 2018: “I don’t love OpenAI continuing without Elon,” he wrote. “Would rather disable it by recruiting the leaders.”

When Musk stopped his quarterly donations, OpenAI was operating on a “shoestring” with an “extremely short runway of cash.” OpenAI did have other donors, none of whom have sued it or joined Musk’s suit. (One donor in the exhibit that wasn’t called out to the courtroom was Alameda Research, the firm owned by Sam Bankman-Fried, who is now in prison for fraud and money laundering.) Musk’s resignation from the board meant, Altman said, that “people wondered if he was gonna try to take, uh, vengeance out on us or something.” On the other hand, Altman said Musk had “demotivated some of our key researchers” and done “huge damage for a long time to the culture of the organization.” So it sure seems like some people were relieved to be rid of him.

I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial

We saw a lot of evidence that throughout the time Altman was setting up OpenAI’s for-profit arm, he kept Musk apprised of what was going on, either directly or through Zilis or Teller. At no point did Musk object, and whatever he said publicly about the Microsoft investments, there was plenty of evidence that privately he’d been made aware.

On cross-examination, we were treated to more than 10 minutes of Steven Molo telling Altman that various and assorted people had called him a liar: Sutskever, Mira Murati, Helen Toner, Tasha McCauley, Daniela and Dario Amodei (former OpenAI employees and founders of Anthropic), employees at Altman’s first startup Loopt, that recent New Yorker article, a book called The Optimist, etc. Molo did score some points by asking Altman about testimony in the trial, which Altman said he wasn’t paying close attention to. Molo acted as though this was inconceivable. Surely someone had informed Altman of what was said?

It was a little funny and also a little tiresome. Altman kept his cool, though, seeming hurt and confused by the focus on whether he was a liar. It was also the most successful part of the cross, which declined in focus precipitously afterward. I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial, and today was pretty bad. At one point, when Molo was trying to capitalize on Altman being both CEO and on the company’s board, Altman said — truthfully — that CEOs are almost always on the boards of the companies they run.

(At this point in my notes, I had written, “Boy, Molo is not very good at this.”)

The point of this trial isn’t to win — it’s to punish Altman, Brockman, and OpenAI

There was also an unconvincing argument about fundraising in nonprofits, specifically that if Stanford could raise $3 billion a year, OpenAI should have remained a nonprofit. Okay, let’s just think about that for a minute. Stanford has a donor network of thousands of graduates. It’s a school, which has very different capital requirements. It is not competing with any reputable for-profit companies. But leave that all aside and assume that some fundraising genius took over at the OpenAI Foundation: $3 billion is the initial two Microsoft investments combined, and not enough to scale OpenAI to where it is now. If compute is the main bottleneck on building AI models, then Molo’s line of argument suggests OpenAI never would have managed to be successful as a nonprofit alone. He’s making the defense’s case for them.

But the thing is, Molo doesn’t actually have to be good at this job, because the point of this trial isn’t to win — though I’m sure Musk wouldn’t mind a win. The point is to punish Altman, Brockman, and OpenAI. Musk has done that pretty thoroughly — reinforcing in the public’s mind that Altman is a liar and a snake. This morning, I read an exclusive in The Wall Street Journal that assorted Republican AGs and the House Oversight committee wanted to look into Sam Altman’s investments. References to the trial are peppered throughout the article.

So yes, Altman was convincing on the stand. He may even win the suit. But it sure seems like Musk’s vengeance has just begun.

Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates.
#Sam #Altman #winning #standAI,OpenAI">Sam Altman was winning on the stand, but it might not be enoughAfter two weeks of hearing from assorted witnesses that he was a lying snake, the jury finally heard from the lying snake himself: Sam Altman. At the end of the testimony, his lawyer William Savitt asked him how it felt to be accused of stealing a charity.“We created, through a ton of hard work, this extremely large charity, and I agree you can’t steal it,” Altman said. “Mr. Musk did try to kill it, I guess. Twice.”Altman was fully in “nice kid from St. Louis” mode, and did a passable impression of a man who was bewildered at what was happening to him. When he stepped down from the stand holding a stack of evidence binders, he even looked a little like a schoolboy. He seemed nervous at the beginning of his direct testimony, though he warmed up fairly quickly. Overall, he seemed to give credible testimony — and at times, it seemed like the jury liked him.Throughout this trial I’ve had some difficulty imagining what the jury is making of all this because I am a little too familiar with the figures who are testifying. I have heard some audacious lies under oath, like when Elon Musk told us all he doesn’t lose his temper. (He then proceeded to lose his temper on cross-examination.) Or like when Shivon Zilis, the mother of several of his children, told us that she didn’t know Musk was starting xAI — which seemed to be directly contradicted by her text messages. Or when Greg “What will take me to B?” Brockman told us he was all about the mission. I certainly believe Altman isn’t trustworthy — I mean, The New Yorker published more than 17,000 words about how much he lies. But unlike with Musk, there are contemporaneous documents backing Altman’s version of the story. At least, mostly.“My belief is he wanted to have long-term control”After OpenAI’s Dota 2 win, discussions for a for-profit arm started in earnest. “Mr. 
Musk felt very strongly that if we were going to form a for-profit he needed to have total control over it initially,” Altman said. “He only trusted himself to make non-obvious decisions that were going to turn out to be correct.”Altman testified that he was uncomfortable with Musk’s insistence on control, not just because Musk hadn’t been as involved as everyone else, but because OpenAI existed so no one person would control AGI. And at Y Combinator, the startup incubator where he was president, Altman had seen a lot of control fights; no one wanted to give up power when things were going well. With structures like supervoting shares, founders could retain control forever. Curiously, Altman’s example was not the most famous one (Mark Zuckerberg at Meta); it was Musk and SpaceX. When Altman asked Musk about succession plans for OpenAI, he got a particularly “hair-raising” answer: In the event of Musk’s death, Musk said, “I haven’t thought about it a ton, but maybe control should pass to my children.”I don’t know about that. But I do know that I saw a 2017 email from Altman to Zilis in which he wrote, “I am worried about control. I don’t think any one person should have control of the world’s first AGI — in fact the whole reason we started OpenAI was so that wouldn’t happen.” He went on to say that he didn’t mind the idea of immediate control and was open to “creative structures” — which I understood to mean that, in order to placate Musk, Altman was willing to give him control up to specific milestones in company development.“I read a vague, like, a lightweight threat in there”“My belief is he wanted to have long-term control and that he would’ve had that had we agreed to the structure he wanted,” Altman said on the stand. This sounds basically right. In later video testimony from Sam Teller’s deposition, we heard that Musk no longer invests in anything he doesn’t control. 
This also fits with Musk’s long-term fixation on making sure he can’t get booted from his own company the way he got booted from PayPal.Musk also tried to recruit Altman to Tesla. We saw texts between Altman and Teller, in which Teller told Altman that Musk was committed to beefing up Tesla’s AI no matter what, and that he hoped that Altman, Brockman, and Ilya Sutskever would want to join eventually. “I read a vague, like, a lightweight threat in there, that he’s gonna do this inside of Tesla with or without you,” Altman said. But he felt that Tesla was primarily a car company — allowing it to acquire OpenAI would betray OpenAI’s mission.Later, in Teller’s testimony, we saw texts Teller sent to Zilis at 12:40AM on February 4th, 2018: “I don’t love OpenAI continuing without Elon,” he wrote. “Would rather disable it by recruiting the leaders.”When Musk stopped his quarterly donations, OpenAI was operating on a “shoestring” with an “extremely short runway of cash.” OpenAI did have other donors, none of whom have sued it or joined Musk’s suit. (One donor in the exhibit that wasn’t called out to the courtroom was Alameda Research, the firm owned by Sam Bankman-Fried, who is now in prison for fraud and money laundering.) Musk’s resignation from the board meant “people wondered if he was gonna try to take, uh, vengeance out on us or something.” On the other hand, Altman said Musk had “demotivated some of our key researchers” and done “huge damage for a long time to the culture of the organization.” So it sure seems like some people were relieved to be rid of him.I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trialWe saw a lot of evidence that throughout the time Altman was setting up OpenAI’s for-profit arm, he kept Musk apprised of what was going on, either directly or through Zilis or Teller. 
At no point did Musk object, and whatever he said publicly about the Microsoft investments, there was plenty of evidence that privately he’d been made aware.On the cross-examination, we were treated to more than 10 minutes of Steven Molo telling Altman that various and assorted people had called him a liar: Sutskever, Mira Murati, Helen Toner, Tasha McCauley, Daniela and Dario Amodei (former OpenAI employees and founders of Anthropic), employees at Altman’s first startup Loopt, that recent New Yorker article, a book called The Optimist, etc. Molo did score some points by asking Altman about testimony in the trial, which Altman said he wasn’t paying close attention to. Molo acted as though this was inconceivable. Surely someone had informed Altman of what was said?It was a little funny and also a little tiresome. Altman kept his cool, though, seeming hurt and confused by the focus on whether he was a liar. It was also the most successful part of the cross, which declined in focus precipitously afterward. I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial, and today was pretty bad. At one point, when Molo was trying to capitalize on Altman being both CEO and on the company’s board, Altman said — truthfully — that CEOs are almost always on the boards of the companies they run.(At this point in my notes, I had written, “Boy, Molo is not very good at this.”)The point of this trial isn’t to win — it’s to punish Altman, Brockman, and OpenAIThere was also an unconvincing argument about fundraising in nonprofits, specifically that if Stanford could raise  billion a year, OpenAI should have remained a nonprofit. Okay, let’s just think about that for a minute. Stanford has a donor network of thousands of graduates. It’s a school, which has very different capital requirements. It is not competing with any reputable for-profit companies. 
But leave that all aside and assume that some fundraising genius took over at the OpenAI Foundation:  billion is the initial two Microsoft investments combined, and not enough to scale OpenAI to where it is now. If compute is the main bottleneck on building AI models, then Molo’s line of argument suggests OpenAI never would have managed to be successful as a nonprofit alone. He’s making the defense’s case for them.But the thing is, Molo doesn’t actually have to be good at this job, because the point of this trial isn’t to win — though I’m sure Musk wouldn’t mind a win. The point is to punish Altman, Brockman, and OpenAI. Musk has done that pretty thoroughly — reinforcing in the public’s mind that Altman is a liar and a snake. This morning, I read an exclusive in The Wall Street Journal that assorted Republican AGs and the House Oversight committee wanted to look into Sam Altman’s investments. References to the trial are peppered throughout the article.So yes, Altman was convincing on the stand. He may even win the suit. But it sure seems like Musk’s vengeance has just begun.Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates.Elizabeth LopattoCloseElizabeth LopattoPosts from this author will be added to your daily email digest and your homepage feed.FollowFollowSee All by Elizabeth LopattoAICloseAIPosts from this topic will be added to your daily email digest and your homepage feed.FollowFollowSee All AIOpenAICloseOpenAIPosts from this topic will be added to your daily email digest and your homepage feed.FollowFollowSee All OpenAI#Sam #Altman #winning #standAI,OpenAI

Elon Musk told us all he doesn’t lose his temper. (He then proceeded to lose his temper on cross-examination.) Or like when Shivon Zilis, the mother of several of his children, told us that she didn’t know Musk was starting xAI — which seemed to be directly contradicted by her text messages. Or when Greg “What will take me to $1B?” Brockman told us he was all about the mission. I certainly believe Altman isn’t trustworthy — I mean, The New Yorker published more than 17,000 words about how much he lies. But unlike with Musk, there are contemporaneous documents backing Altman’s version of the story. At least, mostly.

“My belief is he wanted to have long-term control”

After OpenAI’s Dota 2 win, discussions for a for-profit arm started in earnest. “Mr. Musk felt very strongly that if we were going to form a for-profit he needed to have total control over it initially,” Altman said. “He only trusted himself to make non-obvious decisions that were going to turn out to be correct.”

Altman testified that he was uncomfortable with Musk’s insistence on control, not just because Musk hadn’t been as involved as everyone else, but because OpenAI existed so no one person would control AGI. And at Y Combinator, the startup incubator where he was president, Altman had seen a lot of control fights; no one wanted to give up power when things were going well. With structures like supervoting shares, founders could retain control forever. Curiously, Altman’s example was not the most famous one (Mark Zuckerberg at Meta); it was Musk and SpaceX. When Altman asked Musk about succession plans for OpenAI, he got a particularly “hair-raising” answer: In the event of Musk’s death, Musk said, “I haven’t thought about it a ton, but maybe control should pass to my children.”

I don’t know about that. But I do know that I saw a 2017 email from Altman to Zilis in which he wrote, “I am worried about control. I don’t think any one person should have control of the world’s first AGI — in fact the whole reason we started OpenAI was so that wouldn’t happen.” He went on to say that he didn’t mind the idea of immediate control and was open to “creative structures” — which I understood to mean that, in order to placate Musk, Altman was willing to give him control up to specific milestones in company development.

“I read a vague, like, a lightweight threat in there”

“My belief is he wanted to have long-term control and that he would’ve had that had we agreed to the structure he wanted,” Altman said on the stand. This sounds basically right. In later video testimony from Sam Teller’s deposition, we heard that Musk no longer invests in anything he doesn’t control. This also fits with Musk’s long-term fixation on making sure he can’t get booted from his own company the way he got booted from PayPal.

Musk also tried to recruit Altman to Tesla. We saw texts between Altman and Teller, in which Teller told Altman that Musk was committed to beefing up Tesla’s AI no matter what, and that he hoped that Altman, Brockman, and Ilya Sutskever would want to join eventually. “I read a vague, like, a lightweight threat in there, that he’s gonna do this inside of Tesla with or without you,” Altman said. But he felt that Tesla was primarily a car company — allowing it to acquire OpenAI would betray OpenAI’s mission.

Later, in Teller’s testimony, we saw texts Teller sent to Zilis at 12:40AM on February 4th, 2018: “I don’t love OpenAI continuing without Elon,” he wrote. “Would rather disable it by recruiting the leaders.”

When Musk stopped his quarterly donations, OpenAI was operating on a “shoestring” with an “extremely short runway of cash.” OpenAI did have other donors, none of whom have sued it or joined Musk’s suit. (One donor in the exhibit that wasn’t called out to the courtroom was Alameda Research, the firm owned by Sam Bankman-Fried, who is now in prison for fraud and money laundering.) Musk’s resignation from the board meant “people wondered if he was gonna try to take, uh, vengeance out on us or something.” On the other hand, Altman said Musk had “demotivated some of our key researchers” and done “huge damage for a long time to the culture of the organization.” So it sure seems like some people were relieved to be rid of him.

I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial

We saw a lot of evidence that throughout the time Altman was setting up OpenAI’s for-profit arm, he kept Musk apprised of what was going on, either directly or through Zilis or Teller. At no point did Musk object, and whatever he said publicly about the Microsoft investments, there was plenty of evidence that privately he’d been made aware.

On the cross-examination, we were treated to more than 10 minutes of Steven Molo telling Altman that various and assorted people had called him a liar: Sutskever, Mira Murati, Helen Toner, Tasha McCauley, Daniela and Dario Amodei (former OpenAI employees and founders of Anthropic), employees at Altman’s first startup Loopt, that recent New Yorker article, a book called The Optimist, etc. Molo did score some points by asking Altman about testimony in the trial, which Altman said he wasn’t paying close attention to. Molo acted as though this was inconceivable. Surely someone had informed Altman of what was said?

It was a little funny and also a little tiresome. Altman kept his cool, though, seeming hurt and confused by the focus on whether he was a liar. It was also the most successful part of the cross, which declined in focus precipitously afterward. I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial, and today was pretty bad. At one point, when Molo was trying to capitalize on Altman being both CEO and on the company’s board, Altman said — truthfully — that CEOs are almost always on the boards of the companies they run.

(At this point in my notes, I had written, “Boy, Molo is not very good at this.”)

The point of this trial isn’t to win — it’s to punish Altman, Brockman, and OpenAI

There was also an unconvincing argument about fundraising in nonprofits, specifically that if Stanford could raise $3 billion a year, OpenAI should have remained a nonprofit. Okay, let’s just think about that for a minute. Stanford has a donor network of thousands of graduates. It’s a school, which has very different capital requirements. It is not competing with any reputable for-profit companies. But leave that all aside and assume that some fundraising genius took over at the OpenAI Foundation: $3 billion is the initial two Microsoft investments combined, and not enough to scale OpenAI to where it is now. If compute is the main bottleneck on building AI models, then Molo’s line of argument suggests OpenAI never would have managed to be successful as a nonprofit alone. He’s making the defense’s case for them.

But the thing is, Molo doesn’t actually have to be good at this job, because the point of this trial isn’t to win — though I’m sure Musk wouldn’t mind a win. The point is to punish Altman, Brockman, and OpenAI. Musk has done that pretty thoroughly — reinforcing in the public’s mind that Altman is a liar and a snake. This morning, I read an exclusive in The Wall Street Journal that assorted Republican AGs and the House Oversight committee wanted to look into Sam Altman’s investments. References to the trial are peppered throughout the article.

So yes, Altman was convincing on the stand. He may even win the suit. But it sure seems like Musk’s vengeance has just begun.

Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates.

Sam Altman was winning on the stand, but it might not be enough

After two weeks of hearing from assorted witnesses that he was a lying snake, the jury finally heard from the lying snake himself: Sam Altman. At the end of the testimony, his lawyer William Savitt asked him how it felt to be accused of stealing a charity.

“We created, through a ton of hard work, this extremely large charity, and I agree you can’t steal it,” Altman said. “Mr. Musk did try to kill it, I guess. Twice.”

Altman was fully in “nice kid from St. Louis” mode, and did a passable impression of a man who was bewildered at what was happening to him. When he stepped down from the stand holding a stack of evidence binders, he even looked a little like a schoolboy. He seemed nervous at the beginning of his direct testimony, though he warmed up fairly quickly. Overall, he seemed to give credible testimony — and at times, it seemed like the jury liked him.

Throughout this trial I’ve had some difficulty imagining what the jury is making of all this because I am a little too familiar with the figures who are testifying. I have heard some audacious lies under oath, like when Elon Musk told us all he doesn’t lose his temper. (He then proceeded to lose his temper on cross-examination.) Or like when Shivon Zilis, the mother of several of his children, told us that she didn’t know Musk was starting xAI — which seemed to be directly contradicted by her text messages. Or when Greg “What will take me to $1B?” Brockman told us he was all about the mission. I certainly believe Altman isn’t trustworthy — I mean, The New Yorker published more than 17,000 words about how much he lies. But unlike with Musk, there are contemporaneous documents backing Altman’s version of the story. At least, mostly.

“My belief is he wanted to have long-term control”

After OpenAI’s Dota 2 win, discussions for a for-profit arm started in earnest. “Mr. Musk felt very strongly that if we were going to form a for-profit he needed to have total control over it initially,” Altman said. “He only trusted himself to make non-obvious decisions that were going to turn out to be correct.”

Altman testified that he was uncomfortable with Musk’s insistence on control, not just because Musk hadn’t been as involved as everyone else, but because OpenAI existed so no one person would control AGI. And at Y Combinator, the startup incubator where he was president, Altman had seen a lot of control fights; no one wanted to give up power when things were going well. With structures like supervoting shares, founders could retain control forever. Curiously, Altman’s example was not the most famous one (Mark Zuckerberg at Meta); it was Musk and SpaceX. When Altman asked Musk about succession plans for OpenAI, he got a particularly “hair-raising” answer: In the event of Musk’s death, Musk said, “I haven’t thought about it a ton, but maybe control should pass to my children.”

I don’t know about that. But I do know that I saw a 2017 email from Altman to Zilis in which he wrote, “I am worried about control. I don’t think any one person should have control of the world’s first AGI — in fact the whole reason we started OpenAI was so that wouldn’t happen.” He went on to say that he didn’t mind the idea of immediate control and was open to “creative structures” — which I understood to mean that, in order to placate Musk, Altman was willing to give him control up to specific milestones in company development.

“I read a vague, like, a lightweight threat in there”

“My belief is he wanted to have long-term control and that he would’ve had that had we agreed to the structure he wanted,” Altman said on the stand. This sounds basically right. In later video testimony from Sam Teller’s deposition, we heard that Musk no longer invests in anything he doesn’t control. This also fits with Musk’s long-term fixation on making sure he can’t get booted from his own company the way he got booted from PayPal.

Musk also tried to recruit Altman to Tesla. We saw texts between Altman and Teller, in which Teller told Altman that Musk was committed to beefing up Tesla’s AI no matter what, and that he hoped that Altman, Brockman, and Ilya Sutskever would want to join eventually. “I read a vague, like, a lightweight threat in there, that he’s gonna do this inside of Tesla with or without you,” Altman said. But he felt that Tesla was primarily a car company — allowing it to acquire OpenAI would betray OpenAI’s mission.

Later, in Teller’s testimony, we saw texts Teller sent to Zilis at 12:40AM on February 4th, 2018: “I don’t love OpenAI continuing without Elon,” he wrote. “Would rather disable it by recruiting the leaders.”

When Musk stopped his quarterly donations, OpenAI was operating on a “shoestring” with an “extremely short runway of cash.” OpenAI did have other donors, none of whom have sued it or joined Musk’s suit. (One donor in the exhibit that wasn’t called out to the courtroom was Alameda Research, the firm owned by Sam Bankman-Fried, who is now in prison for fraud and money laundering.) Musk’s resignation from the board meant “people wondered if he was gonna try to take, uh, vengeance out on us or something.” On the other hand, Altman said Musk had “demotivated some of our key researchers” and done “huge damage for a long time to the culture of the organization.” So it sure seems like some people were relieved to be rid of him.

I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial

We saw a lot of evidence that throughout the time Altman was setting up OpenAI’s for-profit arm, he kept Musk apprised of what was going on, either directly or through Zilis or Teller. At no point did Musk object, and whatever he said publicly about the Microsoft investments, there was plenty of evidence that privately he’d been made aware.

On the cross-examination, we were treated to more than 10 minutes of Steven Molo telling Altman that various and assorted people had called him a liar: Sutskever, Mira Murati, Helen Toner, Tasha McCauley, Daniela and Dario Amodei (former OpenAI employees and founders of Anthropic), employees at Altman’s first startup Loopt, that recent New Yorker article, a book called The Optimist, etc. Molo did score some points by asking Altman about testimony in the trial, which Altman said he wasn’t paying close attention to. Molo acted as though this was inconceivable. Surely someone had informed Altman of what was said?

It was a little funny and also a little tiresome. Altman kept his cool, though, seeming hurt and confused by the focus on whether he was a liar. It was also the most successful part of the cross, which declined in focus precipitously afterward. I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial, and today was pretty bad. At one point, when Molo was trying to capitalize on Altman being both CEO and on the company’s board, Altman said — truthfully — that CEOs are almost always on the boards of the companies they run.

(At this point in my notes, I had written, “Boy, Molo is not very good at this.”)

The point of this trial isn’t to win — it’s to punish Altman, Brockman, and OpenAI

There was also an unconvincing argument about fundraising in nonprofits, specifically that if Stanford could raise $3 billion a year, OpenAI should have remained a nonprofit. Okay, let’s just think about that for a minute. Stanford has a donor network of thousands of graduates. It’s a school, which has very different capital requirements. It is not competing with any reputable for-profit companies. But leave that all aside and assume that some fundraising genius took over at the OpenAI Foundation: $3 billion is the initial two Microsoft investments combined, and not enough to scale OpenAI to where it is now. If compute is the main bottleneck on building AI models, then Molo’s line of argument suggests OpenAI never would have managed to be successful as a nonprofit alone. He’s making the defense’s case for them.

But the thing is, Molo doesn’t actually have to be good at this job, because the point of this trial isn’t to win — though I’m sure Musk wouldn’t mind a win. The point is to punish Altman, Brockman, and OpenAI. Musk has done that pretty thoroughly — reinforcing in the public’s mind that Altman is a liar and a snake. This morning, I read an exclusive in The Wall Street Journal reporting that assorted Republican AGs and the House Oversight Committee wanted to look into Sam Altman’s investments. References to the trial are peppered throughout the article.

So yes, Altman was convincing on the stand. He may even win the suit. But it sure seems like Musk’s vengeance has just begun.

Kevin Hartz’s A* just closed its third fund with $450 million | TechCrunch

Early-stage venture firm A* on Tuesday announced a $450 million Fund III. The firm takes a generalist approach, backing companies across categories including AI applications, fintech, healthcare, and security.
The average check size for this fund will be between $3 million and $5 million, with the aim to back at least 30 startups. The capital will be deployed over the next two to three years, as with the firm’s previous funds. Limited partners include nonprofits, foundations, and endowments; Carnegie Mellon University is among the publicly named backers.

A*, founded in 2020 and run by Kevin Hartz and Bennet Siegel, previously raised a $315 million Fund II in 2024 and a $300 million Fund I in 2021. Hartz is a serial entrepreneur best known for co-founding Xoom, the international money-transfer service PayPal later acquired for $1.1 billion in 2015, and Eventbrite, the event-ticketing platform that went public in 2018. Siegel came up through Boston Consulting Group and Altamont Capital Partners before spending four years as a partner at Coatue Management.

The firm has also drawn attention for backing unusually young founders, even as the practice has become more common. Hartz told TechCrunch last fall that close to 20% of the firm’s current portfolio involves teenage entrepreneurs. Among its other investments, it has backed the fintech company Ramp and the AI firm Mercor.

This story was updated to clarify the name of the firm.
