As China’s 996 culture spreads, South Korea’s tech sector grapples with 52-hour limit | TechCrunch

As the world races to stay ahead in the deep tech revolution — from AI and semiconductors to quantum computing — innovation has become the new currency of power. For many companies, that pressure has translated into heavier workloads and more intense work cultures. Yet they face a real dilemma: they can’t simply ease up while competitors across the globe push harder to win.

When I came across news about the intense “996” work culture — working 9 am to 9 pm, six days a week, a 72-hour work week — spreading from China to Silicon Valley, it made me wonder how different countries approach work hours and workplace cultures in the tech industry. I was especially curious about how things compare here in South Korea, where I’m currently based.

In South Korea, the standard workweek is 40 hours, with up to 12 hours of overtime, usually paid at 1.5 times the regular rate or more. Employers who violate these rules risk fines, executive imprisonment, and civil liability.
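As a back-of-the-envelope illustration, the pay rule above can be sketched in a few lines of Python. The hourly wage is a made-up figure, and real payroll calculations involve more nuance; this only shows the 40-hour base, the 12-hour overtime ceiling, and the 1.5x premium.

```python
def weekly_pay(hours_worked: float, hourly_rate: float) -> float:
    """Gross weekly pay under a 40-hour standard week with overtime at 1.5x,
    capped at 52 total hours (40 standard + up to 12 overtime)."""
    if hours_worked > 52:
        raise ValueError("exceeds the 52-hour weekly limit")
    regular = min(hours_worked, 40)
    overtime = max(hours_worked - 40, 0)
    return regular * hourly_rate + overtime * hourly_rate * 1.5

# Example: 48 hours at a notional 20,000 KRW/hour
# 40 * 20,000 + 8 * 30,000 = 1,040,000 KRW
print(weekly_pay(48, 20_000))
```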

The 52-hour workweek, introduced in 2018 for large companies with over 300 employees and public institutions, was gradually extended to all businesses and fully took effect on January 1, 2025.

Earlier this year, South Korea rolled out a special extended work program that lets employees work up to 64 hours a week, beyond the 52-hour limit, with both worker consent and government approval. For deep tech sectors like semiconductors, approval periods were temporarily extended from three to six months, though local media reports suggest that only a few companies actually took advantage of the program. Looking ahead, the South Korean government plans to scale back these special exemptions and tighten working-hour regulations, even as some lawmakers argue that the current guidelines are sufficient.

TechCrunch spoke with several tech investors and founders based in South Korea about how the 52-hour workweek limit affects their businesses and their R&D projects as they try to compete with global companies.

“The 52-hour workweek is indeed a challenging factor when making investment decisions in deep tech sectors,” Yongkwan Lee, CEO of South Korea-based venture capital firm Bluepoint Partners, told TechCrunch. “This is particularly relevant when investing in globally competitive sectors like semiconductors, artificial intelligence, and quantum computing. Labor challenges are particularly complex in these sectors, where founders and teams often face intense workloads and long hours during critical growth phases.”


At Bluepoint, early-stage investments are often made before the underlying technologies are fully developed or products are ready for market. In this context, Lee noted that strict limits on working hours could potentially impact the pace at which key business milestones are reached.

In South Korea, 70.4% of startup employees said they would be willing to work beyond the 52-hour weekly limit if adequate compensation were provided, per local reports.

Bohyung Kim, CTO of LeMong, a South Korean startup backed by LG Uplus that delivers agentic AI solutions to more than 13,000 small and medium-sized enterprises in the food and beverage sector, said the country’s 52-hour workweek system often feels more like a restriction than a protection.

“Engineers work to find practical solutions to complex problems,” Kim said. “Our work isn’t about completing predefined tasks within fixed hours. It’s about using creativity and deep focus to solve challenges and create new value. When an idea strikes or a technical breakthrough happens, the concept of time disappears. If a system forces you to stop at that moment, it breaks the flow and can actually reduce efficiency.”

Kim added that while short-term, intense focus is crucial as project deadlines approach or when refining key algorithms, rigid legal limits can sometimes get in the way, and the impact varies with the kind of engineering role someone holds. “Even among engineers, production roles in manufacturing differ from R&D positions,” Kim explained. “In manufacturing, productivity is directly linked to working hours, so schedules need to account for industrial safety. Overtime should also be fairly compensated.”

When asked about workplace flexibility, Huiyong Lee, co-founder of LeMong, which makes comment management software, said he thinks figuring out a monthly average would be more practical than adhering strictly to the country’s 52-hour weekly limit. He noted that work intensity often varies depending on the stage of R&D and project timelines in deep tech companies.

“For companies like ours, intensive development efforts are often required for approximately two weeks prior to a product launch, after which the workload eases once the product stabilizes,” Lee said. “A system with monthly flexibility would allow us to work around 60 hours per week before a launch and 40 hours per week afterward, maintaining an average of 52 hours while ensuring operational efficiency,” Lee continued. “I also believe it is worth considering differentiated standards for deep tech and R&D-focused companies. At the same time, for startups with fewer than 10–20 employees, it is essential to establish more flexible criteria to accommodate their unique operational needs.”
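Lee’s monthly-averaging proposal, which is hypothetical here rather than current law, amounts to a simple check: peak weeks are acceptable as long as the average over the period stays at or below 52 hours. A minimal sketch:

```python
def within_monthly_average(weekly_hours: list[float], cap: float = 52.0) -> bool:
    """True if the average weekly hours over the period stay within the cap."""
    return sum(weekly_hours) / len(weekly_hours) <= cap

# Two 60-hour crunch weeks before a launch, two 44-hour weeks after:
# (60 + 60 + 44 + 44) / 4 = 52, exactly at the cap.
print(within_monthly_average([60, 60, 44, 44]))  # True
# The current weekly rule, by contrast, fails each 60-hour week individually.
print(all(h <= 52 for h in [60, 60, 44, 44]))  # False
```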

Kim also noted that there is a clear link between performance and hours worked. High-performing team members often tend to put in longer hours, he said. But rather than seeking rewards for the extra time, these top performers focus on achieving results and advancing quickly within the company.

“Engineers are far more motivated to dive in when their efforts are recognized, whether through performance bonuses, stock options, or acknowledgment of technical contributions,” Kim said. “In high-tech, R&D, and IT industries, as well as in globally competitive firms where technical expertise is key, decisions about flexible work hours should be driven by market logic.”

Another Seoul-based venture capitalist downplayed the impact of the 52-hour workweek limit on investment decisions.

“At the moment, there don’t appear to be any major concerns. While it’s always difficult to predict how labor regulations or monitoring practices might evolve, many venture companies today do not strictly track employees’ working hours. To my understanding, there’s currently no requirement for companies to submit formal evidence proving that employees stay within the 52-hour weekly limit.”

If an employee were to file a complaint, the VC noted, “the absence of detailed time records could raise compliance questions. That said, most R&D or deep tech firms typically employ highly self-motivated professionals who manage their own schedules responsibly, so such cases seem relatively uncommon.”

The greater challenge likely lies in more labor-intensive industries, such as logistics, delivery, or manufacturing, where a large portion of workers earn close to the minimum wage. “In those sectors, the 52-hour workweek regulation can significantly increase labor costs due to mandatory overtime pay and paid leave. As a result, maintaining productivity and achieving economies of scale can become more difficult for businesses operating under tight margins,” this investor said.

How other countries work

To understand where South Korea’s 52-hour limit fits in the global landscape — and why its deep tech companies feel squeezed between competing pressures — it’s worth examining how other major tech hubs regulate working hours.

In Germany, the UK, and France, standard workweeks typically range from 35 to 48 hours. In Australia and Canada, the standard workweek is 38 and 40 hours, respectively, with mandatory overtime pay, offering a balance between labor rights and workplace flexibility.

In the U.S., the Fair Labor Standards Act (FLSA) sets a standard 40-hour workweek. Non-exempt employees earn time-and-a-half for any overtime, and there’s no limit on total hours. (California goes further, adding daily overtime rules, including double-time pay after 12 hours in a day.)

In China, the standard work schedule is also 40 hours per week, or 8 hours a day. Overtime is paid at higher rates: roughly 150% of regular pay on weekdays, 200% on weekends, and 300% on public holidays. In Japan, the standard workweek is 40 hours, with overtime limited to 45 hours per month and 360 hours per year under normal circumstances. Employers who exceed these limits can face fines and administrative penalties, as in other countries.

Singapore’s workweek is slightly longer at 44 hours, with a maximum of 72 overtime hours per month. If spread evenly, that’s roughly 62 hours per week. Overtime pay rates are similar: 1.5 times for weekdays, 2 times for rest days, and 3 times for public holidays.
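The weekly-equivalent figures above are simple arithmetic. Using the article’s rough four-weeks-per-month conversion (a simplification; a calendar month averages closer to 4.33 weeks):

```python
WEEKS_PER_MONTH = 4  # the article's rough conversion

# Effective weekly ceilings, using the limits cited in this article
caps = {
    "South Korea": 40 + 12,                  # hard 52-hour weekly cap
    "Japan": 40 + 45 / WEEKS_PER_MONTH,      # 45 overtime hours per month
    "Singapore": 44 + 72 / WEEKS_PER_MONTH,  # 72 overtime hours per month
}

for country, hours in sorted(caps.items(), key=lambda kv: kv[1]):
    print(f"{country}: ~{hours:.0f} hours/week")
# Japan: ~51, South Korea: ~52, Singapore: ~62
```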

South Korea’s 52-hour cap sits in the middle of this spectrum, stricter than the U.S. and Singapore but more flexible than much of Europe. Either way, for deep tech founders competing globally, the question isn’t just about the number — it’s about whether rigid weekly limits can accommodate the intense, uneven workflows that characterize early-stage R&D.


After two weeks of hearing from assorted witnesses that he was a lying snake, the jury finally heard from the lying snake himself: Sam Altman. At the end of the testimony, his lawyer William Savitt asked him how it felt to be accused of stealing a charity.

“We created, through a ton of hard work, this extremely large charity, and I agree you can’t steal it,” Altman said. “Mr. Musk did try to kill it, I guess. Twice.”

Altman was fully in “nice kid from St. Louis” mode, and did a passable impression of a man who was bewildered at what was happening to him. When he stepped down from the stand holding a stack of evidence binders, he even looked a little like a schoolboy. He seemed nervous at the beginning of his direct testimony, though he warmed up fairly quickly. Overall, he seemed to give credible testimony — and at times, it seemed like the jury liked him.

Throughout this trial I’ve had some difficulty imagining what the jury is making of all this because I am a little too familiar with the figures who are testifying. I have heard some audacious lies under oath, like when Elon Musk told us all he doesn’t lose his temper. (He then proceeded to lose his temper on cross-examination.) Or like when Shivon Zilis, the mother of several of his children, told us that she didn’t know Musk was starting xAI — which seemed to be directly contradicted by her text messages. Or when Greg “What will take me to $1B?” Brockman told us he was all about the mission. I certainly believe Altman isn’t trustworthy — I mean, The New Yorker published more than 17,000 words about how much he lies. But unlike with Musk, there are contemporaneous documents backing Altman’s version of the story. At least, mostly.

“My belief is he wanted to have long-term control”

After OpenAI’s Dota 2 win, discussions for a for-profit arm started in earnest. “Mr. Musk felt very strongly that if we were going to form a for-profit he needed to have total control over it initially,” Altman said. “He only trusted himself to make non-obvious decisions that were going to turn out to be correct.”

Altman testified that he was uncomfortable with Musk’s insistence on control, not just because Musk hadn’t been as involved as everyone else, but because OpenAI existed so no one person would control AGI. And at Y Combinator, the startup incubator where he was president, Altman had seen a lot of control fights; no one wanted to give up power when things were going well. With structures like supervoting shares, founders could retain control forever. Curiously, Altman’s example was not the most famous one (Mark Zuckerberg at Meta); it was Musk and SpaceX. When Altman asked Musk about succession plans for OpenAI, he got a particularly “hair-raising” answer: In the event of Musk’s death, Musk said, “I haven’t thought about it a ton, but maybe control should pass to my children.”

I don’t know about that. But I do know that I saw a 2017 email from Altman to Zilis in which he wrote, “I am worried about control. I don’t think any one person should have control of the world’s first AGI — in fact the whole reason we started OpenAI was so that wouldn’t happen.” He went on to say that he didn’t mind the idea of immediate control and was open to “creative structures” — which I understood to mean that, in order to placate Musk, Altman was willing to give him control up to specific milestones in company development.

“I read a vague, like, a lightweight threat in there”

“My belief is he wanted to have long-term control and that he would’ve had that had we agreed to the structure he wanted,” Altman said on the stand. This sounds basically right. In later video testimony from Sam Teller’s deposition, we heard that Musk no longer invests in anything he doesn’t control. This also fits with Musk’s long-term fixation on making sure he can’t get booted from his own company the way he got booted from PayPal.

Musk also tried to recruit Altman to Tesla. We saw texts between Altman and Teller, in which Teller told Altman that Musk was committed to beefing up Tesla’s AI no matter what, and that he hoped that Altman, Brockman, and Ilya Sutskever would want to join eventually. “I read a vague, like, a lightweight threat in there, that he’s gonna do this inside of Tesla with or without you,” Altman said. But he felt that Tesla was primarily a car company — allowing it to acquire OpenAI would betray OpenAI’s mission.

Later, in Teller’s testimony, we saw texts Teller sent to Zilis at 12:40AM on February 4th, 2018: “I don’t love OpenAI continuing without Elon,” he wrote. “Would rather disable it by recruiting the leaders.”

When Musk stopped his quarterly donations, OpenAI was operating on a “shoestring” with an “extremely short runway of cash.” OpenAI did have other donors, none of whom have sued it or joined Musk’s suit. (One donor in the exhibit that wasn’t called out to the courtroom was Alameda Research, the firm owned by Sam Bankman-Fried, who is now in prison for fraud and money laundering.) Musk’s resignation from the board meant “people wondered if he was gonna try to take, uh, vengeance out on us or something.” On the other hand, Altman said Musk had “demotivated some of our key researchers” and done “huge damage for a long time to the culture of the organization.” So it sure seems like some people were relieved to be rid of him.

I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial

We saw a lot of evidence that throughout the time Altman was setting up OpenAI’s for-profit arm, he kept Musk apprised of what was going on, either directly or through Zilis or Teller. At no point did Musk object, and whatever he said publicly about the Microsoft investments, there was plenty of evidence that privately he’d been made aware.

On the cross-examination, we were treated to more than 10 minutes of Steven Molo telling Altman that various and assorted people had called him a liar: Sutskever, Mira Murati, Helen Toner, Tasha McCauley, Daniela and Dario Amodei (former OpenAI employees and founders of Anthropic), employees at Altman’s first startup Loopt, that recent New Yorker article, a book called The Optimist, etc. Molo did score some points by asking Altman about testimony in the trial, which Altman said he wasn’t paying close attention to. Molo acted as though this was inconceivable. Surely someone had informed Altman of what was said?

It was a little funny and also a little tiresome. Altman kept his cool, though, seeming hurt and confused by the focus on whether he was a liar. It was also the most successful part of the cross, which declined in focus precipitously afterward. I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial, and today was pretty bad. At one point, when Molo was trying to capitalize on Altman being both CEO and on the company’s board, Altman said — truthfully — that CEOs are almost always on the boards of the companies they run.

(At this point in my notes, I had written, “Boy, Molo is not very good at this.”)

The point of this trial isn’t to win — it’s to punish Altman, Brockman, and OpenAI

There was also an unconvincing argument about fundraising in nonprofits, specifically that if Stanford could raise $3 billion a year, OpenAI should have remained a nonprofit. Okay, let’s just think about that for a minute. Stanford has a donor network of thousands of graduates. It’s a school, which has very different capital requirements. It is not competing with any reputable for-profit companies. But leave that all aside and assume that some fundraising genius took over at the OpenAI Foundation: $3 billion is the initial two Microsoft investments combined, and not enough to scale OpenAI to where it is now. If compute is the main bottleneck on building AI models, then Molo’s line of argument suggests OpenAI never would have managed to be successful as a nonprofit alone. He’s making the defense’s case for them.

But the thing is, Molo doesn’t actually have to be good at this job, because the point of this trial isn’t to win — though I’m sure Musk wouldn’t mind a win. The point is to punish Altman, Brockman, and OpenAI. Musk has done that pretty thoroughly — reinforcing in the public’s mind that Altman is a liar and a snake. This morning, I read an exclusive in The Wall Street Journal that assorted Republican AGs and the House Oversight committee wanted to look into Sam Altman’s investments. References to the trial are peppered throughout the article.

So yes, Altman was convincing on the stand. He may even win the suit. But it sure seems like Musk’s vengeance has just begun.


But the thing is, Molo doesn’t actually have to be good at this job, because the point of this trial isn’t to win — though I’m sure Musk wouldn’t mind a win. The point is to punish Altman, Brockman, and OpenAI. Musk has done that pretty thoroughly — reinforcing in the public’s mind that Altman is a liar and a snake. This morning, I read an exclusive in The Wall Street Journal that assorted Republican AGs and the House Oversight committee wanted to look into Sam Altman’s investments. References to the trial are peppered throughout the article.

So yes, Altman was convincing on the stand. He may even win the suit. But it sure seems like Musk’s vengeance has just begun.

Follow topics and authors from this story to see more like this in your personalized homepage feed and to receive email updates.

Sam Altman was winning on the stand, but it might not be enough

After two weeks of hearing from assorted witnesses that he was a lying snake, the jury finally heard from the lying snake himself: Sam Altman. At the end of the testimony, his lawyer William Savitt asked him how it felt to be accused of stealing a charity.

“We created, through a ton of hard work, this extremely large charity, and I agree you can’t steal it,” Altman said. “Mr. Musk did try to kill it, I guess. Twice.”

Altman was fully in “nice kid from St. Louis” mode, and did a passable impression of a man who was bewildered at what was happening to him. When he stepped down from the stand holding a stack of evidence binders, he even looked a little like a schoolboy. He seemed nervous at the beginning of his direct testimony, though he warmed up fairly quickly. Overall, he seemed to give credible testimony — and at times, it seemed like the jury liked him.

Throughout this trial I’ve had some difficulty imagining what the jury is making of all this because I am a little too familiar with the figures who are testifying. I have heard some audacious lies under oath, like when Elon Musk told us all he doesn’t lose his temper. (He then proceeded to lose his temper on cross-examination.) Or like when Shivon Zilis, the mother of several of his children, told us that she didn’t know Musk was starting xAI — which seemed to be directly contradicted by her text messages. Or when Greg “What will take me to $1B?” Brockman told us he was all about the mission. I certainly believe Altman isn’t trustworthy — I mean, The New Yorker published more than 17,000 words about how much he lies. But unlike with Musk, there are contemporaneous documents backing Altman’s version of the story. At least, mostly.

“My belief is he wanted to have long-term control”

After OpenAI’s Dota 2 win, discussions for a for-profit arm started in earnest. “Mr. Musk felt very strongly that if we were going to form a for-profit he needed to have total control over it initially,” Altman said. “He only trusted himself to make non-obvious decisions that were going to turn out to be correct.”

Altman testified that he was uncomfortable with Musk’s insistence on control, not just because Musk hadn’t been as involved as everyone else, but because OpenAI existed so no one person would control AGI. And at Y Combinator, the startup incubator where he was president, Altman had seen a lot of control fights; no one wanted to give up power when things were going well. With structures like supervoting shares, founders could retain control forever. Curiously, Altman’s example was not the most famous one (Mark Zuckerberg at Meta); it was Musk and SpaceX. When Altman asked Musk about succession plans for OpenAI, he got a particularly “hair-raising” answer: In the event of Musk’s death, Musk said, “I haven’t thought about it a ton, but maybe control should pass to my children.”

I don’t know about that. But I do know that I saw a 2017 email from Altman to Zilis in which he wrote, “I am worried about control. I don’t think any one person should have control of the world’s first AGI — in fact the whole reason we started OpenAI was so that wouldn’t happen.” He went on to say that he didn’t mind the idea of immediate control and was open to “creative structures” — which I understood to mean that, in order to placate Musk, Altman was willing to give him control up to specific milestones in company development.

“I read a vague, like, a lightweight threat in there”

“My belief is he wanted to have long-term control and that he would’ve had that had we agreed to the structure he wanted,” Altman said on the stand. This sounds basically right. In later video testimony from Sam Teller’s deposition, we heard that Musk no longer invests in anything he doesn’t control. This also fits with Musk’s long-term fixation on making sure he can’t get booted from his own company the way he got booted from PayPal.

Musk also tried to recruit Altman to Tesla. We saw texts between Altman and Teller, in which Teller told Altman that Musk was committed to beefing up Tesla’s AI no matter what, and that he hoped that Altman, Brockman, and Ilya Sutskever would want to join eventually. “I read a vague, like, a lightweight threat in there, that he’s gonna do this inside of Tesla with or without you,” Altman said. But he felt that Tesla was primarily a car company — allowing it to acquire OpenAI would betray OpenAI’s mission.

Later, in Teller’s testimony, we saw texts Teller sent to Zilis at 12:40AM on February 4th, 2018: “I don’t love OpenAI continuing without Elon,” he wrote. “Would rather disable it by recruiting the leaders.”

When Musk stopped his quarterly donations, OpenAI was operating on a “shoestring” with an “extremely short runway of cash.” OpenAI did have other donors, none of whom have sued it or joined Musk’s suit. (One donor in the exhibit that wasn’t called out to the courtroom was Alameda Research, the firm owned by Sam Bankman-Fried, who is now in prison for fraud and money laundering.) Musk’s resignation from the board meant “people wondered if he was gonna try to take, uh, vengeance out on us or something.” On the other hand, Altman said Musk had “demotivated some of our key researchers” and done “huge damage for a long time to the culture of the organization.” So it sure seems like some people were relieved to be rid of him.

I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial

We saw a lot of evidence that throughout the time Altman was setting up OpenAI’s for-profit arm, he kept Musk apprised of what was going on, either directly or through Zilis or Teller. At no point did Musk object, and whatever he said publicly about the Microsoft investments, there was plenty of evidence that privately he’d been made aware.

On the cross-examination, we were treated to more than 10 minutes of Steven Molo telling Altman that various and assorted people had called him a liar: Sutskever, Mira Murati, Helen Toner, Tasha McCauley, Daniela and Dario Amodei (former OpenAI employees and founders of Anthropic), employees at Altman’s first startup Loopt, that recent New Yorker article, a book called The Optimist, etc. Molo did score some points by asking Altman about testimony in the trial, which Altman said he wasn’t paying close attention to. Molo acted as though this was inconceivable. Surely someone had informed Altman of what was said?

It was a little funny and also a little tiresome. Altman kept his cool, though, seeming hurt and confused by the focus on whether he was a liar. It was also the most successful part of the cross, which declined in focus precipitously afterward. I’ve seen some fairly shoddy lawyering from Musk’s side throughout this trial, and today was pretty bad. At one point, when Molo was trying to capitalize on Altman being both CEO and on the company’s board, Altman said — truthfully — that CEOs are almost always on the boards of the companies they run.

(At this point in my notes, I had written, “Boy, Molo is not very good at this.”)

The point of this trial isn’t to win — it’s to punish Altman, Brockman, and OpenAI

There was also an unconvincing argument about fundraising in nonprofits, specifically that if Stanford could raise $3 billion a year, OpenAI should have remained a nonprofit. Okay, let’s just think about that for a minute. Stanford has a donor network of thousands of graduates. It’s a school, which has very different capital requirements. It is not competing with any reputable for-profit companies. But leave that all aside and assume that some fundraising genius took over at the OpenAI Foundation: $3 billion is the initial two Microsoft investments combined, and not enough to scale OpenAI to where it is now. If compute is the main bottleneck on building AI models, then Molo’s line of argument suggests OpenAI never would have managed to be successful as a nonprofit alone. He’s making the defense’s case for them.

But the thing is, Molo doesn’t actually have to be good at this job, because the point of this trial isn’t to win — though I’m sure Musk wouldn’t mind a win. The point is to punish Altman, Brockman, and OpenAI. Musk has done that pretty thoroughly — reinforcing in the public’s mind that Altman is a liar and a snake. This morning, I read an exclusive in The Wall Street Journal that assorted Republican AGs and the House Oversight Committee wanted to look into Sam Altman’s investments. References to the trial are peppered throughout the article.

So yes, Altman was convincing on the stand. He may even win the suit. But it sure seems like Musk’s vengeance has just begun.

Kevin Hartz’s A* just closed its third fund with $450 million | TechCrunch

Early-stage venture firm A* on Tuesday announced a $450 million Fund III. The firm takes a generalist approach, backing companies across categories including AI applications, fintech, healthcare, and security.
The average check size for this fund will be between $3 million and $5 million, with the aim to back at least 30 startups. The capital will be deployed over the next two to three years, as with the firm’s previous funds. Limited partners include nonprofits, foundations, and endowments; Carnegie Mellon University is among the publicly named backers.

A*, founded in 2020 and run by Kevin Hartz and Bennet Siegel, previously raised a $315 million Fund II in 2024 and a $300 million Fund I in 2021. Hartz is a serial entrepreneur best known for co-founding Xoom, the international money-transfer service PayPal later acquired for $1.1 billion in 2015, and Eventbrite, the event-ticketing platform that went public in 2018. Siegel came up through Boston Consulting Group and Altamont Capital Partners before spending four years as a partner at Coatue Management.

The firm has also drawn attention for backing unusually young founders, a practice that has since become more common. Hartz told TechCrunch last fall that close to 20% of the firm’s current portfolio involves teenage entrepreneurs. Among its other investments, it has backed the fintech company Ramp and the AI firm Mercor.

This story was updated to clarify the name of the firm.
