The first time I used ChatGPT to code, back in early 2023, I was reminded of “The Monkey’s Paw,” a classic horror story about an accursed talisman that grants wishes, but always by the most malevolent path — the desired outcome arrives after exacting a brutal cost elsewhere first. With the same humorless literalness, ChatGPT would implement the change I’d asked for, while also scrambling dozens of unrelated lines. The output was typically over-engineered, often barnacled with irrelevant fragments of code. There were some usable lines in the mix, but untangling the mess felt like a detour.
When I started using AI-assisted coding tools earlier this year, I felt decisively outmatched. The experience was like pair-programming with a savant intern — competent yet oddly deferential, and a tad too eager to please by making sweeping changes at my command. But when tasked with more localized changes, it nailed the job with enviable efficiency.
The trick is to keep the problem space constrained. I recently had it take a dozen lines of code, each running for 40 milliseconds in sequence — time stacking up — and run them all in parallel so the entire job finished in the time it used to take for just one. In a way, it’s like using a high-precision 3D printer to build an aircraft: use it to produce small custom parts, like hydraulic seals or O-rings, and it delivers flawlessly; ask it for something less localized like an entire cockpit, and you might get a cockpit-shaped death chamber with a nonfunctional dashboard and random knobs haphazardly strung together. The current crop of models is flexible enough for users with little-to-no coding experience to create products of varying quality through what’s called — in a billion-dollar buzzword — vibe-coding. (Google even released a separate app for it called Opal.)
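That particular change is simpler than it sounds; a minimal Python sketch of the before and after, with a hypothetical `fetch` standing in for the real 40-millisecond calls:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(i):
    """Hypothetical stand-in for one of the ~40 ms I/O-bound calls."""
    time.sleep(0.04)
    return i * i

items = list(range(12))

# Before: a dozen calls in sequence, the waits stacking up (~480 ms total).
start = time.perf_counter()
sequential = [fetch(i) for i in items]
seq_elapsed = time.perf_counter() - start

# After: all twelve calls overlap, so the whole batch finishes
# in roughly the time of a single call.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(items)) as pool:
    parallel = list(pool.map(fetch, items))
par_elapsed = time.perf_counter() - start

assert parallel == sequential  # same results, a fraction of the wall time
```

Threads work here because the calls spend their time waiting rather than computing; for CPU-bound work, a process pool or an async rewrite would be the analogous move.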
Yet, one could argue that vibe-coding isn’t entirely new. As a tool for nonprofessionals, it continues a long lineage of no-code applications. As a mode of programming that involves less prefrontal cortex than spinal reflex, any honest programmer will admit to having engaged in a dishonorable practice known as “shotgun debugging.” Like mindlessly twisting a Rubik’s Cube and wishing the colors would magically align, a programmer, brain-fried after hours of fruitless debugging, starts arbitrarily tweaking code — deleting random lines, swapping a few variables, or flipping a Boolean condition — re-runs the program, and hopes for the correct outcome. Both vibe-coding and shotgun debugging are forms of intuitive flailing, substituting hunches and luck for deliberate reasoning and understanding.
As it happens, shotgun debugging is not considered good form for a self-respecting programmer. Soon, I came to see that the most productive form of AI-assisted coding may be an editorial one — much like how this essay took shape. My editor assigned this piece with a few guiding points, and the writer — yours truly — filed a serviceable draft that no sober editor would run as-is. (Before “prompt and pray,” there was “assign and wait.”)
Likewise, a vibe-coder — a responsible one, that is — must assume a kind of editorship. The sprawling blocks of code produced by AI first need structural edits, followed by line-level refinements. Through a volley of prompts — like successive rounds of edits — the editor-coder minimizes the delta between their vision and the output.
Often, what I find most useful about these tools isn’t even writing code but understanding it. When I recently had to navigate an unfamiliar codebase, I asked the AI to explain its basic flow. It generated a flowchart of how the major components fit together, saving me an entire afternoon of spelunking through the code.
I’m of two minds about how much vibe-coding can do. The writer in me celebrates how it could undermine a particular kind of snobbery in Silicon Valley — the sickening smugness engineers often show toward nontechnical roles — by helping blur that spurious boundary. But the engineer in me sees that as facile lip service, because building a nontrivial, production-grade app without grindsome years of real-world software engineering experience is a tall order.
I’ve always thought the best metaphor for a large codebase is a city. In a codebase, there are literal pipelines — data pipelines, event queues, and message brokers — and traffic flows that require complex routing. Just as cities are divided into districts because no single person or team can manage all the complexity, so too are systems divided into units such as modules or microservices. Some parts are so old that it’s safer not to touch them, lest you blow something up — much like the unexploded bombs still buried beneath European cities. (Three World War II-era bombs were defused in Cologne, Germany, just this summer.)
If developing a new product feature is like opening a new airline lounge, a more involved project is like building a second terminal. In that sense, building an app through vibe-coding is like opening a pop-up store in the concourse — the point being that it’s self-contained and requires no integration.
Vibe-coding is good enough for a standalone program, but the knottiest problems in software engineering aren’t about building individual units but connecting them to interoperate. It’s one thing to renovate a single apartment unit and another to link a fire suppression system and emergency power across all floors so they activate in the right sequence.
These concerns extend well beyond the interior. The introduction of a single new node into a distributed system can just as easily disrupt the network, much like the mere existence of a new building can reshape its surroundings: its aerodynamic profile, how it alters sunlight for neighboring buildings, the rerouting of pedestrian traffic, and the countless ripple effects it triggers.
I’m not saying this is some lofty expertise; it’s rather the tacit, hard-earned kind — not just knowing how to execute, but knowing what to ask next. You can coax almost any answer out of AI when vibe-coding, but the real challenge is knowing the right sequence of questions to get where you need to go. Even if you’ve overseen an interior renovation, you can’t truly grasp how to create a building without standing at a construction site and watching concrete being poured into a foundation. Sure, you can use AI to patch together something that looks functional, but as the software saying goes: “If you think good architecture is expensive, try bad architecture.”
If you were to believe Linus Torvalds, the creator of Linux, there’s also a matter of “taste” in software. Good software architecture isn’t just drawn up in one stroke but emerges from countless sound — and tasteful — micro-decisions, something models can’t zero-shot. Such intuition can only be developed as a result of specific neural damage from a good number of 3 a.m. on-call alerts.

Perhaps these analogies will only go so far. A few months ago, an AI could reliably operate only on a single file. Now, it can understand context across multiple folders and, as I’m writing this, across multiple codebases. It’s as if the AI, tasked with its next chess move, went from viewing the board through the eyes of a single pawn to surveying the entire game with strategic insight. And unlike artistic taste, which has infinitely more parameters, “taste” in code might just be the sum of design patterns that an AI could absorb from O’Reilly software books and years of Hacker News feuds.
When the recent Tea app snafu exposed tens of thousands of its users’ driver’s licenses — a failure that a chorus of online commenters swiftly blamed on vibe-coding — it felt like the moment that vibe-coding skeptics had been praying for. As always, we could count on AI influencers on X to grace the timeline with their brilliant takes, and on a certain strain of tech critics — those with a hardened habit of ritual ambulance chasing — to reflexively anathematize any use of AI. In a strange inversion of their usual role as whipping boys, software engineers were suddenly elevated to guardians of security, cashing in on the moment to punch down on careless vibe-coders trespassing in their professional domain.
When it emerged that vibe-coding likely wasn’t the cause, the incident ended up saying less about vibe-coding than about our enduring impulse to dichotomize technical mishaps into underdogs and bullies, the scammed and fraudsters, victims and perpetrators.
At the risk of appearing to legitimize AI hype merchants, the security concerns around vibe-coding, in my estimation, are something of a bogeyman — or at least the net effect may be non-negative, because AI can also help us write more secure code.
Sure, we’ll see blooper reels of “app slop” and insecure code snippets gleefully shared online, but I suspect many of those flaws could be fixed by simply adding “run a security audit for this pull request” to a checklist. Already, automated tools are flagging potential vulnerabilities. Personally, using these tools has let me generate far more tests than I would normally care to write.
Further, if a model is good enough, when you ask, “Hey, I need a database where I can store driver’s licenses,” an AI might respond:
“Sure, but you forgot to consider security, you idiot. Here’s code that encrypts driver’s license numbers at rest using AES-256-GCM. I’ve also set up a key management system for storing and rotating the encryption key and configured it so decrypting anything requires a two-person approval. Even if someone walks off with the data, they’d still need until the heat death of the universe to crack it. You’re welcome.”
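Bravado aside, the encryption-at-rest portion of that imagined reply really is only a few lines. A minimal sketch using the third-party `cryptography` package, with the key management and two-person approval elided (in practice the key would come from a KMS, never sit in the code):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In production this key lives in a key management system and gets rotated;
# generating it inline here is purely illustrative.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"D1234-56789-01234"  # hypothetical driver's license number
nonce = os.urandom(12)            # 96-bit nonce, unique per encryption
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Store (nonce, ciphertext); reading the plaintext back requires the key.
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
```

AES-GCM also authenticates what it encrypts, so a tampered ciphertext fails to decrypt rather than silently yielding garbage.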
In my day job, I’m a senior software engineer who works on backend mainly, on machine learning occasionally, and on frontend — if I must — reluctantly. In some parts of the role, AI has brought a considerable sense of ease. No more parsing long API docs when a model can tell me directly. No more ritual shaming from Stack Overflow moderators who deemed my question unworthy of asking. Instead, I now have a pair-programmer who doesn’t pass judgment on my career-endingly dumb questions.
Unlike writing, I have little attachment to blocks of code and will readily let AI edit or regenerate them. But I am protective of my own words. I don’t use AI for writing because I fear losing those rare moments of gratification when I manage to arrange words where they were ordained to be.
For me, this goes beyond sentimental piety because, as a writer who doesn’t write in his mother tongue — “exophonic” is the fancy term — I know how quickly an acquired language can erode. I’ve seen its corrosive effects firsthand in programming. The first programming language I picked up after AI arrived was Ruby, and I have a noticeably weaker grasp of its finer points than of any other language I’ve used. Even with languages I once knew well, I can sense my fluency retreating.
David Heinemeier Hansson, the creator of Ruby on Rails, recently said that he doesn’t let AI write code for him and put it aptly: “I can literally feel competence draining out of my fingers.” Some of the trivial but routine tasks I could once do under general anesthesia now give me a migraine at the thought of doing them without AI.
Could AI be fatal to software engineering as a profession? If so, the world could at least savor the schadenfreude of watching a job-destroying profession automate itself into irrelevance. More likely, in the meantime, the Jevons paradox — greater efficiency fuels more consumption — will prevail, negating any productivity gain with a higher volume of work.
Another way to see this is as the natural progression of programming: the evolution of software engineering is a story of abstraction, taking us further from the bare metal to ever-higher conceptual layers. The path from assembly language to Python to AI, to illustrate, is like moving from giving instructions such as “rotate your body 60 degrees and go 10 feet,” to “turn right on 14th Street,” to simply telling a GPS, “take me home.”
As a programmer from what will later be seen as the pre-ChatGPT generation, I can’t help but wonder if something vital is being left behind as we ascend to the next level of abstraction. It’s a familiar cycle playing out again: when C came along in the 1970s, assembly programmers might have seen it as a loss of finer control; languages like Python, in turn, must look awfully slow and restrictive to a C programmer.
Hence it may be the easiest time in history to be a coder, but it’s perhaps harder than ever to grow into a software engineer. A good coder may write competent code, but a great coder knows how to solve a problem by not writing any code at all. And it’s hard to fathom gaining a sober grasp of computer science fundamentals without the torturous dorm-room hours spent hand-coding, say, Dijkstra’s algorithm or a red-black tree. If you’ve ever tried to learn programming by watching videos and failed, it’s because the only way to internalize it is by typing it out yourself. You can’t dunk a basketball by watching NBA highlight reels.
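Dijkstra’s algorithm is a case in point: short enough to hand-type in a dorm room, yet dense with ideas (greedy selection, priority queues, edge relaxation) that only sink in through the typing. A minimal sketch over an adjacency-list graph:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source; graph maps node -> [(neighbor, weight)],
    with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # relax the edge
                heapq.heappush(heap, (nd, v))
    return dist

toy = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
}
# dijkstra(toy, "a") -> {"a": 0, "b": 1, "c": 3, "d": 6}
```

An AI will produce this in one shot; the point of the dorm-room hours is that afterward you know why the stale-entry check is there.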
The jury is still out on whether AI-assisted coding speeds up the job at all; at least one well-publicized study suggests it may be slower. I believe it. But I also believe that for AI to be a true exponent in the equation of productivity, we need a skill I’ll call a kind of mental circuit breaker: the ability to notice when you’ve slipped into mindless autopilot and snap out of it. The key is to use AI just enough to get past an obstacle and then toggle back to exercising your gray matter again. Otherwise, you’ll lose the kernel of understanding behind the task’s purpose.
On optimistic days, I like to think that as certain abilities atrophy, we will adapt and develop new ones, as we’ve always done. But there’s often a creeping pessimism that this time is different. We’ve used machines to take the load off cognition, but for the first time, we are offloading cognition itself to the machine. I don’t know which way things will turn, but I know there has always been a certain hubris to believing that one’s own generation is the last to know how to actually think.
Whatever gains are made, there’s a real sense of loss in all this. In his 2023 New Yorker essay “A Coder Considers the Waning Days of the Craft,” James Somers nailed this feeling after finding himself “wanting to write a eulogy” for coding as “it became possible to achieve many of the same ends without the thinking and without the knowledge.” It has been less than two years since that essay was published, and the sentiments he articulated have only grown more resonant.
For one, I feel less motivated to learn new programming languages for fun. The pleasure of learning new syntax and the cachet of gaining fluency in niche languages like Haskell or Lisp have diminished, now that an AI can spew out code in any language. I wonder whether the motivation to learn a foreign language would erode if auto-translation apps became ubiquitous and flawless.
Software engineers love to complain about debugging, but beneath the grumbling there has always been a quiet pride in sharing war stories and clever solutions. With AI, will there be room for that kind of shoptalk?
There are two types of software engineers: urban planners and miniaturists. Urban planners are the “big picture” type, more focused on the system operating at scale than on the fine details of code — in fact, they may rarely write code themselves. Miniaturists bring a horologist’s care for a fine watch to the inner workings of code. This new modality of coding may be a boon for urban planners but leave the field inhospitable to miniaturists.
I once had the privilege of seeing a great doyen of programming in action. In college, I took a class with Brian W. Kernighan, a living legend credited with making “Hello, world” into a programming tradition and a member of the original Bell Labs team behind Unix. Right before our eyes, he would live-code on a bare-bones terminal, using a spartan code editor called vi — not vim, mind you — to build a parser for a complex syntax tree. Not only did he have no need for modern tools like IDEs, he also replied to email using an email client running in a terminal. There was a certain aesthetic to that.
Before long, programming may be seen as a mix of typing gestures and incantations that once qualified as a craft. Just as we look with awe at the old Bell Labs gang, the unglamorous work of manually debugging concurrency issues or writing web server code from scratch may be looked upon as heroic. Every so often, we might still see the old romantics lingering over each keystroke — an act that’s dignified, masterful, and hopelessly out of time.