(2024-12-10) Yegge The Death Of The Stubborn Developer

Steve Yegge: The Death of the Stubborn Developer. I wrote a blog post back in May called The Death of the Junior Developer. It made people mad. My thesis has since been corroborated by a bunch of big companies, and it is also happening in other industries, not just software. It is a real, actual problem, despite being quite inconvenient for almost everyone involved. 2024-06-24-YeggeTheDeathOfTheJuniorDeveloper

every nontrivial project implicitly has a task graph consisting of multiple tasks, complete with subtasks and dependencies.

Many of those graph nodes are leaf-node tasks, like “write an auth library” or “modernize these unit tests”. They tend to be fairly self-contained. We often give these leaf node tasks to junior developers because the scope is small.

The other kind are tasks that combine the leaf tasks in various ways, to deliver the actual project and all its subprojects.

Those higher-level, interior task-graph nodes often involve a lot of planning and coordination.
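The leaf-vs-interior distinction can be made concrete with a tiny sketch (illustrative only, not from the post): model the project as a dependency map, where leaf tasks depend on nothing and interior tasks combine other tasks.

```python
# Illustrative sketch: a project as a task graph. Leaf tasks have no
# dependencies; interior tasks combine other tasks. The helper names
# are hypothetical, purely for illustration.
from typing import Dict, List

def leaf_tasks(deps: Dict[str, List[str]]) -> List[str]:
    """Tasks that depend on nothing else -- the self-contained,
    small-scope work traditionally handed to junior developers."""
    return [task for task, subtasks in deps.items() if not subtasks]

def interior_tasks(deps: Dict[str, List[str]]) -> List[str]:
    """Tasks that combine other tasks -- the planning and
    coordination nodes."""
    return [task for task, subtasks in deps.items() if subtasks]

project = {
    "write an auth library": [],
    "modernize these unit tests": [],
    "ship the login feature": ["write an auth library",
                               "modernize these unit tests"],
}

print(leaf_tasks(project))      # the small, self-contained leaf nodes
print(interior_tasks(project))  # the planning/coordination nodes
```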

As of about May'2024, LLMs can now execute most of the leaf tasks and even some higher-level interior tasks, even on large software projects. Which is great. But what’s left over for humans is primarily the more difficult planning and coordination nodes. Which are not the kind of task that you typically give junior developers.

So what the hell are junior developers supposed to work on, exactly? How do they level up now?

My CTA: they had better learn chat-oriented programming (CHOP), “or else”.

It’s not really about junior devs

we all agree that it’s not the right term. There’s definitely a category of devs who are getting left behind, but it’s clearly much more nuanced than being all about junior devs.

One problem is that the circuit that lets junior developers grow into senior developers is broken, or at least damaged. I’m borrowing an apt analogy from Dr. Matt Beane, who describes the same phenomenon happening with junior surgeons due to the emergence of surgery robots.

some junior engineers pick this new stuff up and fly with it, basically upleveling themselves. And many senior engineers seem to be heading towards being left behind. So what is it, then?

I’ve recently talked to two polar-opposite companies

We’ve managed to narrow it down to a single principle: You are getting left behind if you do not adopt chat-based programming as your primary modality.

as time goes by, the holdouts are just looking stubborn.

Chat-oriented programming

Why am I specifically calling out chat-based programming? It’s because chatting with the LLM is how you get those task-graph leaf nodes 3D printed for you. It’s an integral part of the whole process.

remember, it’s chat with something that hallucinates wildly, so it’s not exactly the solid slam-dunk leap forward that we all wish it were.

In my blog post I called this phenomenon Chat-Oriented Programming, CHOP for short (or just chop). Chop isn’t just the future, it’s the present

The baseline productivity boost should continue to grow everywhere as foundation models, tools, and the art of chop all mature. And it’s not capped at 100% (2x). It can easily go to 10x, 20x or more in some situations.

Last I checked, I think ChatGPT was still the world’s most-used coding assistant, and it doesn’t do completions, only chop. Chop has a big following.

The only remaining room for differing opinions here is on how long chop will last. My totally unproven but educated guess is that it will last at least three years. In fact I think there’s a good chance it lasts up to ten years, given how long it took for assembly language to finally die.

Devil’s Advocate: Maybe there’s something better in the near term

There is another camp of very smart people

they lean towards thinking that chop will be short-lived, because it will be replaced with autonomous agents in various forms and flavors.

The basis for the disagreement is twofold, I think. First, chop is hard. You have to be a skilled computer user to make it work well. Not so much a skilled developer, as a skilled hacker

So a lot of people are worried that chop isn’t for the masses. And the masses have money

Chat-oriented programming is hard, but many folks believe that autonomous agents are right around the corner, and that they will handle all that heavy lifting for you. You’ll just need to write short prompts and spot-check the results, with much less toil involved.

some people claim that agents can take over the task graph entirely, perhaps at least for small businesses, allowing non-technical CEOs to launch apps themselves without having to hire any pesky developers.
I think those people are smoking some serious crack.

Industry-changing new technologies, the ones that change how we code, always grow the same way: incrementally. They emerge small but capable — just barely good enough to justify using them

So you’d expect all that to be happening for generic autonomous agents. Right?
Well, where is it, then?

But perhaps there could be some middle ground: something less ambitious than fully general autonomous software agents.

Finding a middle ground

One person with an opinion here that I greatly respect is Idan Gazit, who is Senior Director of Research over at GitHub Next. He gave a fascinating talk at ETLS in Vegas this year. Idan isn’t a huge fan of chop.

In essence, he’s an HCI guy at heart, so obviously he doesn’t want Copilot users to be screwing around with task-graph management and context wrangling. So he is looking for an AI modality that every developer can use, one that isn’t biased towards virtuosos.

Idan’s beef with chop, or his chop beef, so to speak, is fair enough: The LLM knows the answer, but it can’t actually do anything except inform you

You, the programmer, must perform all the labor of fetching context and integrating answers.

Chop’s toil feels an awful lot like it should be handled by your tools; I think everyone can agree on that.

Copilot Workspace is GitHub’s first foray into this ambitious direction. I hope they are successful with it.

Although I am rooting for them, I’m not banking on it myself. I’m assuming it’ll take years to create agents reliable enough to drop into a developer’s workflow

Chop comes with problems of its own

you can actually use chop today

The not so good news is that if you’re impatient, you do have to learn new stuff. Chop’s combination of intrinsic difficulty and newness creates its own knock-on problems. How do we teach it? How do you learn it? How do you interview for it? And of course, where are the metrics?

nobody knows how to teach chop. There are no resources. It’s day zero.

And for enterprises, nobody knows how to measure the impact

How can you justify a big ongoing investment in Code AI for your company if you’re not sure whether it’s working?

Metrics for Code AI

As luck would have it, the legendary Gene Kim and I have been conspiring to brew up some help here.

In our session, Gene developed a new superpower using chop, one that makes him the fastest YT video excerpt tweeter on earth. A low-tier power? You betcha.

Gene aims to do for Code AI what he did for CI/CD with the DORA metrics.

The idea is to define a set of rigorous industry-standard metrics to help companies know whether they’re doing the right things. These DORA-like metrics will take a group effort

one spoiler I’ll offer is that one of the most important metric-related dimensions we’ve unearthed is Optionality. Chop may not necessarily make you a lot faster at stuff you already know how to do well. But it makes you extraordinarily faster at things you’re not very good at — things that you’ve been putting off because you know it’s going to be more work than it’s worth.

Chop thus allows you to explore many options in parallel at each stage

Denouement

once you do adopt chop, you’ll find to your disappointment that its benefits are decidedly and uncomfortably nonuniform. Some people will see much larger benefit than others

My colleagues and I at Sourcegraph are firmly in the camp that believes that you are the agent who will wrangle the task graph, above the level of the leaf nodes

What does that mean in practice? It means two things, and this is true of any coding assistant. First you make the inputs to the LLM faster, by speeding up context fetching and prompt reuse. Then you make processing the outputs faster, e.g. with a smart-apply button
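Those two speedups can be sketched in a few lines (hypothetical names like `build_prompt` and `smart_apply` are illustrative, not any real assistant’s API): the input side automates context fetching and prompt reuse; the output side merges the LLM’s answer into your code instead of making you copy-paste.

```python
# Hypothetical sketch of the two speedups a coding assistant provides.
# None of these names correspond to a real tool's API.

# Input side: a reusable prompt template, so you don't retype boilerplate.
TEMPLATE = "Context:\n{context}\n\nTask:\n{task}\n"

def build_prompt(context_files: dict, task: str) -> str:
    """Fetch and bundle context for the LLM automatically,
    instead of pasting files into the chat by hand."""
    context = "\n".join(f"--- {name} ---\n{body}"
                        for name, body in context_files.items())
    return TEMPLATE.format(context=context, task=task)

def smart_apply(source: str, old: str, new: str) -> str:
    """Output side: merge the LLM's suggested change into the file --
    a naive stand-in for a real smart-apply button."""
    return source.replace(old, new)

prompt = build_prompt({"auth.py": "def login(): pass"},
                      "add a logout function")
patched = smart_apply("def login(): pass", "pass", "return token")
```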

Despite their flaws and gaps, coding assistants can be a better experience than using raw Claude or Gemini/GPT, because they streamline a lot of the work with the LLM inputs and outputs. It all comes down to how well your coding assistant supports chat as a modality.

Keep in mind that while they may all do roughly the same thing, the way they present that functionality to you can be very different.

People often ask me “What’s one thing Coding Assistant X can do that others cannot?” I don’t think that’s the right question. If you view them more like car models

The right question is, “Which one is best for you?”

