(2025-10-07) Willison Vibe Engineering

Simon Willison: Vibe engineering. I feel like vibe coding is pretty well established now as covering the fast, loose and irresponsible way of building software with AI - entirely prompt-driven, and with no attention paid to how the code actually works.

This leaves us with a terminology gap: what should we call the other end of the spectrum, where seasoned professionals accelerate their work with LLMs while staying proudly and confidently accountable for the software they produce?
I propose we call this vibe engineering, with my tongue only partially in my cheek.

One of the lesser-spoken truths of working productively with LLMs as a software engineer on non-toy projects is that it’s difficult. There’s a lot of depth to understanding how to use the tools, there are plenty of traps to avoid, and the pace at which they can churn out working code raises the bar for what the human participant can and should be contributing.

The rise of coding agents - tools like Claude Code (released February 2025), OpenAI’s Codex CLI (April) and Gemini CLI (June) that can iterate on code, actively testing and modifying it until it achieves a specified goal - has dramatically increased the usefulness of LLMs for real-world coding problems.

I was skeptical of this at first but I’ve started running multiple agents myself now and it’s surprisingly effective, if mentally exhausting!

It’s also become clear to me that LLMs actively reward existing top tier software engineering practices:

Automated testing. Test-first development is particularly effective with agents that can iterate in a loop.

Planning in advance. Sitting down to hack something together goes much better if you start with a high-level plan.

Comprehensive documentation. Just like human programmers, an LLM can only keep a subset of the codebase in its context at once.

Good version control habits. Being able to undo mistakes and understand when and how something was changed is even more important when a coding agent might have made the changes.

Having effective automation in place. Continuous integration, automated formatting and linting, continuous deployment to a preview environment - all things that agentic coding tools can benefit from too.

A culture of code review.

A very weird form of management. Getting good results out of a coding agent feels uncomfortably close to getting good results out of a human collaborator. You need to provide clear instructions, ensure they have the necessary context and provide actionable feedback on what they produce.

Really good manual QA (quality assurance).

Strong research skills.

The ability to ship to a preview environment.

An instinct for what can be outsourced to AI and what you need to manually handle yourself. This is constantly evolving.

An updated sense of estimation.

If you’re going to really exploit the capabilities of these new tools, you need to be operating at the top of your game.
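To make the test-first point above concrete: writing the failing test before the implementation gives a coding agent a checkable goal to iterate against in a loop. Here is a minimal sketch in Python; the `slugify` function and its test are hypothetical examples, not from the original post.

```python
import re


# Step 1: write the test first. It fails until slugify() exists and is
# correct, which gives an agent (or a human) a concrete target to hit.
def test_slugify():
    assert slugify("Vibe Engineering, Really?") == "vibe-engineering-really"
    assert slugify("  Hello   World ") == "hello-world"


# Step 2: implement until the test goes green. An agent would re-run the
# test after each change and stop only when it passes.
def slugify(title: str) -> str:
    """Lower-case a title and join its alphanumeric words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)


if __name__ == "__main__":
    test_slugify()
    print("all tests passed")
```

The loop is the important part, not the function: the test defines "done," so the agent can iterate autonomously without a human judging each intermediate attempt.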

AI tools amplify existing expertise.

“Vibe engineering”, really?

Is this a stupid name? Yeah, probably.

I’ve tried in the past to get terms like AI-assisted programming to stick, with approximately zero success. May as well try rubbing some vibes on it and see what happens.
