(2025-12-03) Ford, Shipper: Anthropic's Newest Model Blew This Founder's Mind and Made Him Uncomfortable

Paul Ford and Dan Shipper Transcript: Anthropic's Newest Model Blew This Founder's Mind and Made Him Uncomfortable.

Timestamps

  • Introduction: 00:01:57
  • How Claude Opus 4.5 made the future feel abruptly close: 00:03:28
  • The design principles that make Claude Code a powerful coding tool: 00:08:12
  • How Ford uses Claude Code to build real software: 00:10:57
  • Why collapsing job titles and roles can feel overwhelming: 00:20:12
  • Ford's take on using LLMs to write: 00:22:56
  • A metaphor for weathering existential moments of change: 00:24:09
  • What GLP-1s taught Ford about how people adapt to big shifts: 00:25:45
  • Why you should care what your LLM was trained on: 00:49:36
  • Ford prompts Claude Code to forecast the future of the consulting industry: 00:52:15
  • Recognize when an LLM is reflecting your assumptions back to you: 00:59:18
  • How large enterprises might adopt AI: 01:12:39

I think the thing that we are both super excited about is Claude Code

People don't know yet. People, it's like they just don't know it changed. Can you articulate it? I have my own thesis, but what do you think it is? It's that Opus 4.5 and Sonnet 4.5 inside of Claude Code were a step change.

We've been trying to wrap guardrails around the chaos of vibe coding, because it doesn't finish things. The last mile's really long. It tends to leave a lot of loose ends. And so we've been very, very involved in the space and stayed really connected to it. And then about two weeks ago, right, like something changed and they sort of released their models.

In a funny way, I think it doesn't represent some giant step change in the capability of an LLM. It feels like Sonnet and Opus are better, but they're not like 9,000 times better. But they added in a layer of kind of agent-style thoughtfulness to the product. So it's constantly evaluating its own outputs and then improving them, which leads to these really, really complex outcomes when it comes to writing code.

I have a set of benchmark projects... it's a database. It has a terrible name. It's called IPEDS. And a friend of mine asked if I could work with it like a year ago using AI. It's a government-produced database that every college has to fill out: what are their majors, what's the gender and race breakdown at the school, what is tuition... transform it and put it on the web in a sort of modern way. And man, I just knocked it out. It wasn't easy. I still had to know a lot of stuff, but it did a really great job and it built me a nice visualization with smart search.

I've been using it to set up a pipeline to build little musical synthesizers just to see how that could work. And today I was like, hey, clone a TR-808 drum machine, and it did it in 20 minutes, right? Now, I spent whole days creating that pipeline. But that used to be the work of a company.

what's really tricky is you go, wow, I'm powerful. And then you realize like, no, this is everybody now. Like you feel like you've captured something. Like you got the ultimate Pokémon, but everyone's getting the same Pokémon shoved into the mailbox

Dan Shipper: ...the design principle that I think makes Claude Code so powerful is that anything you can do on your computer, Claude Code can do. And it has a set of tools that are below the level of features. They're low-level tools: files, command-line tools, bash, grep, all this stuff. And what that allows is a system that is very composable and very flexible, that you can build on top of and use in ways that they couldn't necessarily predict. And what's also really important is what that means: the programs, the features of Claude Code, are actually just prompts. They're slash commands and subagents
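To make "features are just prompts" concrete (as a sketch; the exact layout may vary by Claude Code version): a custom slash command is simply a markdown file of prompt text in the project's `.claude/commands/` directory, where the file name becomes the command name. A hypothetical `/review-deps` command could be nothing more than:

```markdown
Review every dependency declared in this project (package.json,
requirements.txt, go.mod, whatever exists). For each one:

1. Check whether it is actually imported anywhere in the codebase.
2. Flag anything unused, pinned to an old major version, or duplicated.
3. Propose the smallest safe cleanup as a diff, but do not apply it.
```

Saved as `.claude/commands/review-deps.md`, that is invoked as `/review-deps`. The "program" is a paragraph of English composed over the same low-level file and shell tools Shipper describes.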

Paul Ford: ...like you're bundling stuff up as sentences. There's another aspect: it integrates with the existing system, so it's not this world apart. It's an and. And actually what I found over the last week is that where I normally would go to a command line and start typing, I start typing in English, forgetting I haven't gone into Claude, right? It's so immediate because it's so much better at building and orchestrating

I wanted to deploy something I built, that weird database I was talking about earlier. And so I went to Fly.io, which is a very fast deployment environment, because I bet it would be able to coordinate well there. And then I was like, wait a minute, I have this random-ass server just sitting somewhere that I use for scratch projects. Can you just SSH into that and deploy this thing for me? And it was like, yeah, no problem. And it just jumped onto the box and looked around. It was like, oh, it's an Ubuntu server. Let me update your Nginx. Ooh, you need to get a certificate installed here. Let's go ahead and do that

my friend's dad was like, boy, I really need to make a searchable index of this one politician's newsletter for oppo research... I opened up Claude Code on my phone and literally between turkey and dessert, I built and shipped it. It was SQLite on the backend. It works just fine
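A minimal sketch of the kind of thing that ships between turkey and dessert, assuming Python and SQLite's FTS5 full-text extension (bundled with most Python builds); the table and function names here are made up for illustration, not taken from the episode:

```python
import sqlite3

def build_index(docs):
    """Index (title, body) newsletter issues in an in-memory SQLite database."""
    db = sqlite3.connect(":memory:")
    # FTS5 virtual table: tokenized full-text search and ranking for free.
    db.execute("CREATE VIRTUAL TABLE issues USING fts5(title, body)")
    db.executemany("INSERT INTO issues VALUES (?, ?)", docs)
    return db

def search(db, query):
    """Return issue titles matching the query, best match first."""
    rows = db.execute(
        "SELECT title FROM issues WHERE issues MATCH ? ORDER BY rank",
        (query,),
    )
    return [title for (title,) in rows]
```

In a real deployment the same schema sits in a file-backed database behind one small web handler; nothing heavier is required for "oppo research over one newsletter."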

what I'm finding is you gotta think not just in terms of solving the problem, but one level of abstraction up. I had it build a little musical synthesizer for me that emulated a Moog synth, something I know a reasonable amount about. And it did an okay job, but it had a lot of caveats, and the remaining work on it would be hard, and I didn't do it. But then I was like, okay, one level up: you (Claude) need some more information about digital signal processing. So I'm going to go spider some books that are available for you online and put them into a database. Whenever you have a question, search this little tiny SQLite database and refer to it. So then I gave it a reference source. And then I was like, wait a minute, you keep writing code, Claude. You have to calm down, because your code's okay, but it's not that great. I want you to go find all the open source libraries that are really good at digital signal processing, which is really edge-case-y, and I want you to make a list of them and only build based on those things. You should adapt and create a library, and then you should implement based on that library.

And as five or six things at that level of abstraction unfolded, I'm now able to say, hey, make me a synth like this, and come back 20 minutes later

Dan Shipper: if someone like you uses it, you can move to this level of abstraction where to some degree that code doesn't matter or it doesn't matter as much as it used to. How do you square that sort of craftsman mindset about code with what is now possible?

Paul Ford: I don't know this week. I mean, two weeks ago I would've been able to answer that. But I gotta tell you, I've been watching all this stuff real closely. I know how LLMs work, I did the homework and so on and so forth, and I kind of knew we were headed in this direction. But again, it's a step change in product. It's not a step change in technology

you can instruct it to get better. You can be like: hey, Claude, if you were a really good engineer at Anthropic, take a look at this code base and tell me how to make it more efficient.

And so it's self-referential, which means it can accelerate. And so what I'm getting at is I no longer feel I can in good faith say, hey, calm down, take it as it comes, human skills are going to stay relevant.

I don't know if this is going to be a really good time for everybody, because you've got 600,000 jobs at Accenture alone, and there's like 50 million devs in the world. There's a glass-half-full case to be made, which is: hey, everybody can clean up their roadmap, and it's a really great time for engineering to capture the value here and bring that acceleration to the organizations that they service. And everybody can have their thing. And that is really exciting and motivating.

But that just isn't how humans work, man. Humans just want to type in the box and get a thing, and if it kind of works, they'll be like, I did it, just like you with your app, or me with my apps. They might be crap, they might have AI glaze all over them, just like we see with images and text, but you can't see it yet because it's so shocking that it's software. It's not like there's no AI glaze just because it's code.

You know, it's funny, I'm building an AI company with a wonderful business partner I've worked with forever. I'm looking out, we have a nice office and we have a great team and we have clients and we work with them and we're doing what I just described. We are moving their roadmap along and we're bringing them tools much more cheaply and much more quickly than we used to be able to...

...people are very, very anchored to their disciplines, right? Like, I'm a front-end engineer, I'm a full-stack engineer, I'm a designer, I'm a product manager. And to see all of those categories blur, all of those roles change, and all of the things that allow people to say where their value is, is frankly really overwhelming. And I don't want to devalue that emotional response, because I've been kind of coming in and being like, hey, let's all do this together and let's move forward

Well you chose to jump in, right? You're like, I'm going to build infrastructure and community in order to address this change.

Writing is funny for me too, because it doesn't write for me. I kind of can't get it to write for me; it just can't be me. I am what I am as a writer. But I see a lot of people who aren't writers, and my God, it's good for them. It gives them access to a world, and entry into a more formal style of communication that they didn't have before. And so, to me, writing is supposed to empower, and if the robot helps you, that's good. If the robot thinks for you, that's bad.

But I think what's wild to me is learning how hard it is for humans to metabolize change. For me, the moment that blew my mind, the last time I felt exactly like this, was when my doctor put me on Mounjaro very early. I needed it. And what's Mounjaro? It's like Ozempic. It's a GLP-1. Okay. So suddenly, after a lifetime of not being able to lose weight, I lost like 70 pounds in a hurry. And I was very dangerously big. I'm still pretty big, but my health has changed. And after a lifetime of being told, this is how this works, this is the only way it works, you can only do surgery, there is willpower, and so on, all these rules and this whole social system and things that I heard from doctors, one day they went, eh. And it was really confusing.

Just a year or two, which is how long it's going to take, is nowhere near enough time to process the idea that you can have code by typing in a box, that it's pretty advanced, and that it does things like ship apps. It's just nowhere near, and it's actually going to look like a horizon: it might take a couple of years for people to figure out that they can have any software they want, anytime. I use a concept a lot that I call latent software: the PDFs that describe procurement forms, or the Google spreadsheets that are floating around. My company Aboard is all about taking latent software and making it real and getting it into people's hands.

And we just rebuilt the whole society over the last 30 years around software, right? Like software is eating the world was this whole idea and now it's eating itself. And so like I do, look, you're right, like are we going to be okay as a species? Eh, about as okay as we ever are.

I'm just very nervous about the human ability to tolerate change. We've created the ultimate change engine that sits in the middle of our global economy and spews out change at an unbelievable rate, and we just created the number one change accelerator possible, which is to move software much, much faster. And so it is not going to be familiar. Parts of it will be very familiar, but parts will be very, very weird, and it's going to be really, really strange to watch. (Future Shock)

I see Claude Code showing up, and I am showing it to people in my world because, similar to you, I'm like, whoa. And they're like, well, hold on a minute. And I'm like, no, it's not me saying I want you to use this. It was like this when I was writing, too: I literally just want to show you, so that you can figure out what to do next. And what I have found over and over in the course of my life is that merely by showing people, they tend to panic. They don't want this change.

Every product manager you know is now building their own app. And every engineer is building their own app without product managers. And the product managers are building without engineers, and the designers are trying to figure out how to ship. And they're all really happy to get everybody out of their world, right? And they're pretty sure they're going to be able to capture the value of the revolution while following the rules that used to be there. But it won't work that way.

we're about to find out that everything we created is probably more disposable and less exciting than we thought it was two weeks ago. And so I am puzzled by that. I think this is going to be a rough one, deep down, and an exciting one, with an enormous amount of good things. I'm so excited for everybody to have all the software they ever wanted, because that's always been my dream. But now that it's here, I'm a little scared.

The promise of software, if you go back to the Xerox PARC days, even before that to the Lisp programming language and so on, is that we would have sets of composable objects that could interact, and that an average human being would be able to learn the system and build whatever they wanted. That was the whole point of Alan Kay and the Dynabook in the seventies. If you don't know what that is, it's easy to look up: it's essentially a laptop that kids can use to build any software they want

Suddenly we have it, the fantasy of the seventies. I think I can train anybody at this point to think algorithmically and structurally (computational thinking) enough about applications, and there's going to be a lot of retooling around how we educate people about what software does. But I think in about two weeks you could start to build really, really meaningful stuff. And I think in about two years you can probably build just about anything. And that used to be the work of 20 years.

Sam Altman cracks me up, right? Because he wants to be Steve Jobs, but he is Steve Ballmer. He just kind of got the wrong Steve. And it's just like, here we go. Okay. Commerce, capitalism.

He's a really, really good salesman. He is a really good deal guy. He told us we were headed towards AI Jesus (AGI), and now we're getting shopping, right? Like he's a commerce guy. I don't actually, I think he's good at that. You know, I think Anthropic is funny if you compare the two companies, like OpenAI is very much Microsoft. Whatever you want, whatever you want, we're going to sell this to you and you're going to have it. God, yeah. Let me give you more. And Anthropic is Google

Dan Shipper: I thought you were going to say Anthropic is...

Paul Ford: ...Apple. No, nobody's Apple, because nobody's really... Claude Code is great, but it has nothing to do with human beings. It's still for engineers. You can't put a civilian in front of that interface. It makes no sense.

Now, could they get there? Maybe. I just don't think they even want to. I think they want to just accelerate, accelerate, accelerate engineering and let everybody go run off, and then they'll figure out how to productize along the way. Whereas like, I think OpenAI wants to make a play for the whole shebang. They want to be the operating system. And the Apple in the middle, the people like, what's it going to look like? The thing about Apple is it made the computer disappear. So who's going to make the LLM disappear, right?

We got group one. And then here, I'll actually give some advice: Silicon Valley in particular dropped this absolutely bizarre thing, told everybody it would solve every possible social ill, and didn't really come with a plan. And there were real harms that emerged, and people panicked, and the harm frameworks weren't clear. And I think what we gotta do... because I'm in there too, man, I love this stuff. I use it every day, and then I go on Bluesky, where like 80% of my feed is people saying how much they hate everything that I'm touching all day long. And I get it, because they also hated the tech industry. I think you gotta just let them burn it out. There will be people who just hate this shit for the rest of their life.

...at the same time, I'm sitting here in my nice office in New York City, but I'm hearing from and working with children's health charities and scientists and real do-gooders and climate types who are like, this can accelerate our roadmap and we want to do it. We want to use these tools to achieve our mission

And at the same time, there are people who are like, I'm a professor, I teach research methods. I don't want this near my students. I need their brains to work. And I get that. I actually think that's right. Like good, okay, draw that line. Make them figure it out. They're going to go use it anyway. They know that. But if you want to put them in a box for a minute so that they actually learn the history of how to think and what to do, and you feel that that's important as an educator, I'm not going to second-guess you. I respect that. So I think it's trying to find a balance in all this, but ultimately the balance is like you're there with that prompt and it does something for you that's really useful. And kind of knowing what's good and what's bad about it, and then going on with your life.

You've also got people coming in from the West Coast telling you how it must be done forevermore. Yeah. And that feels really bad. And they just dismiss your concerns, right? We're used to it, we're tech nerds. We're used to nerds just kind of stumbling in. Nerds never fully acknowledge how much power they have in a room. So they're like, whoa, why is everybody so obsessed? It's just really cool technology. And then it's like, whoa, because I was going to make my living as an illustrator, and I was going to send my children to a... like, we were going to go on vacation once. And they're like, well, whatever, UBI. That whole thing, that's how it comes across. It's just this tin ear on the West Coast.

But it doesn't help that they all went to the White House and did Kumbaya with Donald Trump, including Jensen Huang. I mean, it doesn't help the vulnerable people feel less vulnerable

what do you think are the actual real bad things that have happened or are happening or will happen that a reasonable person who loves this technology should care about?

What kind of society do we want to have to deal with the kind of change that is coming?

A 50 million-person underpinning of the entire global economy, the tech industry. You've got giant consulting firms, you've got tech integration firms and software companies. Their core product has been radically devalued. What do we think about that? Who gets to talk about that? The AI folks are going to be like, it's great. It's the best thing ever. Everybody gets their software.

I think we have to start internalizing actually, horizons aside, this will change a lot of the ways that people do things. And it might change the way they make money and it might change what their lives are like

I had Claude make me a prediction model for the future of the consulting industry and write me little stories. Oh, dude, they were really sad. And I literally was like, okay, you know what, Paul? You get a little cynical. Just say mild bearish. Mild bearish. Okay. And it was like, "Rahul thought that he had made a good choice by going into computer science." It was just one after the other... I tried to really hedge. I was like, 'Hey, it looks like AI might really change the consulting industry,' and I want you to make a Sankey chart... let's look at McKinsey. Everybody loves McKinsey, everybody's favorite company.

So: about $16 billion in revenue in 2024, 45,000 employees, headquartered in New York City. Now, I didn't have to do deep research. It was just very hand-wavy, so I'm guessing all this is kind of wrong. Let's be clear: it's not precise. But it says that by 2035, McKinsey's revenues, if it loses digital services, are gonna get down to $4 billion. And you can see that here: if we switch to $4 billion, the whole chart shrinks.
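Hand-wavy or not, the arithmetic behind that chart is easy to check: shrinking from $16 billion in 2024 to $4 billion in 2035 implies a steady contraction of roughly 12% a year. A quick check (illustrative, not from the episode):

```python
def implied_annual_rate(start, end, years):
    """Constant annual growth rate taking `start` to `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

# $16B in 2024 -> $4B in 2035 is 11 years of compounding.
rate = implied_annual_rate(16e9, 4e9, 2035 - 2024)
print(f"{rate:.1%}")  # about -11.8% per year
```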

Right now we're making our money through corporate strategy, operations, and so on. So I had it write employee stories for each company. And so there's Alexandra: Stanford undergrad, Harvard MBA, McKinsey associate, 27. She was on the partner track, billing $800 an hour to tell Fortune 500 CEOs what they already suspected but needed external validation to act on.
I gotta say, Claude just decided to burn the shit out of McKinsey. I'm not grinding an axe here. I was just like, 'Just write little stories about what's up.'

"The dirty secret of strategy consulting was that the frameworks weren't magic. They were structured thinking applied to ambiguous (ill-structured?) problems, and structured thinking turned out to be exactly what AI was good at. By 2027, a CEO could upload their company's data, describe their strategic question, and get a McKinsey-quality analysis in an hour, complete with market size and competitive dynamics and three options with trade-offs. It wasn't as polished, didn't come with the McKinsey name, but it was 90-95% as good at 1% of the price.""

Sometimes she missed the intellectual intensity, the feeling of being the smartest people in the room. Then she remembered that the smartest thing in every room now was the computer.

The mild bearish case is that an economic contraction won't have a sudden flowering of new opportunity and that people won't figure out what to do next. And they'll just be captured in this kind of shrinking world while robots do more for the rest of their lives.

Shipper: I have put all of our company financials into Claude and had it write our investor update, and it did a fucking phenomenal job.

Paul Ford: Yeah. I mean, that's so good. And anything kind of bureaucratic, it's just magical.

Shipper: It's 'there are infinitely many meaningful stories and we're looking at one of them,' but we're treating it like the other case, where there is a right answer and Claude just found the right answer. Because if you change your prompt slightly, Claude could write you a great story about why consulting businesses are gonna do really, really well.

Ford: If they had emphasized translation as opposed to chat, I think we'd be in a much better place with this technology, and I think we'd have a better understanding of it... more like a GitHub commit log. You put this in, and then... and actually this is what Claude Code and other things end up looking like, which is: here was our state, and then I evaluated it and I did a bunch of queries in my internal database and I transformed it into this new state.
I've saved the old state in case we wanna go back to it, but here we are now. So we have a whole new kind of context and we've actually changed the way that we're working. Where do you want to go from here? Well, I wanna do this and I wanna do that. Great. I'm gonna update the state again, and I'm gonna keep a really clear log and I'm gonna keep the relationships between where I was when we started doing this and where I am now.

LLMs are complicated. It's really hard to learn how they work. I actually had ChatGPT write me a medieval quest in which a magic spell was said, tokenized, and sent through the different layers of the LLM. I highly recommend it: find an analogy that works for you and then make it explain LLMs in the context of a quest or a journey. Because otherwise there's a lot that just goes missing, like the fact that there are zillions of layers happening, and each layer is kind of talking back and forth to the other layers. It's not like your question is being answered; your question is being broken up and spread across a zillion meta-databases that then come back and form something that looks like an answer, but without consciousness.
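Ford's quest framing maps onto a data flow you can caricature in a few lines. This toy is entirely illustrative (nothing like a real transformer's math), but it shows the shape he describes: the prompt is split into tokens, each token becomes a vector, and repeated layers mix every position with every other before anything answer-like exists:

```python
def tokenize(text):
    """The spell is broken into tokens before anything else happens."""
    return text.lower().split()

def embed(tokens):
    """Each token becomes a small vector (here, padded character codes)."""
    return [[float(ord(c)) for c in tok[:4].ljust(4)] for tok in tokens]

def mix_layer(vectors):
    """One 'layer': blend every position with the average of all positions,
    the cross-talk Ford describes between parts of the prompt."""
    n, width = len(vectors), len(vectors[0])
    mean = [sum(v[i] for v in vectors) / n for i in range(width)]
    return [[(x + m) / 2 for x, m in zip(v, mean)] for v in vectors]

state = embed(tokenize("Cast the spell"))
for _ in range(12):  # a real model stacks dozens of such layers
    state = mix_layer(state)
# `state` is no longer "the question": it is the question smeared across
# every position, the raw material an answer gets read out of.
```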

Dan Shipper: Yeah. And my feeling about this is that we are actually extremely well equipped to work with the way language models work, much better equipped, for people who are non-experts, than we are to work with code. And I think it's actually a good thing that they're anthropomorphized, because we have very advanced models for how to deal with human beings. And human beings are like this: they are squishy. They do not necessarily give you the same answer today as they did yesterday

you get that sense from a people pleaser. Like other people pleasers in my life, I can just see when they're doing that thing where they're telling me what I want to hear, and I'm like, stop. I just wanna know what you think, you know? And so I think we have a lot of basically innate biological machinery for dealing with this kind of interaction

Ford: When I'm talking about making it reproducible, that's me as a kind of programmer-outliner type. I get that. But I think what's tricky and thorny is when you talk to businesses and orgs, ones that really wanna use it, not ones that are just trying to figure out what generative AI means: that lack of reproducibility is really scary, because they need to know that something...

Dan Shipper: You know what I think? Here's what it sounds like you're saying to me, and push back on this, Paul: it sounds like you want it to work like a computer. But it doesn't work like a computer. It works like a new thing, and you should get used to new things instead of expecting it to work like a computer.

Paul Ford: The fantasy of this technology, and I think I agree with you that this is not actually what it's for, is that it will give me the interface of human beings but the discipline and predictability of the computer. And that isn't working yet. Absolutely not. And I do think that OpenAI is saying: just give us a minute. We're gonna get you that, we're gonna get you the people that you don't have to pay and that do exactly what you tell them. We just need a little more time. And at some level, I feel like that's where AGI has landed as a concept: a cohort of disciplined bots.

For organizations that are real, that is the actual value of this thing: it generates constructive confusion, and you have to address it, but then you can iterate through the confusion and get to goals. Yeah. And that is very, very real. And it is not saleable. That's not what anybody wants to buy.

it's very hard to be totally AI-native when you're retrofitting into a big company.

there'll be this huge layer of acceleration from relatively small organizations that can deal with that, take it in, learn it, apply it, and that have a desire to share the value. They want to do more, get paid less, but move faster. I think there are huge opportunities there. I think where people are screwed is if they're like, cool, now I can engineer 10 times faster.

I'm just sort of thinking about really big orgs I've worked with where the engineers just say no all the time and the CEO is really frustrated. But that's just life. That's just how it goes. That's what it's always been like. And then somebody shows up and they're just like, it doesn't have to be that way. You know, you can have everything. That's gonna feel so good, it's gonna feel so good, and they're gonna throw it all by the wayside.

They're gonna abandon their family, because suddenly the supply-chain SAP integration that was scheduled for 36 months now takes three. Oh my God. And the other thing too, and I'm sorry to get corporate with it: SMBs can't afford big enterprise software, but they also don't have CTOs. They still know what to do in the middle, though, and they can have really good tools now. Which means that instead of implementing Salesforce.com, they can buy a summer home.

So I think it's kind of a yes to everything as well as the status quo. Because it's such a big space, it's not gonna change, but I think we gotta watch the margins. I think stuff is gonna shift really weirdly in ways that we weren't expecting.

They should check out our website, aboard.com. We have a really, really nice... think of it as a super-pro, Webflow-style vibe-coding platform that lets you build stuff, but we build it with you. We don't just give you a tool. We have good product managers, we call 'em solution engineers, who listen and will help you out. So that's enough shilling. You can send me an email: paul.ford@aboard.com

