(2023-03-10) Sloan Phase Change

Robin Sloan: Phase change. Group of internet thinkers has proposed a Summer of Protocols. I don’t know a ton about the program or its organizers, but I like the spirit captured on the website, and I feel like it might be a generative opportunity for someone(s) reading this.

Now, on to the AI thoughts, which, as you’ll see, loop around to connect to protocols again:

Earlier this week, in my main newsletter, I praised a new project from Matt Webb. Here, I want to come at it from a different angle. Briefly: Matt has built the Braggoscope.

Important to say: it doesn’t work perfectly. Matt reports that GPT-3 doesn’t always return valid JSON, and if you browse the Braggoscope, you’ll find plenty of questionable entries.

And yet! What a technique. (Matt credits Noah Brier for the insight.)

* Using GPT-3 as a function call.
* Using GPT-3 as a universal coupling.

It brings a lot within reach.
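A minimal sketch of the “function call” pattern, not Matt’s actual code: here `complete` stands in for any callable that sends a prompt to a GPT-3-style model and returns its raw text reply. Because, as noted above, the model doesn’t always return valid JSON, the wrapper parses and retries:

```python
import json

def call_as_function(instruction, text, complete, retries=2):
    """Treat a language model as a function call: free-form text in,
    structured JSON out. `complete` is any callable mapping a prompt
    string to the model's raw reply (a placeholder, not a real API)."""
    prompt = f"{instruction}\n\nInput:\n{text}\n\nReply with valid JSON only."
    last = None
    for _ in range(retries + 1):
        last = complete(prompt)
        try:
            return json.loads(last)  # valid JSON: the "function" returns
        except json.JSONDecodeError:
            continue  # invalid JSON: ask again
    raise ValueError(f"no valid JSON after {retries + 1} attempts: {last!r}")
```

The retry loop is the whole trick: you accept that the coupling is lossy and wrap it in ordinary defensive code.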

I think the magnitude of this shift … I would say it’s on the order of the web of the mid-90s?

Language models as universal couplers begin to suggest protocols that really are plain language. What if the protocol of the GPT-alikes is just a bare TCP socket carrying free-form requests and instructions? What if the RSS feed of the future is simply my language model replying to yours when it asks, “What’s up with Robin lately?”
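The imagined protocol above can be made concrete as a toy: a bare TCP socket that carries one free-form request per connection and replies with whatever a responder (standing in for my language model) says. Everything here is a hypothetical sketch of the idea, not an existing protocol:

```python
import socket
import threading

def serve_plain_language(respond, host="127.0.0.1", port=0):
    """Toy 'RSS feed of the future': a bare TCP socket. Each connection
    sends a plain-language request; `respond` (a placeholder for a
    language model) produces the plain-language reply."""
    srv = socket.create_server((host, port))

    def loop():
        while True:
            conn, _ = srv.accept()
            with conn:
                request = conn.recv(4096).decode("utf-8")
                conn.sendall(respond(request).encode("utf-8"))

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()  # (host, port) a client can connect to

def ask(addr, question):
    """Client side: open a socket, send plain language, read the reply."""
    with socket.create_connection(addr) as c:
        c.sendall(question.encode("utf-8"))
        c.shutdown(socket.SHUT_WR)
        return c.recv(4096).decode("utf-8")
```

No headers, no schema, no content negotiation: the “spec” is just whatever the two models can say to each other.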

An important fact about these language models — one that sets them apart from, say, the personal computer, or the iPhone — is that their capabilities have been surprising, often confounding, even to their creators.

AI at this moment feels like a mash-up of programming and biology. The programming part is obvious; the biology part becomes apparent when you see AI engineers probing their own creations the way scientists might probe a mouse in a lab.

It’s become clear that the “returns to scale” — both in terms of (1) a model’s size and (2) the scope of its training data — are exponential and nonlinear.

Nonlinearity is, to me, the most interesting part.

I’ve found it helpful, these past few years, to frame my anxieties and dissatisfactions as questions. For example, fed up with the state of social media, I asked: what do I want from the internet, anyway? It turns out I had an answer to that question.

Where the GPT-alikes are concerned, a question that’s emerging for me is: What could I do with a universal function — a tool for turning just about any X into just about any Y with plain language instructions?

I think “brace for it” might mean imagining human-only spaces, online and off. We might be headed, paradoxically, for a golden age of “get that robot out of my face”.

Set your stance a little wider and form a question that actually matters to you. It might be as simple as: is this kind of capability, extrapolated forward, useful to me and my work? If so, how?

