(2023-03-21) Brander Pond Brains And Gpt4

Gordon Brander: Pond brains and GPT-4. Stafford Beer and Gordon Pask were building a pond that thinks. Their biological computing project set out to build ecosystems with inputs and outputs that could function as computers.

It seems more than likely that if we were given conscious control over all the parameters that bear on our internal milieu, our cognitive abilities would not prove equal to the task of maintaining our essential variables within bounds and we would quickly die. This, then, is the sense in which Beer thought that ecosystems are smarter than we are—not in their representational cognitive abilities, which one might think are nonexistent, but in their performative ability to solve problems that exceed our cognitive ones.

What is intelligence? What kinds of things are intelligent? Choose all that apply: a person, a parrot, an octopus (and other things).

The answer offered by cybernetics is "all of the above". To some, the critical test of whether a machine is or is not a 'brain' would be whether it can or cannot 'think.' But to the biologist the brain is not a thinking machine; it is an acting machine.

All of these things are intelligent. They get information and do something about it. (smells like an IsA issue)

Evolution is a pragmatist. It only cares about actual behavior.

Ecosystems are too high-dimensional for representational thought.

But if a pond can think, what else can think? Where are we going to draw the line? As Norbert Wiener discovered, the minimum requirement for intelligent goal-seeking behavior is feedback. A loop. That's it.

So, through this lens we might say a thermostat is intelligent. It gets information and does something about it.
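To make the loop concrete, here is a minimal sketch in Python (the setpoint, hysteresis, and room dynamics are invented for the example, not from the source) of a thermostat as a feedback loop: sense the variable, compare it to a goal, act on the difference, repeat.

```python
# Minimal sketch of a thermostat as a feedback loop.
# All numbers here are made-up toy values for illustration.

def thermostat_step(temperature, setpoint, heater_on, hysteresis=0.5):
    """One pass around the loop: read the variable, act on the error."""
    if temperature < setpoint - hysteresis:
        heater_on = True    # too cold: turn the heater on
    elif temperature > setpoint + hysteresis:
        heater_on = False   # too warm: turn it off
    return heater_on

# Simulate the loop: the room drifts colder, the heater pushes back,
# and the feedback keeps the essential variable within bounds.
temperature, heater_on = 15.0, False
for _ in range(20):
    heater_on = thermostat_step(temperature, setpoint=20.0, heater_on=heater_on)
    temperature += 1.0 if heater_on else -0.3  # toy room dynamics
    print(f"{temperature:5.1f} °C  heater={'on' if heater_on else 'off'}")
```

The point is not the code but the shape: output feeds back into input, and goal-seeking behavior falls out of the loop.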

Feedback can be found nearly everywhere, so intelligence can be found nearly everywhere.

"LLMs aren't actually thinking. They're just predicting the next token." It is common to encounter this claim in the discourse around LLMs. Is this right?

We are faced with a surprising fact. If you predict the next token at large enough scale, you can generate coherent communication, generalize, solve problems, and even pass the Turing Test. So is this actually thinking?
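To see what "just predicting the next token" looks like mechanically, here is a toy sketch using only the Python standard library: a character-level bigram model that counts which token follows which, then generates by sampling the next token over and over. It illustrates the bare mechanism, not how GPT-4 is built; the corpus is invented for the example.

```python
# A toy next-token predictor: a character bigram model.
# Illustrative only -- the corpus and scale are nothing like an LLM's.
import random
from collections import defaultdict

corpus = "a pond that thinks is a pond that acts "
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1          # count which token follows which

def next_token(token):
    """Sample the next token in proportion to how often it followed."""
    tokens, weights = zip(*counts[token].items())
    return random.choices(tokens, weights=weights)[0]

# Generate by repeatedly predicting the next token. Scale the model
# and corpus up by many orders of magnitude and this same loop is,
# mechanically, what an LLM does.
token, out = "a", ["a"]
for _ in range(40):
    token = next_token(token)
    out.append(token)
print("".join(out))
```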

