(2025-06-09) Bjarnason Trusting Your Own Judgement On Ai Is A Huge Risk

Baldur Bjarnason: Trusting your own judgement on ‘AI’ is a huge risk. One of the major turning points in my life was reading my dad’s copy of Robert Cialdini’s Influence: The Psychology of Persuasion as a teenager.

Other highlights of my dad’s library – he was an organisational psychologist before he retired – included books by Fanon, Illich, and Goffman and a bunch on systems thinking and systems theory, so, in hindsight, I was probably never not going to be idiosyncratic.

No matter how smart you were, the mechanisms of your thinking could easily be tricked in ways that completely bypassed your logical reasoning and could insert ideas and trigger decisions that were not in your best interest.

Worse yet, reading through another set of books in my dad’s library – those written by Oliver Sacks – indicated that the complex systems of the brain, the ones that lend themselves to manipulation and disorder, are a big part of what makes us human.

But to a self-important asshole teenager, one with an inflated sense of his own intelligence, Cialdini’s book was a timely deflation as it was specifically written as a warning to people to be careful about manipulation.

His very eighties metaphor was that of a tape in your mind that went “click whirr” and played in response to specific stimuli.

These are what I tend to call psychological or cognitive hazards. Like the golfer’s sand trap, the only way to win is to avoid them entirely.

What made me especially receptive to this idea at the time was the recent experience of having been sold a shitty CD player – that was obviously a sub-par model – by an excellent salesman.

Software developers in particular are prone to being convinced by these hazards, and few in the field seem to have ever had that “oh my, I can’t always trust my own judgement and reasoning” moment that I had.

A recent example was an experiment by a Cloudflare engineer in using an “AI agent” to build an auth library from scratch.

From the project repository page:
*I was an AI skeptic*

If you don’t know what I mean by “an auth library”, just know that it’s the most security-sensitive and most attacked point of any given web service. The consequences of a security flaw in this kind of library are potentially huge.

The authors claimed that a Large Language Model (LLM) agent let them build it faster and more reliably than otherwise, and many in software dev are convinced that this is powerful evidence that these tools really work.

It’s not, for a good reason, but it’s also important to note the process here that bypasses the judgement of even smart people.

First off, that project is a single person acting without any controls. It has the evidentiary value of a blog post claiming that echinacea cured the author’s cold, complete with bloodwork showing no cold virus.

When all you have is gossip (software development research is not great, as it’s genuinely a separate problem domain from computer science), you have to make do with it (anecdata),

but when you’re trying to answer a question with huge ramifications, you really want proper research.

TypeScript, for those who aren’t familiar, is a Microsoft rewrite of JavaScript that’s incompatible with basic JavaScript in multiple ways and has a long history of breaking projects when updated.
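A minimal sketch of that incompatibility (my own illustration, not from Bjarnason’s post): the enum and the type annotations below are TypeScript-only syntax that a plain JavaScript engine will reject; they only become JavaScript after a compiler such as tsc strips or rewrites them.

```typescript
// A minimal sketch (not from the original post): this compiles with tsc,
// but a plain JavaScript engine rejects the TypeScript-only syntax below.

// Enums are TypeScript-only syntax with no direct plain-JavaScript
// equivalent until the compiler rewrites them.
enum Role {
  Admin,
  User,
}

// The parameter and return type annotations are also TypeScript-only.
function greet(name: string, role: Role): string {
  return `Hello, ${name} (${Role[role]})`;
}

console.log(greet("Ada", Role.Admin));
```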

The decision to use TypeScript over JavaScript, despite there not really being any evidence available that doing so makes the overall system safer or the work more productive, is relatively harmless. It won’t kill people. It won’t disenfranchise anybody.

Pretty much everything used to argue for or against TypeScript is either from self-experimentation or from anecdotal stories about other people’s self-experimentation.

And that’s where the psychological hazard comes in.

Self-experimentation is exactly how smart people get pulled into homeopathy or naturopathy, for example. It’s what makes them often more likely to fall for superstitions and odd ideas.

There are many classes of problems that simply cannot be effectively investigated through self-experimentation, and doing so risks inflicting Cialdini-style persuasion and manipulation on yourself.

Consider homeopathy: did your choosing of homeopathy over established medicine expose you to risks you weren’t aware of?

That last part is important because we, as humans, have an extremely poor understanding of probability.

One of the reasons why I wrote the original LLMentalist post two years ago was that I wanted people to understand that chatbots and the like are a psychological hazard.

Experimenting with them can lead to odd beliefs and a serious misunderstanding of both how you and the chatbots work.

Our only recourse as a field is the same as with naturopathy: scientific studies by impartial researchers.

Impartial research on “AI” is next to impossible at the moment.

Marks become hazards in their own right

A big risk of exposure to con artists, such as psychics, is when a smart person is fooled by their own subjective experience and cognitive blindness to probabilities and distributions, refuses to believe they were fooled, and becomes an evangelist for the con.

*And it’s not just limited to instilling a belief in the imminent arrival of Artificial General Intelligence. Subjective validation can be triggered by self-experimentation with code agents and chatbots. From the ever useful Wikipedia:

Subjective validation, sometimes called personal validation effect, is a cognitive bias by which people will consider a statement or another piece of information to be correct if it has any personal meaning or significance to them. People whose opinion is affected by subjective validation will perceive two unrelated events (i.e., a coincidence) to be related because their personal beliefs demand that they be related.*

on My AI Skeptic Friends Are All Nuts ((2025-06-02) Ptacek My AI Skeptic Friends Are All Nuts)

*I don’t recommend reading it, but you can if you want. It is full of half-baked ideas and shoddy reasoning.*

But one reason to highlight the shoddiness of its argument is that calls from authority figures are a cognitive hazard in and of themselves. If you aren’t familiar with how deceptive personal experience is when it comes to health, education, and productivity, you might find the personal, subjective experience of a notable figure in the field inherently convincing.

Even otherwise extremely sensible people fall for this, like Tim Bray:
I keep hearing talented programmers whose integrity I trust tell me “Yeah, LLMs are helping me get shit done.” The probability that they’re all lying or being fooled seems very low.

AI Angst

The odds are not low. They are, in fact, extraordinarily high. This is exactly the kind of psychological hazard – a lot to gain, subjective experiences, observations free of the context of their impact on other parts of the organisation or society – that might as well be tailor-made to trick developers who are simultaneously overwhelmingly convinced of their own intelligence and completely unaware of their own biases and limitations.

The problem, though, with responding to blog posts like that, as I did here (unfortunately), is that they aren’t made to debate or arrive at a truth, but to reinforce belief. The author is simultaneously putting himself on the record as having hardline opinions and putting himself in the position of having to defend them. Both are very effective at reinforcing those beliefs.

A very useful question to ask yourself when reading anything (fiction, non-fiction, blogs, books, whatever) is “what does the author want to believe is true?”

Because a lot of writing is just as much about the author convincing themselves as it is about them addressing the reader.

The only sensible action to take – which was also one of the core recommendations I made in my book The Intelligence Illusion – is to hold off. Adoption of “AI” during a bubble, without extensive study as to the large-scale impact of adoption, is the cognitive, productive, and creative equivalent to adopting a new wonder drug at scale without testing for side effects, complications, or adverse drug interactions.

We are also being let down by the epidemiology of our beliefs

They all belong to larger, more complex fields, touching on topics and phenomena that even expert practitioners often only half-understand.

We still have a lot to learn about the human body, especially the human brain.

*Even “AI” academics regularly talk about how they don’t fully understand how many of their larger models work.

These are perfect conditions for the spread of superstitious beliefs.*

Because these ideas are only half-understood and vague, we can fit them in with our other ideas without problems or conflicts. There’s always a way for a developer, for example, to explain away conflicting details or odd events. The vagueness creates room to accommodate contradiction and furthers belief.

This is specifically the kind of large-scale technology that needs thorough scientific testing because, on a micro-level, it might as well be purpose-designed to fool our judgement.

