It’s Uncomfortable To Sit With “I Don’t Know”
There’s the thing where you’re reading an article in the newspaper about stuff you don’t know a ton about, and it all seems well and good. Then you read another article in the same paper about something you know intimately (your job, your neighborhood, your hobby, etc.) and there’s a good chance you’ll be like: hey! That’s not quite right!
Chris extends this idea to AI-generated code: if you don’t know or understand the generated code, you probably think, “Looks good to me!” But if you do know it, you probably think, “Wait a second, that’s not quite right.”
Here’s Jeremy Keith riffing on Chris’ thoughts:
I’m astounded by the cognitive dissonance displayed by people who say “I asked an LLM about {topic I’m familiar with}, and here’s all the things it got wrong” who then proceed to say “It was really useful when I asked an LLM for advice on {topic I’m not familiar with, hence why I’m asking an LLM for advice}.”
Kind of feels like this boils down to: How do we know what we know?
To be fair, that’s a question I’ve wrestled with my whole life.
And the older I get, the more I realize how often we barely know anything.
There’s a veneer of surety everywhere in the world.
There are industries of people and services who will take your money in exchange for a sense of surety — influencers, consultants, grifters, AI, they all exist because we are more than willing to pay with our time, attention, and money to feel like we “know” something.
“You’re absolutely right!”
But I, for one, often feel increasingly unsure of everything I thought I knew.
For example: I can’t count the number of times I thought I understood a piece of history, only to later find out that the commonly-accepted belief comes to us from a single source, written decades later in a diary or on a piece of parchment or a stone, by someone with blind spots, questionable incentives, or a flair for the dramatic. All of which leaves me seriously questioning the veracity and objectivity of something I thought I knew.
Which leads me to the next, uncomfortable question: how many other things that I thought I knew are full of uncertainty just like this?
All surety vanishes.
And that’s an uncomfortable place to be. Who wants to admit “I don’t know”?
It’s so easy to take what’s convenient over what corresponds to reality.
And that’s what scares me about AI.