Organic Intelligence
Jeremy wrote about how the greatest asset of a company like Google is the trust people place in it:
If I use a [knowledge tool] I need to be able to trust [it] is good...I don’t expect perfection, but I also don’t expect to have to constantly be thinking “was this generated by a large language model, and if so, how can I know it’s not hallucinating?”
That question — “Was this generated, in some part, by an LLM and how can I assess its accuracy?” — is becoming a larger and larger part of my life. It’s taxing.
Jeremy’s post made me think[1] about the parallels between the rise of industrial farming and the rise of AI (or, might I say, industrial knowledge work).
Artificial food is to organic food as artificial intelligence is to natural (i.e., organic) intelligence.
At one point in time, we said “eggs” and generally agreed on what that meant. With the rise of industrial farming, we began to understand that not all eggs are created equal, nor do they all match our mental model of where eggs come from. So terms like “organic” and “free-range” and “cage-free” began to surface in our vernacular to help us suss out which eggs actually match the mental model behind the word “eggs” printed on the label.
It’s like that ice cream that can’t legally be called ice cream, only a “frozen dairy dessert”. Or chocolate that can’t be called chocolate, so it’s labeled “chocolate-flavored” or “chocolatey”.
Now, with LLMs, a search result isn’t a search result. An image isn’t an image. A video isn’t a video.
We’re going to need a lot more qualifiers.