Jim Nielsen’s Blog

AI Is Like a Lossy JPEG

That’s something I’ve heard before — ChatGPT Is a Blurry JPEG of the Web — and it kind of made sense when I read it. But Paul Ford, writing in the Aboard Newsletter, helped it make even more sense in my brain.

[AI tools] compress lots and lots of information—text, image, more—in a very lossy way, like a super-squeezed JPEG. Except instead of a single image, it’s “The Web” or “five million images.”

The nice thing about lossy compression in a JPEG is that it’s obvious. You can see the compression artifacts. But with AI? Not so much:

because of the way AI works, constantly guessing and filling in blanks, you can’t see the artifacts. It just keeps going until people have twelve fingers, stereotypes get reaffirmed, utter nonsense gets spewed, and so forth. You can see the forest, but the trees are all weird.

What you end up with is text that looks like knowledge, but, like a lossy JPEG, upon closer inspection you find a lack of clarity. As Paul notes, you can see the forest, but zoom in to the details of any tree and stuff doesn’t look right.

Side by side view of an image of a forest. On the top is the original and a zoomed view. On the bottom is the compressed version and a zoomed in view. Zoomed out you can't really see the difference, but zoomed in on the details and there's a huge difference. The one with compression has huge blocks of solid colors.

AI is that: lossy compression, but at the level of knowledge, not pixels.
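The idea can be sketched in a few lines. This is a toy model, not how JPEG’s actual transform works: it “compresses” 8-bit pixel values by quantizing them into a handful of coarse buckets, then reconstructs by guessing the middle of each bucket. The broad shape of the data survives; every individual value becomes a guess.

```python
# Toy lossy "compression": quantize 8-bit pixel values into 4 coarse
# buckets (in the spirit of JPEG discarding fine detail), then
# reconstruct by guessing the middle of each bucket.
pixels = [12, 13, 11, 200, 201, 199, 90, 91]

def compress(values, levels=4):
    step = 256 // levels
    return [v // step for v in values]  # keep only the coarse bucket index

def decompress(buckets, levels=4):
    step = 256 // levels
    return [b * step + step // 2 for b in buckets]  # fill the blank with a guess

restored = decompress(compress(pixels))
print(restored)  # [32, 32, 32, 224, 224, 224, 96, 96]
# Zoomed out, the structure survives (a dark run, a bright run, a mid run).
# Zoomed in, no single value is right -- the trees are all weird.
```

The originals are gone for good: nothing in the bucket index can tell you whether a pixel was 11 or 13, so decompression has to fill in the blank, which is the same move an AI model makes at the level of facts instead of pixels.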

Meme-like photo of the universe with the caption “What you think you know when you use AI” and below it is a zoomed in part of the same photo with really bad lossy compression artifacts and the caption “What you actually know of any one detail”

It follows that, as Paul notes, you end up with a tool whose output is not only akin to the lossy visual artifacts of a JPEG, but one that introduces into the world the cognitive and social equivalent of those big blocky compression artifacts.

As more and more people create, consume, and communicate with AI, more and more people will begin to understand themselves through a lens of lossiness — a lack of clarity. As Marshall McLuhan said: we shape our tools and then our tools shape us.

Paul draws one last parallel: AI is like a “slightly high intern”:

You can’t really trust their output, but they do help you move things along. They’re good at using the web to gather stuff. They’re bright [but their] teen brains can’t quite figure out why you want this stuff, just that they have to do it...So they do what you ask, but they fill in the blanks with whatever comes to mind and hope you don’t get too annoyed about it.

AI is basically that—a perpetually cotton-mouthed undergrad who doesn’t really need the job—but, thank God, many hundreds of times faster. We wanted a smart robot that does our laundry and maintains our jetpacks, but we got a 19-year-old accelerated hyperstoner with no respect for copyright. But as always, we’ll work with what shows up.

Indeed. We work with what shows up.

But when people start saying these “slightly high interns” should and will replace us all (and our best systems) in the immediate future — that gives me pause.