“I Don’t See Why Not”

Excuse my rant[1].

Demis Hassabis, the Nobel Prize-winning CEO of DeepMind, was on 60 Minutes and floored me when he predicted:

We can cure all diseases with the help of AI. [The end of disease] is within reach, maybe within the next decade or so. I don't see why not.

“I don’t see why not” is doing a lot of work in that sentence.

As I’m sure you know from working on problems, “I don’t see why not” moments are usually followed by, “Actually, this is going to be a bit harder than we thought…”

If you want to call me a skeptic, that’s fine. But “the end of disease” in the next decade is some ostentatious claim chowder IMHO. As one of the YouTube comments says:

The goodies are always just another 5-10 years ahead, aren't they

Generally speaking, I tend to regard us humans as incredibly short-sighted. So if I had to place a wager, I’d put my money on the end of disease not happening in the next decade (against my wishes, of course).

But that’s not really how AI predictions work. You can’t put wagers on them, because AI predictions aren’t things you get held accountable for.

“Yeah, when I said that, I added ‘I don’t see why not’ but we quickly realized that X was going to be an issue and now I’m going to have to qualify that prediction. Once we solve X, I don’t see why not.”

And then “once we solve Y”. And then Z.

“Ok, phew, we solved Z. Now we’re close.”

And then AA. And AB. And AC. And…

I get it, it’s easy to sit here and play the critic. I’m not the “man in the arena”. I’m not a Nobel Prize winner.

I just want to bookmark this prediction for an accountability follow-up in April 2035. If I’m wrong, HOORAY! DISEASE IS ENDED!!! I WILL GLADLY EAT MY HAT!

But if not, does anyone’s credibility take a hit?

You can’t just say stuff that’s not true and continue having credibility.

Unless you’re AI, of course.