The AI hype is spiraling out of control

I hope everyone is enjoying the latest breakthroughs in artificial intelligence (AI) as much as I am.

In one of the latest developments in AI, a new computer program – DALL-E 2 – generates images from a text prompt. Give it the phrase “Club Penguin Bin Laden” and it will duly draw Osama bin Laden as a cartoon penguin. For some, this was more than a bit of fun: it was further proof that we will soon be ruled by machines.

Sam Altman, chief executive of OpenAI – the now-for-profit company behind the model that underpins DALL-E – suggested artificial general intelligence (AGI) was within reach. So did Elon Musk, who co-founded the company. Musk even put a date on it: 2029.

Yet, when we look closer, we see that DALL-E really isn’t very smart at all. It’s a crude collage maker, which only works if the instructions are simple and clear, such as “Easter Island statue giving a TED talk”. It struggles with more subtle prompts and fails to render everyday objects: fingers come out as grotesque tubers, for example, and it can’t draw a hexagon.

DALL-E is actually a fine example of what psychologists call priming: because we expect to see a bin Laden penguin, that’s what we see – even if the result doesn’t look much like bin Laden or a penguin.

“Impressive at first sight. Less impressive on second viewing. Often a completely useless exercise on the third,” is how Filip Piekniewski, a scientist at Accel Robotics, describes such demonstrations – and DALL-E fits the rule perfectly.

Today’s AI hyperbole has gotten completely out of hand, and it would be remiss not to set the absurdity of the claims against the reality, as the two now seriously diverge. Three years ago, Google CEO Sundar Pichai told us that AI would be “more profound than fire or electricity”. Yet driverless cars are further away than ever, and AI has yet to replace a single radiologist.

There have been some small improvements to software processes, like the wonderful way old movie footage can be brought back to life by upscaling it to 4K resolution and 60 frames per second. Your smartphone camera now takes slightly better photos than it did five years ago. But the confident predictions of a few years ago – that large swaths of white-collar jobs in finance, media and law would disappear – now sound like fantasy.

Any economist who confidently extrapolates deep structural economic changes – of a magnitude that affects GDP – from AI models such as DALL-E should keep those thoughts to themselves. This kind of wild extrapolation was given a name by philosopher Hubert Dreyfus, who brilliantly debunked the first big AI hype of the 1960s. He called it the “first step fallacy”.

His brother, Stuart, a true pioneer of AI, explained it this way: “It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon.”

“Deep learning”, as it is misleadingly called today, is simply brute-force statistical approximation, made possible by the fact that computers can now churn through far more data than they once could in search of regularities or statistical patterns.

AI has become good at mimicry and pastiche, but it has no idea what it’s drawing or saying. It is brittle and breaks easily. And over the past decade it has gotten bigger but not much smarter, which means the fundamental issues remain unresolved.

Earlier this year, neuroscientist, entrepreneur and serial AI critic Gary Marcus had had enough. Taking Musk up on his 2029 prediction, Marcus challenged the Tesla chief to a bet. By 2029, he proposed, AI models like GPT – which uses deep learning to produce human-like text – should be able to pass five tests. For example, they must be able to read a book and reliably answer questions about its plot, its characters and their motivations.

A foundation agreed to host the bet and the stake rose to $500,000 (£409,000). Musk didn’t take it. For his pains, Marcus found himself labeled as what Scientologists call a “suppressive person”. It is not a field that takes kindly to criticism: when GPT was launched, Marcus and similarly skeptical researchers were promised access to the system. It never came.

“We need much stricter regulation around AI, and even around claims made about AI,” Marcus told me last week. But that’s only half the picture.

I think the reason we’re so easily fooled by the output of AI models is that, like Agent Mulder in The X-Files, we want to believe. The Google engineer who became convinced that his chatbot had developed a soul was one example, but it’s journalists who seem to want to believe in magic more than anyone.

The Economist devoted a 4,000-word article last week to the claim that “huge foundation models are accelerating progress in AI”, but made sure the spell stayed unbroken by quoting only the faithful and none of the critics, such as Marcus.

Plus, plenty of people are doing rather well as things stand – fretting over a hypothetical future that may never arrive. Quangos abound: the UK’s research funding body, for example, recently injected £3.5m of taxpayers’ money into a programme called Enabling a Responsible AI Ecosystem.

There is no use pointing out that the emperor has no clothes: the courtiers would be out of a job.
