In a new manifesto, OpenAI’s Sam Altman envisions an AI utopia—and reveals glaring blind spots

by Hyeon Yun

By now, many of us are probably familiar with artificial intelligence hype. AI will make artists redundant! AI can do lab experiments! AI will end grief!

Even by these standards, the latest proclamation from OpenAI chief executive Sam Altman, published on his personal website this week, seems remarkably hyperbolic. We are on the verge of “The Intelligence Age”, he declares, powered by a “superintelligence” that may be just a “few thousand days” away. The new era will bring “astounding triumphs,” including “fixing the climate, establishing a space colony, and the discovery of all of physics.”

Altman and his company—which is trying to raise billions from investors and pitching unprecedentedly huge data centers to the US government, while shedding key staff and ditching its nonprofit roots to give Altman a share of ownership—have much to gain from hype.

However, even setting aside these motivations, it’s worth taking a look at some of the assumptions behind Altman’s predictions. On closer inspection, they reveal a lot about the worldview of AI’s biggest cheerleaders—and the blind spots in their thinking.

Steam engines for thought?

Altman grounds his marvelous predictions in a two-paragraph history of humanity:

“People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed impossible.”

This is a story of unmitigated progress heading in a single direction, driven by human intelligence. The cumulative discoveries and inventions of science and technology have, in Altman's telling, led us to the computer chip and, inexorably, to artificial intelligence, which will take us the rest of the way to the future. This view owes much to the futuristic visions of the singularitarian movement.

Such a story is seductively simple. If human intelligence has driven us to ever-greater heights, it is hard not to conclude that better, faster, artificial intelligence will drive progress even farther and higher.

This is an old dream. In the 1820s, when Charles Babbage saw steam engines revolutionizing human physical labor in England’s industrial revolution, he began to imagine constructing similar machines for automating mental labor. Babbage’s “analytical engine” was never built, but the notion that humanity’s ultimate achievement would entail mechanizing thought itself has persisted.

According to Altman, we’re now (almost) at that mountaintop.

Deep learning worked—but for what?

The reason we are so close to the glorious future is simple, Altman says: “deep learning worked.”

Deep learning is a particular kind of machine learning that involves artificial neural networks, loosely inspired by biological nervous systems. It has certainly been surprisingly successful in a few domains: deep learning is behind models that have proven adept at stringing words together in more or less coherent ways, at generating pretty pictures and videos, and even at contributing to the solutions of some scientific problems.

So the contributions of deep learning are not trivial. They are likely to have significant social and economic impacts (both positive and negative).

But deep learning works only for a limited set of problems. Altman knows this: “Humanity discovered an algorithm that could really, truly learn any distribution of data (or really the underlying ‘rules’ that produce any distribution of data).”

That’s what deep learning does, and that’s how it works. It’s an important technique that can be applied across many domains. But learning the rules behind a distribution of data is far from the only kind of problem that exists.

Not every problem is reducible to pattern matching. Nor do all problems provide the massive amounts of data that deep learning requires to do its work. Nor is this how human intelligence works.
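The point can be made concrete with a toy example. The sketch below (my own illustration, not anything from Altman or OpenAI) trains a tiny neural network by gradient descent to fit samples of a sine wave—learning the "rules" behind one distribution of data—and then shows that those learned rules break down on inputs far outside the training range. All names and sizes here are arbitrary choices for illustration.

```python
import numpy as np

# A minimal "learning rules from data" sketch: a one-hidden-layer
# network fitted by full-batch gradient descent to samples of sin(x).
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))  # training inputs
y = np.sin(X)                                  # the pattern to learn

# parameters: 1 input -> 32 tanh hidden units -> 1 output
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(3000):
    h = np.tanh(X @ W1 + b1)         # hidden activations
    pred = h @ W2 + b2               # network prediction
    err = pred - y                   # error on the training data
    # backpropagation: gradients of the squared-error loss
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ gh / len(X); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Inside the training range, the learned rules track the pattern...
in_range = np.linspace(-np.pi, np.pi, 100).reshape(-1, 1)
in_err = np.abs(predict(in_range) - np.sin(in_range)).mean()

# ...but well outside it, they do not: the network never saw this region.
out_range = np.linspace(3 * np.pi, 4 * np.pi, 100).reshape(-1, 1)
out_err = np.abs(predict(out_range) - np.sin(out_range)).mean()
```

The in-distribution error ends up small while the out-of-distribution error is much larger: the network has captured a pattern in its data, not the underlying concept of a sine wave. That gap is exactly the limitation the paragraph above describes.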

A big hammer looking for nails

What is interesting here is that Altman thinks “rules from data” will go so far toward solving all of humanity’s problems.

There is an adage that a person holding a hammer is likely to see everything as a nail. Altman is now holding a big and very expensive hammer.

Deep learning may be working, but only because Altman and others are starting to reimagine (and build) a world composed of distributions of data. There’s a danger here that AI is starting to limit, rather than expand, the kinds of problem-solving we are doing.

What is barely visible in Altman’s celebration of AI are the expanding resources needed for deep learning to work. We can acknowledge the great gains and remarkable achievements of modern medicine, transportation and communication (to name a few) without pretending these have not come at a significant cost.

They have come at a cost both to some humans—for whom the gains of the global north have meant diminishing returns—and to animals, plants and ecosystems, ruthlessly exploited and destroyed by the extractive might of capitalism plus technology.

Although Altman and his booster friends might dismiss such views as nitpicking, the question of costs goes right to the heart of predictions and concerns about the future of AI.

Altman is certainly aware that AI is facing limits, noting “there are still a lot of details we have to figure out.” One of these is the rapidly expanding energy costs of training AI models.

Microsoft recently announced a US$30 billion fund to build AI data centers and generators to power them. The veteran tech giant, which has invested more than US$10 billion in OpenAI, has also signed a deal with owners of the Three Mile Island nuclear power plant (infamous for its 1979 meltdown) to supply power for AI. The frantic spending hints at desperation in the air.

Magic or just magical thinking?

Given the magnitude of such challenges, even if we accept Altman’s rosy view of human progress up to now, we might have to acknowledge that the past may not be a reliable guide to the future. Resources are finite. Limits are reached. Exponential growth can end.

What’s most revealing about Altman’s post is not his rash predictions. Rather, what emerges is his sense of untrammeled optimism in science and progress.

This makes it hard to imagine that Altman or OpenAI takes seriously the downsides of technology. With so much to gain, why worry about a few niggling problems? When AI seems so close to triumph, why pause to think?

What is emerging around AI is less an “age of intelligence” and more an “age of inflation”—inflating resource consumption, inflating company valuations and, most of all, inflating the promises of AI.

It’s certainly true that some of us do things now that would have seemed magic a century and a half ago. That doesn’t mean all the changes between then and now have been for the better.

AI has remarkable potential in many domains, but imagining it holds the key to solving all of humanity’s problems—that’s magical thinking too.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
In a new manifesto, OpenAI’s Sam Altman envisions an AI utopia—and reveals glaring blind spots (2024, September 26)
retrieved 27 September 2024
from https://techxplore.com/news/2024-09-manifesto-openai-sam-altman-envisions.html

