AI Expert Slams Letter Urging 6-Month Pause on AI Training as Seriously Understating the Situation: "We Are All Going to Die."


Image: Nacon Studio Milan

An AI expert is claiming that the letter published by the Future of Life Institute, signed by Elon Musk and more than 1,600 others, seriously understates the situation. Eliezer Yudkowsky is a decision theorist who leads research at the Machine Intelligence Research Institute and has over twenty years of experience in the field. He slams the letter in an op-ed piece for Time, arguing that a six-month pause in AI training is woefully inadequate given how long it will take to study the behavior of current AI systems.

From Time:

“On Feb. 7, Satya Nadella, CEO of Microsoft, publicly gloated that the new Bing would make Google ‘come out and show that they can dance.’ ‘I want people to know that we made them dance,’ he said.

This is not how the CEO of Microsoft talks in a sane world. It shows an overwhelming gap between how seriously we are taking the problem, and how seriously we needed to take the problem starting 30 years ago.

We are not going to bridge that gap in six months.

It took more than 60 years between when the notion of Artificial Intelligence was first proposed and studied, and for us to reach today’s capabilities. Solving safety of superhuman intelligence—not perfect safety, safety in the sense of “not killing literally everyone”—could very reasonably take at least half that long.”

It’s not you, it’s me

Eliezer’s response clarifies that an AI’s decision to eradicate all life on Earth would not be based on emotion: “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

The expert goes on to explain that those developing AI either have no plan at all for AI alignment or intend to have future iterations of AI do their alignment work for them. AI alignment, per ai-alignment.com, refers to ensuring that an AI acts in accordance with the intent of its operators.

“OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.”

Whether AI can become self-aware is of great concern, but it is not presently known how long it will take a future, or even a current, version of AI to reach that point. Eliezer concedes that GPT-4 is probably just imitating its training data to produce responses, but notes that too little research has been done to know where things are heading, and he criticizes its developers’ ignorance of what is going on inside the model.

“If that’s our state of ignorance for GPT-4, and GPT-5 is the same size of giant capability step as from GPT-3 to GPT-4, I think we’ll no longer be able to justifiably say “probably not self-aware” if we let people make GPT-5s. It’ll just be “I don’t know; nobody knows.” If you can’t be sure whether you’re creating a self-aware AI, this is alarming not just because of the moral implications of the “self-aware” part, but because being unsure means you have no idea what you are doing and that is dangerous and you should stop.”

A little Skynet goes a long way

The expert doesn’t hold back on the doom and gloom, and with good reason. Considering the alarming, unnerving chatbot responses seen recently from Google’s Bard and the newly improved Bing, along with the infamous corruption of Tay, Microsoft’s earlier attempt at a Twitter bot in 2016, there are plenty of examples of good chatbots gone bad. There is, after all, an old expression: garbage in, garbage out.

“We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.”

Ultimately, the expert says the whole world needs to halt all AI training indefinitely and study AI where it already stands in its development. International agreements would need to be made to create and enforce laws punishing violators who continue to build AI computer farms (aka GPU clusters). This includes tracking all GPUs being sold, monitoring how many any particular country has, and using military action to eliminate rogue clusters once a violator has been identified.

Join the discussion in our forums...

Peter Brosdahl
As a child of the ’70s, I was part of the many who became enthralled by the video arcade invasion of the 1980s. Saving money from various odd jobs, I purchased my first computer, a used Atari 400, from a friend of my dad around 1982. It eventually became a lifelong passion for upgrading and modifying equipment that, of course, led to a career in IT support.
