In a recent interview with MIT Technology Review, AI pioneer Mustafa Suleyman predicted that generative AI tools like DALL-E and ChatGPT represent just a temporary phase in AI’s evolution. The co-founder of DeepMind and Inflection argues that the next wave will be interactive AI: bots capable of autonomous, goal-directed action.

For those unfamiliar with him, Mustafa Suleyman is a co-founder of the artificial intelligence company DeepMind, which Google acquired in 2014. He led various AI policy teams at Google before leaving to start a new AI company, Inflection, in 2022. Suleyman has been a prominent voice in AI ethics and policy, advocating for the safe and beneficial development of the technology.

“The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language,” said Suleyman. “Now we’re in the generative wave, where you take that input data and produce new data.”

But he believes the third wave “will be the interactive phase. That’s why I’ve bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you’re going to talk to your AI.”

More Than Just Chatbots

Crucially, Suleyman emphasizes that these conversational AIs will go beyond chit-chat to take meaningful actions in the world.

“These AIs will be able to take actions. You will just give it a general, high-level goal and it will use all the tools it has to act on that. They’ll talk to other people, talk to other AIs,” he said.

Suleyman called this shift “very, very profound” and suggested it could represent one of the most significant moments in technology history. He compared it to technologies like electricity and the steam engine in terms of the change it could catalyze across society.

The Inflection co-founder explained that today’s software is static, only doing what it’s told. Interactive AI, by contrast, would be “animated”: able to take the initiative autonomously within goals and boundaries specified by humans.
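
As a rough illustration of that difference, the sketch below shows what a goal-directed loop might look like in practice: the agent is handed a high-level goal, repeatedly picks a tool, and acts until it decides it is done. Every name in it (the `Agent` class, the toy `search_web` and `send_message` tools, the rule-based `plan_next_step` planner) is a hypothetical stand-in, not anything Suleyman or Inflection has described.

```python
# Hypothetical sketch of a goal-directed "interactive AI" loop.
# The planner is a trivial rule-based stand-in for a language model,
# and the tools are toy functions. Nothing here reflects a real product or API.

from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    # Toy "tools" the agent can call; a real system would wrap search,
    # email, calendars, other people's inboxes, or other AIs.
    def search_web(self, query: str) -> str:
        return f"(pretend search results for '{query}')"

    def send_message(self, recipient: str, text: str) -> str:
        return f"(pretend message to {recipient}: '{text}')"

    def plan_next_step(self):
        """Stand-in planner: decide which tool to call next, or stop.

        In an interactive-AI system this decision would come from a model
        reasoning over the goal and the history, not from fixed rules."""
        if not self.history:
            return ("search_web", {"query": self.goal})
        if len(self.history) == 1:
            return ("send_message", {"recipient": "user", "text": "Here is what I found."})
        return None  # goal considered satisfied

    def run(self, max_steps: int = 5) -> list:
        """Loop: plan a step, execute the chosen tool, record the result."""
        for _ in range(max_steps):
            step = self.plan_next_step()
            if step is None:
                break
            tool_name, args = step
            result = getattr(self, tool_name)(**args)
            self.history.append((tool_name, args, result))
        return self.history


if __name__ == "__main__":
    agent = Agent(goal="find a good time for the team offsite")
    for tool, args, result in agent.run():
        print(tool, args, "->", result)
```

The point of the sketch is simply the control flow: a goal goes in, a sequence of tool calls comes out, with no human clicking a button at each step.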

Regulation Will Be Key

With such powerful, autonomous systems comes increased risk, as Suleyman acknowledged. He believes regulation will be essential to ensure AI safety and prevent abuse.

“Humans will always remain in command. Essentially, it’s about setting boundaries, limits that an AI can’t cross,” he noted.

Suleyman suggested restricting capabilities like recursive self-improvement, drawing a comparison with how dangerous technologies such as nuclear materials are licensed. He also pointed to existing frameworks regulating complex domains like aviation as models that could inspire AI governance.
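
To picture what “limits that an AI can’t cross” could mean at the software level, here is a minimal, hypothetical guardrail sketch: every action an agent proposes is checked against a human-specified policy before it is allowed to run. The action names, the allow-list, and the blocked capabilities are illustrative assumptions, not anything proposed in the interview.

```python
# Hypothetical guardrail layer: each proposed action is checked against a
# human-specified policy *before* execution. The policy contents and the
# action names below are illustrative assumptions only.

ALLOWED_ACTIONS = {"search_web", "send_message"}            # human-approved tools
FORBIDDEN_ACTIONS = {"modify_own_code", "acquire_compute"}  # hard limits


class BoundaryViolation(Exception):
    """Raised when an agent proposes an action outside its boundaries."""


def execute_with_guardrails(action: str, run_action):
    """Run an action only if the policy allows it; refuse otherwise."""
    if action in FORBIDDEN_ACTIONS:
        raise BoundaryViolation(f"Action '{action}' is permanently blocked")
    if action not in ALLOWED_ACTIONS:
        raise BoundaryViolation(f"Action '{action}' is not on the allow-list")
    return run_action()


if __name__ == "__main__":
    print(execute_with_guardrails("search_web", lambda: "ok: search ran"))
    try:
        execute_with_guardrails("modify_own_code", lambda: "should never run")
    except BoundaryViolation as err:
        print("blocked:", err)
```

In this framing, the boundary is enforced outside the model itself, so the check holds regardless of what the agent decides to attempt.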

While conceding that concerns around AI are legitimate, Suleyman struck an optimistic tone about the feasibility of regulating the technology, pointing to past successes in moderating online platforms.

He concluded that “we can very clearly see that with every step up in the scale of these large language models, they get more controllable.” But ensuring human control over autonomous, interactive AI will require continued innovation and diligent oversight.

Suleyman’s predictions underscore that generative AI, remarkable as it is, likely represents just the tip of the iceberg of future AI potential. But maximizing the benefits while minimizing the risks will require collaboration among technologists, regulators and the public.