At Noema, Nathan Gardels reports on computer scientist Geoffrey Hinton, who contrasts digital computation with analog computation:

By contrast, digital computation “makes it possible to run many copies of exactly the same model on physically different pieces of hardware, which makes the model immortal. In this way, thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently. That is why chatbots like GPT-4 or Gemini can learn thousands of times more than any one person.” Such bots have an ever-growing capacity to absorb all available information, process it at quadrillions of calculations per second through the deep learning layers of artificial neural networks, and then efficiently share the distilled result. This ever-compounding acquisition of something like omniscience far surpasses any human potential.


As Hinton sees it, when these superintelligences compete with each other, a survival-of-the-fittest evolutionary culling will take hold. The one that can grab the most resources will be the smartest. Under competition, the imperative of self-preservation will incentivize the most intelligent systems to be the most aggressive, reaching for more control by working out the “subgoals” of their programmed orientation — combining what they have learned to figure out on their own how to get from A to B in any given circumstance. Once it attains that kind of chain-of-thought reasoning, a superintelligence would be able to prompt itself autonomously. It would have a mind of its own, setting its own goals and orientation. The most powerful would win out over the rest.


The worst nightmare would be if bad actors, “like Trump or Putin” to use Hinton’s example, hack and reorient the learning networks of the most powerful models. “We were once smarter than animals; now machines are smarter than us,” Hinton concludes. There is little evidence in history, he further observes, that a lesser intelligence was ever able to escape domination by a superior one. “If digital super-intelligence ever wanted to take control, it is unlikely that we could stop it. So the most urgent research question in AI is how to ensure that they never want to take control.”