On its own, in just a few hours of experimental self-play, AlphaZero blew past a level of Chess mastery that took humans over 1,500 years to attain.

If AlphaZero can achieve this mastery in such a short period of time, where will artificial intuition be in one year? Five years?

13D Research

--

The following article was originally published in “What I Learned This Week” on January 18, 2018. To learn more about 13D’s investment research, please visit our website.

Of all the subjects we cover in this publication, machine learning, or as we call it, artificial intuition, is the most important, carrying the greatest implications for the human race. We have written for years that machine learning is accelerating at a pace beyond our comprehension. If AlphaZero can achieve this mastery in such a short period of time, where will artificial intuition be in one year? Two years? Five years?

Many years ago, in college, we wrote a paper predicting that upon birth a computer chip — comprising all of human knowledge and wisdom — would be implanted in our brains so that humanity wouldn’t keep repeating the mistakes it had been making for thousands of years. Depending upon which vantage point you take, artificial intuition is either the most important development in the history of the human race or the most dangerous.

In WILTWs December 14, 2017 and December 21, 2017, we began to explore the implications of DeepMind’s AlphaZero algorithm achieving a superhuman level of play in Chess within a few hours of “tabula rasa” self-play. In early December, AlphaZero defeated Stockfish — the reigning Chess program — within four hours, or 300k “training steps.” In the Japanese game Shogi, AlphaZero outperformed the reigning computer champion — Elmo — in less than two hours, or 110k steps. International Chess Grandmaster Viswanathan Anand underscores that AlphaZero’s ability to figure “everything out from scratch…is scary and promising…” One can only wonder what level of learning AlphaZero could reach if it kept playing for days or weeks.
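
To make “tabula rasa” self-play concrete, below is a minimal, hypothetical sketch of the idea in Python (our illustration, not DeepMind’s code). Tic-tac-toe stands in for Chess, and a simple lookup table stands in for AlphaZero’s deep neural network and tree search; the agent starts with zero knowledge and improves purely by playing against itself.

import random
from collections import defaultdict

# Toy "tabula rasa" self-play: the agent knows nothing but the rules and
# learns a value table purely from games against itself.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)    # board string -> estimated value for "X"
ALPHA, EPSILON = 0.2, 0.1      # learning rate, exploration rate

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:
        return random.choice(moves)               # explore
    def value_after(m):                           # exploit learned values:
        return values[board[:m] + player + board[m + 1:]]
    return (max if player == "X" else min)(moves, key=value_after)

def self_play_game():
    board, player, history = "." * 9, "X", []
    while winner(board) is None and "." in board:
        m = choose_move(board, player)
        board = board[:m] + player + board[m + 1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for state in history:      # pull every visited position toward the result
        values[state] += ALPHA * (outcome - values[state])

for step in range(20_000):     # each "training step" here is one full game
    self_play_game()
print(f"learned values for {len(values):,} positions from self-play alone")

The real system replaces the lookup table with a deep network and the one-move lookahead with Monte Carlo tree search, but the loop has the same shape: play yourself, learn from the outcome, repeat.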

We recently read Life 3.0: Being Human in an Age of Artificial Intelligence by MIT professor Max Tegmark and Homo Deus by historian Yuval Noah Harari. Both books explore the ominous implications of the emerging AI era. The world is at a major inflection point that will determine our AI future and perhaps the survival of the human species. What are the key implications?

  • The rate of improvement in machine learning is accelerating at a mind-boggling rate. In the last month, China’s Alibaba has developed an AI that beat humans in a Stanford University reading-comprehension test. Japanese researchers have also created a neural network that can read and interpret complex human thoughts — surpassing prior achievements.
  • Governments realize that whoever becomes the leader in AI could rule the world and are investing huge sums to take the lead. No doubt Russia, Israel and Iran, to name a few, are working intensely on it. Earlier this month, Beijing announced plans to build a $2 billion AI research park. The Chinese government is also building a $10 billion quantum-computing center, and has opened a national lab — operated by Baidu — dedicated to making the nation more competitive in machine learning. Additionally, Alibaba is doubling its R&D spending to $15 billion to focus on AI and quantum computing. As a result, DeepMind may well not be the sole leader in deep learning systems; it may simply be the only one that has chosen to publish its results.
  • Superintelligent AI will increasingly be able to tackle complex problems — supercharged by quantum computing systems. Powerful AI systems may be able to find solutions to global grand challenges that have been unsolvable by humans, whether climate change, poverty, or the lack of clean water. AlphaZero is an AI agent that can, in principle, be transplanted into any other domain (see the sketch after this list). Demis Hassabis, DeepMind’s founder, believes that AlphaZero-related algorithms can open new avenues of innovation, such as drug discovery and the design of cheaper, more durable materials.
  • But smart AI systems are also a double-edged sword, because they could be harnessed by dark forces. Terrorists or predatory governments may ask AI systems the best way to hack the U.S. grid, or to stage an EMP attack without detonating an atom bomb in the atmosphere. Given the ability of algorithms to brilliantly strategize in Chess and Go, could it be easier for an AI system to hack a nation’s nuclear arsenal? A timely question considering last weekend’s false missile alert in Hawaii.
  • As change accelerates under machine learning, it may overload human nervous systems. In Homo Deus, Yuval Harari warns that “dataism” may prevail. Dataists believe that humans can no longer cope with the immense flows of data. Hence, humans cannot distill data into information, let alone into knowledge or wisdom.
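
The “transplantable agent” point above is worth pausing on. A self-play learner is general precisely because it touches a domain only through a narrow interface: rules in, outcomes out. The sketch below is a hypothetical interface of our own devising, not DeepMind’s published API, but it shows why swapping Chess for Shogi or Go means supplying a new implementation rather than a new agent.

import random
from abc import ABC, abstractmethod

class Game(ABC):
    """Everything a tabula rasa learner needs to know about a domain."""

    @abstractmethod
    def initial_state(self):
        """Return the starting position."""

    @abstractmethod
    def legal_moves(self, state):
        """Return the list of moves available in `state`."""

    @abstractmethod
    def apply(self, state, move):
        """Return the state reached by playing `move` in `state`."""

    @abstractmethod
    def outcome(self, state):
        """Return +1, -1 or 0 once the game is over, None otherwise."""

def random_rollout(game):
    # A trivial stand-in "agent" that runs unchanged on any Game
    # implementation; a real learner would plug in at exactly this seam.
    state = game.initial_state()
    while (result := game.outcome(state)) is None:
        state = game.apply(state, random.choice(game.legal_moves(state)))
    return result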

Dataism is a potential new “cultural religion” that could engulf society as AI becomes pervasive. It combines the premise that organisms are biochemical algorithms with computer scientists’ ability to engineer increasingly sophisticated electronic algorithms. In essence, dataism argues that the same mathematical laws apply to both biochemical and electronic algorithms, thereby collapsing the barrier between animals [or humans] and machines. “Beethoven’s Fifth Symphony, a stock-exchange bubble and the flu virus are just three patterns of data flow that can be analyzed using the same basic concepts and tools,” notes Harari.

In dataism, human experiences are not sacred and Homo sapiens is not the peak of creation or a precursor to a future “Homo deus” — as humans seek immortality, happiness and divinity by upgrading themselves into the equivalent of gods via technology. Instead, humans are simply tools for creating the Internet-of-All-Things, which may ultimately spread beyond Earth to occupy the entire universe. Initially, notes Harari, dataism may accelerate the humanist pursuit of health, happiness and power. However, “once authority shifts from humans to algorithms, humanist projects may become irrelevant.”

“We are striving to engineer the Internet-of-All-Things in the hope that it will make us healthy, happy and powerful. Yet once the Internet-of-All-Things is up and running, humans might be reduced from engineers to chips, then to data, and eventually we might dissolve within the torrent of data like a clump of earth within a gushing river. Dataism thereby threatens to do to Homo sapiens what Homo sapiens has done to all other animals.”

In Life 3.0, Tegmark methodically explores a dozen scenarios of how AI superintelligence may evolve. While many outcomes show boundless possibility, others are dark. The scenarios include totalitarianism, cyborgs, libertarian utopias, benevolent dictators, protector gods, zookeepers, an Orwellian “1984,” and self-destruction, among others.

In the book’s prelude, Tegmark describes “The Tale of the Omega Team,” in which a secret group within a big tech company develops a super AI — nicknamed Prometheus. The Omegas focus on making Prometheus extraordinary at programming AI systems, then use it to secretly take over the global economy through shell companies. To maintain control of the AI, the Omega Team keeps Prometheus in physical confinement, with no internet connection. Their motivating force is the “intelligence explosion” argument made by British mathematician Irving Good in 1965:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus, the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
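
To see why Good’s argument implies an explosion rather than steady progress, consider a toy model of our own devising (not Good’s formalism): suppose each generation’s ability to improve the next is proportional to its own capability. Improvement then compounds on itself, and growth turns super-exponential.

# Toy arithmetic of recursive self-improvement (our illustration): a
# machine whose design skill scales with its own capability builds an
# even more capable successor, and the gains compound on themselves.
capability = 1.0   # the first ultraintelligent machine, arbitrary units
GAIN = 0.1         # hypothetical design-skill coefficient
for generation in range(1, 21):
    capability *= 1 + GAIN * capability   # better machines design better machines
    print(f"generation {generation:2d}: capability {capability:.3g}")

The numbers themselves are meaningless; the point is the shape of the curve, nearly flat for many generations and then vertical. That is the dynamic the Omega Team story dramatizes.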

The tale of Prometheus eerily reminds us of DeepMind’s breakthroughs AlphaGo Zero and AlphaZero, given their rapid rates of improvement, as well as AutoML, Google’s new system that designs machine-learning models that outperform those built by human engineers (see WILTW October 26, 2017).

Yuval Harari makes the case that just as mass industrialization created the working class, AI will create a new un-working class, which he and some tech titans call “the useless class.” Harari writes: “The most important question in 21st-century economics may well be: What should be done with all the superfluous people, once we have highly intelligent non-conscious algorithms that can do almost everything better than humans?”

As algorithms take more human jobs, wealth and power may concentrate among the tiny elite that owns the algorithms. However, it is also possible that the algorithms may own themselves. We first explored software-based decentralized autonomous agents in WILTW October 23, 2014; in time, such agents could acquire far-reaching capabilities. As Harari notes, our legal system already recognizes intersubjective entities like corporations and nations as “legal persons.”

“Though Toyota or Argentina has neither a body nor a mind, they are subject to international laws, they can own land and money, and they can sue and be sued in court. We might soon grant similar status to algorithms. An algorithm could then own a transportation empire or a venture-capital fund without having to obey the wishes of any human master. Before dismissing the idea, remember that most of our planet is already legally owned by non-human intersubjective entities, namely nations and corporations. Indeed, 5,000 years ago much of Sumer was owned by imaginary gods such as Enki and Inanna. If gods can possess land and employ people, why not algorithms?”

Many challenges remain before AI becomes superintelligent. One major potential obstacle to widespread AI adoption is teaching algorithms to explain their decision-making to humans. Scientists simply do not understand how deep learning systems arrive at some of their decisions, and DARPA has launched an international effort to create “explainable AI.” DeepMind’s Mustafa Suleyman believes the study of the ethics, safety and societal impact of AI is poised to become a major issue in the coming year. We hope he is right. We hope it is not too late.

This article was originally published in “What I Learned This Week” on January 18, 2018. To subscribe to our weekly newsletter, visit 13D.com or find us on Twitter @WhatILearnedTW.

--


Navigating complexity in a rapidly-changing world. For more from What I Learned This Week, go to: http://www.13d.com/