History of AI Winters

Reading time: 5 mins

Artificial intelligence (AI) dominates the media today, but the concept was first proposed almost 70 years ago. What happened in the AI research community over this period, and why did the breakthroughs take so long to occur? Milton Lim looks at the booms and busts of the AI industry over the years.

 

Figure 1: The boom and bust cycle of AI research

 

“AI will be either the best, or the worst thing, ever to happen to humanity.”  –  Stephen Hawking

The beginning

In 1950, Alan Turing posed the question “Can machines think?”[i] As the concept of ‘thinking’ is difficult to define, he proposed a more tractable substitute: could a machine imitate a human being during a conversation with another human? Known as the Turing Test, this involves a human judge posing questions to an unknown party in another room, which may be either a human or a machine, and deciding which it is. The machine passes the test if it can convince the judge that it is human. The Turing Test was significant because it provided the first solid benchmark for the question “Can machines do what we humans (as thinking entities) do?”

“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

The field of AI research was officially born in 1956 at the Dartmouth Conference, which introduced the term “artificial intelligence” to unify the various research efforts in cybernetics, automata theory, and complex information processing aimed at giving machines the ability to “think”. The conference proposal, written by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, conjectured that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” This provided a clear pragmatic direction for subsequent AI research efforts.

Figure 2: The Mark I Perceptron machine, the first hardware implementation of the perceptron, which recognised images captured by a 20 x 20 array of photocells.

 

In 1958, Frank Rosenblatt created the perceptron learning algorithm for the simplest type of neural network: a single layer of neurons connecting inputs directly to outputs. The New York Times sensationally reported the perceptron to be “the embryo of an electronic computer that the Navy expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” However, Minsky and Papert proved that the single-layer perceptron could only recognise linearly separable patterns, not more complex ones such as the XOR (exclusive OR) function.[ii] The field of neural networks stagnated for a decade, until it was realised that multi-layer perceptrons, given sufficient computing power and data, were a very effective way of modelling more complex non-linear functions. This is the principle behind current research in deep learning with many layers of neurons, such as those used by Google DeepMind’s AlphaGo to defeat Lee Sedol at the game of Go in 2016.
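The perceptron’s limitation is easy to demonstrate. The following sketch (an illustration for this article, not historical code) trains a single-layer perceptron with Rosenblatt’s learning rule: it masters the linearly separable AND function, but no amount of training lets it learn XOR.

```python
# A single-layer perceptron: one neuron with a step activation, trained
# with Rosenblatt's learning rule. Integer weights keep the arithmetic exact.

def train_perceptron(samples, epochs=100):
    """samples: list of ((x1, x2), target) pairs with targets in {0, 1}."""
    w1 = w2 = b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - y          # perceptron update rule
            w1 += err * x1
            w2 += err * x2
            b += err
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # linearly separable
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # not separable

and_errors = sum(train_perceptron(AND)(*x) != t for x, t in AND)  # 0: learned
xor_errors = sum(train_perceptron(XOR)(*x) != t for x, t in XOR)  # always > 0
```

Because no straight line separates XOR’s positive and negative examples, the weights simply cycle forever; at least one hidden layer is needed, which is exactly the gap that multi-layer perceptrons later closed.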

Past AI Winters

AI research has endured a bumpy journey, surviving two major funding droughts, known as “AI winters”, in 1974–1980 and 1987–1993. Although government funders and venture capitalists twice lost faith in the value of AI, researchers continued to make advances despite the criticism.

In 1973, the UK Science Research Council commissioned the Lighthill Report[iii], which criticised the utter failure of AI to achieve its “grandiose objectives” and noted that “in no part of the field have the discoveries made so far produced the major impact that was then promised.” Around the same time, Richard Karp famously proved 21 difficult problems in computer science to be NP-complete[iv], underscoring the importance of the still-unsolved P vs NP problem, which now carries a US$1 million prize.[v] This highlighted the problem of “combinatorial explosion”, where the computing time required to solve a problem grows exponentially with the input size. It was therefore impossible to scale up any of the AI solutions to toy problems into useful real-life applications with the available hardware. The curse of dimensionality had struck a blow against AI research.
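Combinatorial explosion is easy to make concrete. The sketch below (an illustration, with a hypothetical machine speed of one billion checks per second) counts the tours a brute-force travelling-salesman solver must examine: with n cities there are (n-1)!/2 distinct routes.

```python
import math

def tours(n):
    """Number of distinct tours through n cities (fixed start, two directions)."""
    return math.factorial(n - 1) // 2

# Assumed speed: 1e9 tours checked per second (hypothetical machine)
for n in (5, 10, 15, 20, 25):
    seconds = tours(n) / 1e9
    print(f"{n:>2} cities: {tours(n):>26,} tours (~{seconds:.2e} s)")
```

At this assumed speed, 20 cities already require roughly two years of computation, and 25 cities around ten million years; this is why toy demonstrations of the era could not be scaled into real applications.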

The hard problems are easy, and the easy problems are hard.

Another fundamental limitation, known as Moravec’s Paradox, states that “It is comparatively easy to make computers exhibit adult level performance on intelligence tests, playing checkers or calculating pi to a billion digits, but difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility… The mental abilities of a child that we take for granted – recognizing a face, lifting a pencil, or walking across a room – in fact solve some of the hardest engineering problems ever conceived… Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it.”[vi] In short, the hard problems are easy, and the easy problems are hard. This explains why research into computer vision and robotics had made so little progress by the 1970s.

Over the next decade, business investment in the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988. Expert systems such as XCON became very popular: specialised programs that simulated the decision-making ability of human experts on narrow problems such as diagnosing infectious diseases or identifying chemical compounds. They typically ran on expensive LISP machines, dedicated hardware built by companies such as Symbolics. Meanwhile, desktop computers from Apple and IBM had been steadily gaining speed and power (as per Moore’s Law) until they overtook the LISP machines. The expert systems themselves also proved too expensive to maintain, as they were difficult to update, could not learn, and were brittle rather than robust in handling unusual inputs. As consumers no longer needed an expensive machine specialised for running LISP, the market for dedicated AI hardware collapsed in 1987, and an entire industry worth half a billion dollars was wiped out in a single year.

The Future

"AI is the new electricity.” – Andrew Ng

Andrew Ng, Professor of AI at Stanford University, has optimistically declared that “AI is the new electricity”. His personal view is that “Hardware advances will provide the fuel required to make emerging AI techniques feasible. Multiple hardware vendors have been kind enough to share their road maps and I feel very confident that they are credible, and we will get more computational power and faster networks in the next several years.”[vii]

Perhaps the current AI boom might one day reach the hypothesised “Technological Singularity”, where the exponential increase in computing power results in an artificial superintelligence surpassing all human intelligence combined. This would trigger a runaway chain reaction of self-improvement cycles in which machine intelligence advances itself without any need for human effort. Some argue this would produce an intelligence explosion radiating outward from Earth until it saturates the entire universe. Some authors have predicted that this Singularity will be reached as early as 2045.[viii]

This raises some deep philosophical and ethical questions[ix] that have been explored in countless books and movies:

  • Can a machine have emotions?
  • Can a machine be self-aware?
  • Can a machine be original or creative?
  • Can a machine be benevolent or hostile?
  • Can a machine have a soul?

 

These ethical considerations will have to be developed in parallel with the technological capability of emerging AI models. For a considered view of how AI and machine learning will change society, I highly recommend the UK Royal Society 2017 report “Machine Learning: the power and promise of computers that learn by example”.

In the next article in this series, we will examine the “curse of dimensionality” in machine learning and statistics to understand the major obstacles to success for AI technology.

 

[i] Turing, A. (1950), “Computing Machinery and Intelligence”, Mind

[ii] Minsky, M. and Papert, S. (1969), Perceptrons, MIT Press

[iii] Lighthill, J. (1973), “Artificial Intelligence: A General Survey”, in Artificial Intelligence: A Paper Symposium, Science Research Council

[iv] Karp, R. (1972), “Reducibility Among Combinatorial Problems”, in Complexity of Computer Computations

[v] Clay Mathematics Institute, the P vs NP problem: http://www.claymath.org/sites/default/files/pvsnp.pdf

[vi] Wikipedia, “Moravec’s paradox”: https://en.wikipedia.org/wiki/Moravec%27s_paradox

[vii] MIT Technology Review, “AI Winter Isn’t Coming”: https://www.technologyreview.com/s/603062/ai-winter-isnt-coming/

[viii] Kurzweil, R. (2005), The Singularity Is Near: When Humans Transcend Biology, Viking

[ix] Wikipedia, “Philosophy of artificial intelligence”: https://en.m.wikipedia.org/wiki/Philosophy_of_artificial_intelligence

 

CPD Actuaries Institute Members can claim two CPD points for every hour of reading articles on Actuaries Digital.

About the author

Milton Lim

Milton Lim worked as an actuary at Taylor Fry Consulting Actuaries for five years, specialising in general insurance and accident compensation schemes. He is currently studying to broaden his horizons, both internationally and into the future, with the double degree program at HEC Paris MBA and the London School of Economics MSc Analytics / Data Science.
