A Brief History of AI: Exploring The Past, Present & Future

February 21st 2023

The history of AI is a long and fascinating one, with numerous milestones and achievements that have shaped the development of this technology. In this blog post, we’ll provide a timeline of key events in the history of AI, from its early beginnings to its present-day applications.
Artificial intelligence is a rapidly growing field that has gained a great deal of attention in recent years. It has become an essential part of our lives, from virtual assistants and chatbots to autonomous vehicles and medical diagnosis. The history of AI spans several decades, and the field has undergone numerous changes and developments in that time. Its earliest traces date back to antiquity, when people first imagined the possibility of creating intelligent machines, but the field only began to take shape in the 1940s with the invention of the first electronic computers. Since then, AI has gone through several transformations and advancements that have made it one of the most influential technologies of the modern era. From the earliest attempts at AI to complex, self-learning algorithms that can mimic aspects of human intelligence, the field has seen numerous breakthroughs, challenges, and controversies. Keep reading as we explore the past, present, and future of artificial intelligence.

Pre-1900


AI is often associated with modern-day technology, but the idea of creating intelligent machines has been around for centuries. Ancient Greek mythology told of automatons created by Hephaestus, the god of fire and metalworking, that could move and speak like humans. In the 13th century, the Majorcan philosopher Ramon Llull developed a system of mechanical logic based on symbols and diagrams. Later, in the 17th century, the philosopher and mathematician Gottfried Wilhelm Leibniz envisioned a universal language of symbols in which reasoning could be reduced to calculation. This laid the foundation for the development of logic and computation that would eventually lead to the creation of intelligent machines.

Antiquity

Greek mythology featured stories of intelligent robots and artificial beings such as Pandora, along with the idea of “biotechne”, the notion that craft and technology can alter or imitate biological life.

Middle Ages
  • Al-Jazari designed what is often described as the first programmable humanoid automaton: a water-powered boat carrying four mechanical musicians, later recreated and studied by computer scientist Noel Sharkey as an early programmable automaton.
  • Majorcan philosopher-mathematician Ramon Llull developed logical machines to produce all possible knowledge by combining basic, undeniable truths through simple logical operations.
Renaissance
  • Mathematicians, philosophers, and theologians considered whether ‘human’ thought could be mechanized in non-human devices.
  • 15th century: The astronomer Regiomontanus is said to have built an iron eagle automaton that could fly.
  • c. 1495: Leonardo da Vinci designed his “mechanical knight”, an armored automaton that could move its arms and legs like a human using a system of pulleys and cables.


The Age of Reason & Revolution
  • Literature began to play with the idea of intelligent machines. For example, ‘Gulliver’s Travels’ (1726) describes an ‘engine’ that can improve knowledge and skills with the assistance of a non-human mind, and Samuel Butler’s ‘Erewhon’ (1872) entertains the notion of future machines that could possess consciousness.
  • 1830s: The partnership between Charles Babbage and Ada Lovelace produced the design of the Analytical Engine, a general-purpose mechanical computer, for which Lovelace later wrote what is widely considered the first published computer algorithm.
  • 1837: Bernard Bolzano made the first modern attempt to formalise semantics.
  • 1854: George Boole published ‘An Investigation of the Laws of Thought’, establishing Boolean algebra.
  • 1863: Samuel Butler proposed the idea that machines could become conscious and eventually replace humanity.

1900 – 1950


  • 1910–1913: Mathematicians Bertrand Russell and Alfred North Whitehead published “Principia Mathematica”, whose formal logic and theory of types laid foundations for later type checking and type inference algorithms.
  • 1921: Czech playwright Karel Čapek’s science fiction play “Rossum’s Universal Robots” introduced the concept of factory-made artificial people, called robots, and inspired the use of the term in research, art, and discoveries.
  • 1939: John Vincent Atanasoff and Clifford Berry created the Atanasoff-Berry Computer (ABC), one of the first electronic digital computers.
  • 1949: Edmund Berkeley published ‘Giant Brains, or Machines That Think’, which noted how machines can handle large amounts of information at speed.


1950s: Early Roots of AI


The 1950s was a significant decade for artificial intelligence research. With the invention of the first digital computer, scientists and researchers began to explore the possibilities of machine intelligence. However, progress was slow due to limitations in computing power and lack of funding. Despite this, the 1950s laid the foundation for future breakthroughs in AI.


  • 1950: Claude Shannon, known as the father of information theory, published ‘Programming a Computer for Playing Chess’.
  • 1950: Alan Turing, a British mathematician and computer scientist, published ‘Computing Machinery and Intelligence’, proposing the Turing Test: a measure of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
  • 1950: Isaac Asimov published ‘I, Robot’.
  • 1955: John McCarthy and his colleagues coined the term ‘artificial intelligence’ in their proposal for the 1956 Dartmouth workshop.
  • 1955: Allen Newell, Herbert Simon, and Cliff Shaw created the Logic Theorist, often cited as the first artificial intelligence program.
  • 1956: Arthur Samuel demonstrated his checkers-playing program, one of the first self-learning game-playing programs.
  • 1958: John McCarthy developed Lisp, which became the most popular programming language for artificial intelligence research.
  • 1959: Arthur Samuel coined the term ‘machine learning’.

1960s: Accelerated Advancements


In the 1960s, the field of artificial intelligence experienced a significant surge in interest and investment. This decade saw the development of early natural language processing programs and chatbots, the first industrial robots, and a rise in artificial intelligence being depicted in popular culture.


  • 1961: Unimate, the first industrial robot, invented by George Devol, began work on a General Motors assembly line.
  • 1961: James Slagle developed SAINT (Symbolic Automatic INTegrator), a problem-solving program for symbolic integration in calculus.
  • 1963: MIT launched Project MAC, whose artificial intelligence group later became the MIT Artificial Intelligence Laboratory.
  • 1964: Daniel Bobrow created STUDENT, an early AI program written in Lisp that solved algebra word problems and is cited as an early milestone in natural language processing.
  • 1965: Joseph Weizenbaum developed the first chatbot, ELIZA.
  • 1966: Ross Quillian showed that semantic networks could use graphs to model the structure and storage of human knowledge.
  • 1966: Charles Rosen and his team at SRI began developing Shakey the Robot, the first mobile robot able to reason about its own actions.
  • 1968: Stanley Kubrick’s ‘2001: A Space Odyssey’ was released, featuring HAL, a sentient computer.
  • 1968: Terry Winograd created the early natural language computer program, SHRDLU.

1970s: Expert Systems


In the 1970s, the focus in AI shifted from symbolic reasoning to more practical applications, such as expert systems and natural language processing. Expert systems were designed to mimic the decision-making abilities of human experts in specific domains, while natural language processing aimed to develop machines that could understand and respond to human language. However, progress in AI was limited due to computational constraints and a lack of funding, leading to what became known as the ‘AI winter’.


  • 1970: Waseda University in Japan began building WABOT-1, the first full-scale anthropomorphic robot, completing it in 1972.
  • 1973: In a report to the British Science Research Council, James Lighthill stated that AI research had not produced the major impact that had been promised, which led to sharply reduced government support for AI research.
  • 1977: George Lucas’ Star Wars was released, featuring the humanoid robot C-3PO.
  • 1979: The Stanford Cart, a remote-controlled, TV-camera-equipped mobile robot, crossed a chair-filled room without human intervention, becoming an early example of an autonomous vehicle.

1980s: Rise of Machine Learning


In the 1980s, the development of machine learning algorithms marked a major turning point in the history of AI. These algorithms allowed computers to learn and adapt based on data input, rather than being explicitly programmed to perform a specific task. This opened the door for more complex and sophisticated AI systems. However, despite these advancements, the AI hype of the 1980s eventually led to an ‘AI winter’, as the technology failed to live up to some of the lofty expectations set for it.


  • 1981: The Japanese Ministry of International Trade and Industry allocated $850 million to the Fifth Generation Computer project.
  • 1984: Roger Schank and Marvin Minsky warned of a coming ‘AI winter’ at the annual meeting of the AAAI.
  • 1986: A driverless Mercedes-Benz van equipped with cameras and sensors, developed by Ernst Dickmanns’ team, drove autonomously on empty roads.
  • 1988: Rollo Carpenter developed Jabberwacky to ‘simulate natural human chat in an interesting, entertaining and humorous manner’.

1990s: Advancements in Natural Language Processing


The 1990s saw a resurgence of interest in artificial intelligence (AI) after the decreased funding and attention of the late 1980s. This was driven in part by the re-emergence of neural networks and growing computing power. In addition, the World Wide Web became publicly available, leading to the development of search engines that used natural language processing to improve the accuracy of search results. The 1990s also saw the development of intelligent agents and multi-agent systems, which helped to further advance AI research.


  • 1995: Richard Wallace developed the chatbot A.L.I.C.E.
  • 1997: Sepp Hochreiter and Jürgen Schmidhuber developed Long Short-Term Memory (LSTM), a type of recurrent neural network later widely used for handwriting and speech recognition.
  • 1998: Dave Hampton and Caleb Chung invented the first pet toy robot for children, Furby.
  • 1999: Sony introduced a robotic pet dog, AIBO, which could understand and respond to over 100 voice commands

2000s: The New Century


The turn of the century saw continued progress in robotics, speech recognition, and natural language processing, laying the groundwork for the smart assistants and self-driving cars that would arrive in the following decade. Toward the end of the 2000s, Google began work on autonomous vehicles.


  • 2000: The Y2K problem arose from computer bugs related to the formatting and storage of calendar dates beginning on 01/01/2000.
  • 2000: Cynthia Breazeal developed a robot that could recognize and simulate emotions, Kismet.
  • 2000: Honda releases the AI humanoid robot ASIMO
  • 2001: Steven Spielberg released ‘A.I. Artificial Intelligence’
  • 2002: iRobot released the Roomba, an autonomous robot vacuum cleaner.
  • 2004: NASA’s robotic rovers Spirit and Opportunity navigated the surface of Mars without human intervention.
  • 2006: Oren Etzioni, Michele Banko, and Michael Cafarella coined the term ‘machine reading’.
  • 2009: Google started working on a driverless car project.

2010s: Continued Advances in AI


The 2010s saw extensive advances in AI, including the development of deep learning algorithms, which allowed for even more sophisticated AI systems. AI began to play a key role in a variety of industries, including healthcare, finance, and customer service.


  • 2010: Microsoft launched Kinect for the Xbox 360, the first gaming device to track human body movement.
  • 2011: IBM’s Watson, a natural language question-answering computer, beat two former champions on the televised quiz show Jeopardy!
  • 2011: Apple released Siri
  • 2012: Jeff Dean and Andrew Ng trained a large neural network across 16,000 processors to recognize images of cats from unlabeled YouTube videos.
  • 2013: Carnegie Mellon University released NEIL (Never Ending Image Learner), a semantic machine learning system that analyzes relationships between images.
  • 2014: Microsoft released Cortana
  • 2014: Amazon released its virtual assistant Alexa, alongside the Echo smart speaker.
  • 2016: Hanson Robotics activated the humanoid robot Sophia, which in 2017 became the first ‘robot citizen’ when Saudi Arabia granted it citizenship.
  • 2016: Google releases Google Home
  • 2017: Facebook trained two chatbots to communicate with each other in order to learn how to negotiate.
  • 2017: Samsung released its virtual assistant Bixby.
  • 2018: Alibaba’s language-processing AI outscored humans on a Stanford University reading-comprehension test.
  • 2018: Google developed BERT, a transformer-based language model for natural language processing.

2020s & The Future


AI is continuing to grow at an unprecedented rate. The start of this decade has already seen remarkable advances in chatbots, virtual assistants, natural language processing, and machine learning. AI is being used to analyze large amounts of data, identify patterns and trends, and assist with decision-making, and it is poised to transform many aspects of our lives. It’s being applied across a wide range of industries: improving customer service, personalizing experiences, managing content, and assisting with diagnosing and treating diseases. AI has come a long way since its early roots in the 1950s, and that progress will carry on into the future. We’ve already started to see this with the viral emergence of OpenAI’s ChatGPT, and there’s plenty to look forward to throughout the decade.

Ready to see what we can do for you?

In the right hands, artificial intelligence can take human performance to a hitherto unimaginable level. Are you ready for evolution?
