The History of Artificial Intelligence

by Rockwell Anyoha

Can Machines Think?


In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the "heartless" Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (AI) culturally assimilated in their minds. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason to solve problems and make decisions, so why couldn't machines do the same? This was the logical framework of his 1950 paper, "Computing Machinery and Intelligence," in which he discussed how to build intelligent machines and how to test their intelligence.

Making the Pursuit Possible


Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949, computers lacked a key prerequisite for intelligence: they couldn't store commands, only execute them. In other words, computers could be told what to do but couldn't remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept, as well as advocacy from high-profile people, was needed to persuade funding sources that machine intelligence was worth pursuing.

The Conference that Started it All

Five years later, the proof of concept arrived in the form of Allen Newell, Cliff Shaw, and Herbert Simon's Logic Theorist. The Logic Theorist was a program designed to mimic the problem-solving skills of a human and was funded by the RAND (Research and Development) Corporation. It is considered by many to be the first artificial intelligence program, and it was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky in 1956. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion on artificial intelligence, the term he coined at the very event. Sadly, the conference fell short of McCarthy's expectations: people came and went as they pleased, and they failed to agree on standard methods for the field. Despite this, everyone wholeheartedly aligned with the sentiment that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.

Roller Coaster of Success and Setbacks


From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon's General Problem Solver and Joseph Weizenbaum's ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI), convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in machines that could transcribe and translate spoken language, as well as in high-throughput data processing. Optimism was high and expectations were even higher. In 1970, Marvin Minsky told Life Magazine, "from three to eight years we will have a machine with the general intelligence of an average human being." However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn't store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, stated that "computers were still millions of times too weak to exhibit intelligence." As patience dwindled, so did the funding, and research slowed to a crawl for the next ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized "deep learning" techniques that allowed computers to learn from experience. Meanwhile, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program; a minimal sketch of the idea follows below. Expert systems were widely used in industry.
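To make the mechanism concrete, here is a minimal sketch of a rule-based expert system, assuming the expert's knowledge is captured as simple if-then rules; the rules and the diagnose helper below are hypothetical illustrations, not any historical system.

```python
# Minimal sketch of a rule-based expert system (illustrative only).
# Knowledge elicited from a human expert is stored as if-then rules;
# inference checks which rules' conditions are satisfied by the facts.

RULES = [
    # (set of conditions that must all hold, conclusion to draw)
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"sneezing", "itchy eyes"}, "possible allergy"),
]

def diagnose(facts):
    """Return every conclusion whose conditions are a subset of the facts."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= facts]

print(diagnose({"fever", "cough", "headache"}))  # -> ['possible flu']
```

Real systems of the era chained hundreds or even thousands of such rules, often with certainty factors attached, but the simple match above captures the core idea of replaying an expert's recorded judgments.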
The Japanese government heavily funded expert systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, it invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding for the FGCP ceased, and AI fell out of the limelight.

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM's Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer, and it served as a huge step towards an artificially intelligent decision-making program. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward, this time in the direction of spoken-language interpretation. It seemed that there wasn't a problem machines couldn't handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.

Time Heals all Wounds


We haven't gotten any smarter about how we code artificial intelligence, so what changed? It turns out that the fundamental limit of computer storage that was holding us back 30 years ago is no longer a problem. Moore's Law, which estimates that the memory and speed of computers double roughly every two years, had finally caught up with, and in many cases surpassed, our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google's AlphaGo was able to defeat Chinese Go champion Ke Jie in 2017. It offers a partial explanation for the roller coaster of AI research: we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore's Law to catch up again.
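As a back-of-the-envelope sketch (assuming, purely for illustration, a steady doubling of usable compute every two years), compounding shows how a gap of "millions of times too weak" can close within a few decades:

```python
# Illustrative only: compound growth of compute under Moore's Law.
# The two-year doubling period is an assumed round number, not a measurement.

def compute_growth(years, doubling_period=2.0):
    """Multiplicative increase in compute after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

# Roughly 40 years separate the first AI winter (mid-1970s) from the
# mid-2010s: about 2**20, a million-fold increase, the same order as
# Moravec's "millions of times too weak."
print(f"{compute_growth(40):,.0f}x")  # 1,048,576x
```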

Artificial Intelligence is Everywhere


We now live in the age of "big data," an age in which we have the capacity to collect huge volumes of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries, such as technology, banking, marketing, and entertainment. We've seen that even if algorithms don't improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore's Law is slowing down a tad, but the increase in data certainly hasn't lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential ways through the ceiling of Moore's Law.

The Future

So what is in store for the future? In the immediate future, AI language processing looks like the next big thing. In fact, it's already underway. I can't remember the last time I called a company and spoke directly with a human; these days, machines are even calling me! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages translated in real time. We can also expect to see driverless cars on the road in the next twenty years (and that is conservative). In the long term, the goal is general intelligence: a machine that surpasses human cognitive abilities across all tasks. This is along the lines of the sentient robot we are used to seeing in movies. To me, it seems inconceivable that this will be accomplished in the next 50 years. Even if the capability is there, the ethical questions would serve as a strong barrier against fruition. When that time comes (but better even before it comes), we will need to have a serious conversation about machine policy and ethics (ironically, both fundamentally human subjects), but for now we'll allow AI to steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. His current project uses machine learning to model animal behavior. In his free time, Rockwell enjoys playing soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence.

For more information:

Brief Timeline of AI

https://www.livescience.com/47544-history-of-a-i-artificial-intelligence-infographic.html

Complete Historical Overview

http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

Dartmouth Summer Research Project on Artificial Intelligence

https://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802

Future of AI

https://www.technologyreview.com/s/602830/the-future-of-artificial-intelligence-and-cybernetics/

Discussion on Future Ethical Challenges Facing AI

http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence

Detailed Review of Ethics of AI

https://intelligence.org/files/EthicsofAI.pdf
