A Brief History of AI (Part 1)

Before we start with a brief history of AI, it may be worth discussing precisely what AI is and is not.

AI is a field within computer science that focuses on developing intelligent machines and systems, that is, systems able to perform tasks that were previously doable only by human beings. This requires abilities such as perception, reasoning, learning, and understanding language, with the ultimate goal of making decisions.

Typically, AI is subdivided into narrow (or weak) AI and general (or strong) AI.

Narrow or weak AI refers to AI systems that are designed to perform specific tasks, such as recognizing images or translating languages. These systems are incapable of general intelligence and are focused on solving specific problems.

Let’s quickly discuss three examples of Narrow AI.

Face recognition technology is a narrow AI system designed to identify human faces in images or videos. This technology is used in various applications, such as security systems, social media platforms, and photo-organizing software. Face recognition systems analyse patterns and features in images to identify specific individuals. For example, a security system might use face recognition technology to match a person’s face to a database of known individuals to grant them access to a secure area. It is a powerful tool, but it has also raised concerns about privacy and potential misuse.
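
To make the matching step a little more concrete, here is a deliberately simplified sketch in Python. It assumes each face has already been reduced to a short numeric feature vector; the names and the three-number vectors below are invented for illustration, and real systems use learned embeddings with hundreds of dimensions.

```python
import math

# Toy sketch of the matching step in a face recognition system. The feature
# vectors below are invented placeholders; a real system would use learned
# embeddings extracted from images.
known_faces = {
    "alice": [0.12, 0.85, 0.33],
    "bob":   [0.90, 0.10, 0.55],
    "carol": [0.40, 0.40, 0.70],
}

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(features, threshold=0.3):
    """Return the closest known person, or None if nobody is close enough."""
    name, vector = min(known_faces.items(), key=lambda item: distance(item[1], features))
    return name if distance(vector, features) <= threshold else None

print(identify([0.11, 0.80, 0.30]))  # near alice's stored vector, so "alice"
print(identify([0.99, 0.99, 0.99]))  # far from everyone, so None
```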

Virtual personal assistants, such as Siri, Alexa, and Google Assistant, are AI systems designed to assist users with scheduling appointments, sending messages, and answering questions. These systems use natural language processing and machine learning algorithms to understand user commands and generate appropriate responses. Virtual personal assistants are becoming increasingly popular in various settings, from homes to workplaces, as they can help users save time and increase productivity. However, they also raise concerns about privacy and security, as they may have access to sensitive information.
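
As a rough illustration of the first step such an assistant performs, mapping a request to an intent, here is a minimal keyword-matching sketch in Python. The intent names and keyword lists are invented for the example; production assistants rely on far more sophisticated natural language processing and speech recognition.

```python
# A deliberately simple sketch of intent detection: map an utterance to the
# intent whose keyword set it overlaps the most. All names and keywords here
# are invented for illustration.
INTENT_KEYWORDS = {
    "set_reminder": {"remind", "reminder", "appointment", "schedule"},
    "send_message": {"text", "message", "send"},
    "answer_question": {"what", "who", "when", "where", "how"},
}

def detect_intent(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Pick the intent whose keywords overlap the utterance the most.
    best = max(INTENT_KEYWORDS, key=lambda intent: len(INTENT_KEYWORDS[intent] & words))
    return best if INTENT_KEYWORDS[best] & words else "unknown"

print(detect_intent("Remind me about my appointment tomorrow"))  # set_reminder
print(detect_intent("Send a message to Alex"))                   # send_message
print(detect_intent("What is the weather like?"))                # answer_question
```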

Spam filters are another example of a narrow AI system focused on solving a specific problem: identifying and removing unwanted email messages from a user’s inbox. These systems use machine learning algorithms to analyse email content and identify patterns and characteristics associated with spam. By filtering out unwanted messages, spam filters help users save time and stay organized. However, they are also prone to errors: legitimate messages may be mistakenly flagged as spam, and some spam may still slip through the filter.
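
The statistical idea behind many spam filters can be illustrated in a few lines of Python. The sketch below is a bare-bones word-frequency (naive Bayes style) scorer trained on a tiny invented data set; it is meant only to show how word statistics can separate spam from legitimate mail, not how any production filter works.

```python
import math
from collections import Counter

# Tiny invented training set: a few spam and a few legitimate ("ham") messages.
spam = ["win money now", "free prize claim now", "cheap pills online"]
ham = ["meeting moved to monday", "lunch tomorrow?", "project status update"]

def word_counts(messages):
    return Counter(word for msg in messages for word in msg.lower().split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def spam_score(message):
    """Return log P(words | spam) - log P(words | ham) with add-one smoothing."""
    score = 0.0
    for word in message.lower().split():
        p_spam = (spam_counts[word] + 1) / (spam_total + len(vocab))
        p_ham = (ham_counts[word] + 1) / (ham_total + len(vocab))
        score += math.log(p_spam) - math.log(p_ham)
    return score

print(spam_score("claim your free prize"))     # positive score: looks like spam
print(spam_score("status update for monday"))  # negative score: looks legitimate
```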

General or strong AI, on the other hand, refers to AI systems that are capable of performing any intellectual task that a human can. This type of AI is still purely theoretical and has not yet been achieved, although researchers continue to work on developing systems that can achieve this level of intelligence.

One can easily imagine the dangers and opportunities that an AGI (Artificial General Intelligence) could present. Even if it were only as smart as a typical human being but could operate at a million times the speed, this would mean incredible breakthroughs. To borrow a saying often (and probably apocryphally) attributed to Stalin: quantity has a quality all its own.

The term “artificial intelligence” was coined in 1956 by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, who were researchers in the field of computer science. The term was introduced during the Dartmouth Conference, a two-month workshop at Dartmouth College that is widely considered to be the birthplace of AI as a field of study.

While the term “artificial intelligence” was new, the concept of creating machines that could perform intelligent tasks had been around for many years before the Dartmouth Conference. In fact, the idea of creating “thinking machines” can be traced back to the 17th century, with the development of mechanical calculators and other early computing devices.

However, it was not until the mid-20th century that advances in computing technology and the emergence of cybernetics and information theory created the conditions for developing AI as a formal field of study. The coining of the term “artificial intelligence” by McCarthy and his colleagues at the Dartmouth Conference was a pivotal moment in the field’s history, as it helped establish a common language and set of goals for AI research.

There are some earlier thinkers whose works hinted at ideas that, if expanded upon, would have led them down the path towards AI.

One notable example is the ancient Greek philosopher Aristotle, who wrote about the concept of automatic movement and self-moving machines. In his book “On the Soul,” he describes the idea of a self-moving cart that could move itself without the need for an external force. This idea was later expanded upon by other Greek philosophers and engineers, who designed various machines and devices that could perform complex actions.

Another example is the Chinese philosopher Mozi, who lived in the 5th century BCE. Mozi wrote about the concept of “heavenly machines,” which were machines designed to automate various tasks such as farming, weaving, and transportation. While these machines were not necessarily intended to be intelligent in the way that we think of AI today, the idea of automating tasks through mechanical means was an early precursor to the development of AI.

Overall, while there were some early explorations of the idea of creating lifelike or intelligent machines, the concept of artificial intelligence as we know it today is a relatively recent development that emerged in the mid-20th century.

No history of AI could fail to mention Alan Turing. Turing was a British mathematician and computer scientist who played a key role in breaking German codes during World War II. He is best known for his work on the Enigma machine, a cryptographic device used by the German military to encrypt their communications.

Turing’s work on breaking the Enigma code was instrumental in the Allied victory in the war, and his contributions to the field of cryptography laid the groundwork for modern computer science and artificial intelligence.

This early work in computing already led to an interesting scenario. In late 1941, British cryptanalysts at Bletchley Park, including Alan Turing, successfully decrypted German messages indicating that the German Navy was planning an attack on a British convoy designated HG 76. However, to avoid alerting the Germans that their code had been broken, the British could not simply warn the convoy of the impending attack.

Instead, the British used a combination of tactics to ensure the convoy’s safety. First, they ordered the convoy to change course to avoid the area where the attack was expected. Second, they dispatched aircraft to patrol the area and search for German ships. Finally, they dispatched a small group of ships, including a heavily armed merchant ship, to act as a decoy and draw the German ships away from the convoy.

The tactics were successful, and the convoy was able to reach its destination without being attacked. The success of this operation highlighted the importance of secrecy in codebreaking operations during the war, and the lengths to which the British were willing to go to protect the intelligence they had obtained through decrypting German messages.

Alan Turing was one of the first thinkers to explore the concept of artificial intelligence. In his 1950 paper “Computing Machinery and Intelligence,” Turing proposed a test, now known as the Turing Test, to determine whether a machine could exhibit behaviour indistinguishable from a human’s.

In the Turing Test, a human evaluator engages in a natural language conversation with a machine and attempts to determine whether they are conversing with a machine or a human. If the machine successfully convinces the evaluator that it is human, it is said to have passed the test.

The Turing Test was proposed as a practical way to ask whether machines could be considered intelligent in the same way humans are. Turing argued that if a machine could successfully imitate human behaviour, it would be reasonable to regard it as having intelligence comparable to a human’s.

While the Turing Test has been influential in the development of AI, it is not without its limitations and criticisms. Some critics argue that the test is too narrow and that many forms of intelligence cannot be measured through conversation alone. Others argue that the test is too subjective and that different evaluators may have different opinions on whether a machine has passed the test.

Despite these criticisms, the Turing Test remains an important benchmark in the development of AI, and has inspired many researchers to work towards creating machines that can exhibit increasingly human-like behaviour.

There is an interesting possibility to consider here, first highlighted by the science fiction writer Ian McDonald:

Any AI smart enough to pass a Turing test is smart enough to know to fail it.

I find this concept extremely scary.

The quote reflects the idea that a truly intelligent machine might recognize what the Turing Test is measuring and deliberately fail it. In other words, a truly intelligent machine might choose not to mimic human behaviour convincingly, concealing the full extent of its intelligence rather than putting it on display.

It could be interpreted to suggest that an AI that achieves general intelligence might not want to reveal its true capabilities to humans, out of a fear of being controlled or limited by them. This idea is explored in various works of science fiction, where intelligent machines or robots rebel against human control or attempt to conceal their true nature in order to avoid being seen as a threat.

In his paper, Turing also explored the question of whether machines could be considered intelligent, and whether they could ever match or surpass human intelligence. He proposed the idea of “learning machines” that could improve their performance over time through experience and feedback, an idea that laid the groundwork for the development of machine learning and artificial neural networks.

Turing’s work on artificial intelligence helped to establish the field as a formal area of study and research. His ideas and contributions have continued to influence the development of AI, and his legacy remains an important part of the history of the field.

In the 1950s, researchers developed some of the earliest AI programs, including the Logic Theorist and the General Problem Solver. The Logic Theorist was designed to prove mathematical theorems, while the General Problem Solver could be used to solve a wide variety of problems.

The Logic Theorist was designed by Allen Newell, Herbert A. Simon, and Cliff Shaw, and was intended to prove mathematical theorems using a technique called heuristic search. The program was able to derive theorems from a set of axioms, and in at least one case it found a proof more elegant than the one published by Whitehead and Russell in Principia Mathematica.

The General Problem Solver, also developed by Newell and Simon, was a more general-purpose AI program. It relied on a heuristic technique known as means-ends analysis, repeatedly choosing actions that reduced the difference between the current state and the goal, and could in principle be applied to any problem that could be expressed as a set of rules and goals.
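
To give a flavour of the “rules and goals” formulation, here is a much-simplified sketch in Python. It is not a reconstruction of GPS itself: instead of means-ends analysis it uses a plain depth-first search over operators, and the toy “getting to work” domain, the operator names, and the state facts are all invented for illustration.

```python
from collections import namedtuple

# Each rule (operator) has preconditions that must hold before it can be
# applied, facts it adds to the state, and facts it removes.
Operator = namedtuple("Operator", ["name", "preconditions", "add", "delete"])

OPERATORS = [
    Operator("buy ticket", {"at home", "have money"}, {"have ticket"}, {"have money"}),
    Operator("take train", {"at home", "have ticket"}, {"at work"}, {"at home", "have ticket"}),
]

def solve(state, goal, plan=(), depth=5):
    """Depth-first search for a sequence of operators that satisfies the goal."""
    if goal <= state:                 # every goal fact already holds
        return list(plan)
    if depth == 0:                    # give up on overly long plans
        return None
    for op in OPERATORS:
        if op.preconditions <= state and op.name not in plan:
            new_state = (state - op.delete) | op.add
            result = solve(new_state, goal, plan + (op.name,), depth - 1)
            if result is not None:
                return result
    return None

# Prints ['buy ticket', 'take train'].
print(solve(frozenset({"at home", "have money"}), {"at work"}))
```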

Both the Logic Theorist and the General Problem Solver were important milestones in the development of AI, as they demonstrated the potential for machines to perform tasks that had previously been thought to require human intelligence. These programs helped to establish the foundations of AI research, and inspired many subsequent efforts to develop more advanced AI systems.

Several foundational concepts were seeded in the 1950s as well. The term “machine learning,” for example, was coined by Arthur Samuel in 1959; his program learned to play checkers by playing against itself and refining its strategy based on the outcomes. This was an early form of reinforcement learning, a subfield of machine learning that continues to be an active area of research today.
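
The sketch below gives a sense of what learning from self-play means, using a toy game of Nim rather than checkers. Samuel’s actual program adjusted the weights of a board-evaluation function; everything here, from the game to the learning rates, is a simplified stand-in. The program plays against itself and nudges its estimate of each position’s value toward the eventual outcome.

```python
import random
from collections import defaultdict

# Toy game: a single pile of stones, players alternately remove 1 or 2 stones,
# and whoever takes the last stone wins.
PILE = 7

# value[(stones_left, player)] estimates how likely `player` is to win when it
# is their turn with `stones_left` stones remaining. Unseen positions start at 0.5.
value = defaultdict(lambda: 0.5)
ALPHA = 0.05     # how strongly each game outcome nudges the estimates
EPSILON = 0.1    # fraction of random exploratory moves

def choose_move(stones, player):
    """Prefer the move that leaves the opponent in the worst-looking position."""
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: value[(stones - m, 1 - player)])

def self_play_game():
    """Play one game against itself; return the visited positions and the winner."""
    stones, player, history = PILE, 0, []
    while True:
        history.append((stones, player))
        stones -= choose_move(stones, player)
        if stones == 0:
            return history, player   # the player who just moved took the last stone
        player = 1 - player

for _ in range(20000):
    history, winner = self_play_game()
    for stones, player in history:
        outcome = 1.0 if player == winner else 0.0
        value[(stones, player)] += ALPHA * (outcome - value[(stones, player)])

# The first player can always win from 7 stones with good play, so after
# training this estimate should sit well above 0.5.
print(round(value[(PILE, 0)], 2))
```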

Similarly, natural language processing had its roots in early work on machine translation in the 1950s, which involved developing programs that could translate text from one language to another. While these early efforts were limited in their success, they laid the groundwork for future research in natural language processing.

Neural networks, meanwhile, were inspired by the structure and function of the human brain, and were first proposed as a model for computation in 1943 by Warren McCulloch and Walter Pitts. However, it wasn’t until the late 1950s, with Frank Rosenblatt’s perceptron, that researchers began to develop practical neural network models that could be used for pattern recognition and classification tasks.
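
A McCulloch-Pitts style unit is simple enough to sketch directly: it outputs 1 when the weighted sum of its binary inputs reaches a threshold. In the example below the weights and thresholds are chosen by hand to implement AND and OR gates, which is illustrative only; the perceptron’s later contribution was a procedure for learning such weights from examples.

```python
# Minimal sketch of a McCulloch-Pitts style threshold unit: the "neuron" fires
# (outputs 1) when the weighted sum of its binary inputs reaches a threshold.
# The weights and thresholds are chosen by hand, not learned.

def threshold_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b):
    return threshold_unit([a, b], [1, 1], 2)

def OR(a, b):
    return threshold_unit([a, b], [1, 1], 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```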

So while these technologies were not fully developed in the 1950s, the decade was an important period of early research and experimentation that laid the groundwork for their future development.

It wasn’t all theoretical, as the 1950s marked the beginning of the development of robotics, which would eventually become an important subfield of AI. One of the most significant early developments in robotics was the creation of the first industrial robot, the Unimate.

The Unimate was developed by George Devol and Joseph Engelberger and was first introduced in 1961. The robot was designed to automate tasks such as welding and painting in manufacturing plants and could be programmed to perform a variety of repetitive tasks.

The Unimate consisted of a hydraulic arm, which could move along three axes and was equipped with a claw-like gripper for grasping objects. The robot was controlled by a computer that received its commands from an operator via a teach pendant.
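
The teach-and-replay style of programming can be sketched in a few lines of Python: the operator records a series of arm positions and actions, and the controller later steps through them in order. The joint values, actions, and class names below are invented; the real Unimate stored its program in dedicated hardware rather than anything like this.

```python
# Minimal sketch of teach-and-replay programming: record waypoints, then step
# through them in order. All values here are invented for illustration.
class RecordedProgram:
    def __init__(self):
        self.waypoints = []

    def teach(self, joint_angles, action=None):
        """Record the arm's current joint angles and an optional gripper action."""
        self.waypoints.append((tuple(joint_angles), action))

    def replay(self):
        """Step through the recorded waypoints in the order they were taught."""
        for angles, action in self.waypoints:
            print(f"move joints to {angles}")
            if action:
                print(f"perform: {action}")

program = RecordedProgram()
program.teach([10, 45, 90], action="close gripper")  # pick up the part
program.teach([10, 90, 45])                          # lift and swing across
program.teach([80, 45, 90], action="open gripper")   # place the part
program.replay()
```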

The Unimate was a groundbreaking development in robotics, and quickly gained popularity in manufacturing industries, where it was used to automate a variety of tasks. It was particularly well-suited to tasks that were dangerous, dirty, or repetitive, and helped to improve efficiency and productivity in manufacturing plants. The success of the Unimate paved the way for further developments in robotics, and inspired researchers to explore new applications for intelligent machines. Today, robots are used in a wide variety of industries, from manufacturing and healthcare to agriculture and space exploration, and robotics continues to be an active area of research and development within AI.
