Author(s): Avi Mohan, Ph.D. Originally published on Towards AI.

The Blessing of Dimensionality?! (Part 1)

Fig 1. Shakey the Robot, built at the Stanford Research Institute, circa 1972 (source: Wikimedia Commons)

From Dartmouth to LISP

“We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer,” said the proposal. Little did John McCarthy and his colleagues know, while writing that line, that their short summer project was about to set in motion one of the greatest technological revolutions humanity has ever seen. In many ways, the 1956 Dartmouth Summer Project on Artificial Intelligence [1,2] can be considered the place where AI, the name included, was born. Since its inception, two schools of thought have emerged, each with its own distinct approach to understanding and creating intelligent systems.

Symbolic AI: the first school

Arguably the more “ancient” of the two is called Symbolic AI (the other being Connectionist AI). Very simply put, this approach involves creating a large database of objects (say, animals), defining rules to manipulate these objects (what makes an animal a mammal, a reptile, or an insect?), and using the first two steps to make logical inferences (“Is a cat an insect?”).

More formally, the symbolic approach to AI emphasizes the use of symbols and rules to represent and manipulate knowledge. Unlike connectionist AI, which is inspired by the structure of the human brain, symbolic AI focuses on explicit representations of concepts and relationships. In symbolic AI systems, knowledge is encoded in the form of symbols, which can represent objects, actions, or abstract concepts. These symbols are then manipulated using rules of inference, which allow the system to draw conclusions and make decisions. This rule-based reasoning enables symbolic AI systems to solve problems in a logical and structured manner.

Very simple example

I could load an AI with the following database of facts about animals (name, type, and characteristics):

a. animal(cat, mammal, fur, four_legs).
b. animal(fish, aquatic, scales, no_legs).
c. animal(bird, avian, feathers, two_legs).
d. animal(snake, reptile, scales, no_legs).

Then program the following rules for classification based on characteristics:

a. mammal(X) :- animal(X, mammal, fur, four_legs).
b. aquatic(X) :- animal(X, aquatic, scales, no_legs).
c. avian(X) :- animal(X, avian, feathers, two_legs).
d. reptile(X) :- animal(X, reptile, scales, no_legs).

Following this, we have a simple expert system that can respond to different queries, such as:

[a] Query: mammal(cat). # We’re asking if a cat is or isn’t a mammal. Expected response: true.
[b] Query: mammal(fish). Expected response: false.

(A rough Python emulation of this toy knowledge base appears after the list of early successes below.)

Here’s a more detailed primer on how Symbolic AI systems work.

Symbolic AI has been particularly successful in areas that require reasoning and planning, such as expert systems and game-playing. Successes include Shakey the Robot (see the first figure), built by SRI International (then the Stanford Research Institute) [5]; expert systems such as the rule-based AI named “MYCIN,” developed at Stanford University in the 1970s to diagnose and treat infections in humans; and ELIZA, one of the first natural language processing programs, created by Joseph Weizenbaum at MIT. Symbolic AI was, at least initially, exceptionally successful and employed in a wide range of applications.
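As promised above, here is a rough Python emulation of the toy knowledge base. This is only a sketch and not part of a real symbolic AI toolchain: the FACTS dictionary and the is_mammal / is_aquatic / is_avian / is_reptile helpers are hypothetical names chosen for illustration, standing in for the Prolog-style facts and rules listed earlier.

```python
# A rough Python emulation (illustrative only) of the Prolog-style knowledge base above.
# Facts are stored as tuples keyed by animal name; each rule simply checks that an
# animal's recorded characteristics match the rule's pattern.

# Facts: name -> (type, covering, legs)
FACTS = {
    "cat":   ("mammal",  "fur",      "four_legs"),
    "fish":  ("aquatic", "scales",   "no_legs"),
    "bird":  ("avian",   "feathers", "two_legs"),
    "snake": ("reptile", "scales",   "no_legs"),
}

# Rules: mirror mammal(X) :- animal(X, mammal, fur, four_legs). and so on.
def is_mammal(x):  return FACTS.get(x) == ("mammal", "fur", "four_legs")
def is_aquatic(x): return FACTS.get(x) == ("aquatic", "scales", "no_legs")
def is_avian(x):   return FACTS.get(x) == ("avian", "feathers", "two_legs")
def is_reptile(x): return FACTS.get(x) == ("reptile", "scales", "no_legs")

# Queries: mirror mammal(cat). and mammal(fish).
print(is_mammal("cat"))   # True
print(is_mammal("fish"))  # False
```

The two print statements mirror the queries mammal(cat) and mammal(fish), returning True and False respectively; everything the "agent" knows has to be written into FACTS and the rules by hand, which is exactly the limitation discussed next.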
Indeed, entire families of formal languages, such as LISP (1950s), were created to facilitate thinking and coding for AI systems [3]. There are, however, some very fundamental flaws with this way of thinking about AI systems.

Fig 2. A conversation with ELIZA (source: Wikimedia Commons)

Some problems with Symbolic AI

Handling Uncertainty
– Symbolic AI systems struggle with representing and reasoning about uncertainty.
– Real-world situations often involve incomplete or ambiguous information, and traditional symbolic approaches may struggle to handle uncertainty effectively.
– In domains where uncertainty is inherent, such as medical diagnosis or natural language understanding, symbolic AI may produce overly deterministic or inaccurate results.

(related) Knowledge Representation and Acquisition
– Symbolic AI systems rely on explicit representations of knowledge in the form of symbols and rules.
– This poses a challenge in capturing the vast and nuanced knowledge of the real world.
– A highly complex task such as autonomous driving, for example, simply cannot be represented as a set of “If-Then-Else” commands: how many billions of such conditions would you have to code into the AI agent?

Lack of Learning and Adaptation
– There’s essentially no way to learn from new data and experience!
– The AI agent simply does whatever the programmer codes into it. Real-world data can only be injected into the agent through the programmer’s knowledge of how that world works, which could be both very limited and biased.

These, and a few other rather serious handicaps, made progress in symbolic AI quite a frustrating endeavor. All this while, however, a parallel line of thought had been quietly developing, supported by the researchers who, hearing Minsky and Papert’s XOR death knell for the Perceptron classifier [6], had responded with a defiant “this ain’t over yet!”

Everything changed when the Connectionist Nation attacked…

The Connectionist approach to AI is (mostly) what you read about these days: from self-driving Waymo cabs, to Apple’s facial recognition systems, to that robot army Uncle Bob just told you about over Thanksgiving dinner.

Fig 3. Waymo’s self-driving car (source: Wikimedia Commons)

The connectionist approach to artificial intelligence (AI) draws inspiration from the structure and function of the human brain, aiming to create intelligent systems by emulating the interconnectedness of neurons. Unlike symbolic AI, which relies on explicit rules and representations, connectionist AI utilizes computer programs called “Artificial Neural Networks” (ANNs) to learn from data and make decisions. ANNs are composed of layers of interconnected artificial neurons, each receiving input signals, processing them, and transmitting output signals to other neurons. For example, on the right side of Fig. 4, each disc (blue, green, or yellow) represents one “neuron.” The connections between neurons carry weights, which are adjusted through a process called learning, allowing the ANN to identify patterns and relationships in the data. This learning ability enables connectionist AI systems to solve complex problems and make predictions based on new data, […]
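To make the idea of layered neurons, weighted connections, and a forward pass concrete, here is a minimal sketch in Python/NumPy. It is not from the article, and all names and weight values are chosen purely for illustration: the weights are set by hand so that a tiny two-layer network computes XOR, the very function Minsky and Papert showed a single-layer Perceptron cannot represent. In a real connectionist system these weights would be learned from data rather than written out by a programmer.

```python
# A minimal sketch (illustrative only) of a two-layer feed-forward network.
# Hand-set weights make it compute XOR; in practice, weights are learned from data.
import numpy as np

def step(z):
    """Heaviside step activation: a neuron 'fires' (1) when its weighted input exceeds 0."""
    return (z > 0).astype(int)

# Hidden layer: two neurons, each a weighted sum of the two inputs plus a bias.
W_hidden = np.array([[1.0, 1.0],   # hidden neuron 1 behaves like logical OR
                     [1.0, 1.0]])  # hidden neuron 2 behaves like logical AND
b_hidden = np.array([-0.5, -1.5])

# Output layer: one neuron combining the hidden activations (OR and not AND, i.e., XOR).
W_out = np.array([1.0, -1.0])
b_out = -0.5

def forward(x):
    """One forward pass: input signals -> hidden activations -> output signal."""
    h = step(x @ W_hidden.T + b_hidden)   # signals flow from the input layer to the hidden layer
    return step(h @ W_out + b_out)        # and from the hidden layer to the output neuron

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", forward(np.array(x)))  # prints 0, 1, 1, 0 -- the XOR truth table
```

The point of the sketch is the one made in the paragraph above: the network’s behavior lives entirely in its weights, and “learning” means adjusting those weights from data instead of hand-coding rules.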