Artificial Intelligence

http://www.geocities.com/ResearchTriangle/Lab/8751/

 

What exactly is Artificial Intelligence?

Although most attempts to define complex and widely used terms end in futility, it is useful to draw at least an approximate boundary around the concept to provide a perspective on the course. To do this we take the by no means universally accepted definition: Artificial Intelligence (AI) is the study of how to make computers do things which, at the moment, people do better (Rich, 1991). This definition is somewhat vague because it refers to the current state of computer science.

The ability to solve problems of one sort or another is widely used as a measure of intelligence in many different contexts. Intelligent machines are unlikely to serve any practical purpose unless they can cope with some of the myriad simple (or not-so-simple) problems which people overcome as a matter of routine. Problems come in a bewildering variety of shapes and sizes. There are problems which can be solved with patience and perseverance, and others which require flair and intuition. There are formal, abstract problems, like those involved in game playing, whose solution may be of little more than academic interest. There are many problems that are practical and urgent, matters of life and death even. Some problems yield to elementary common sense; others can only be solved with the help of obscure knowledge (Aleksander and Burnett, 1987). There are several reasons why one might want to model human performance at these sorts of tasks.

 

Intelligence

It is difficult to discuss the workings of even relatively simple machines, such as washing machines or sewing machines, unless we understand the functions they were designed to perform. Since artificial intelligence is concerned with perhaps the most complex kind of machines we can imagine, 'intelligent machines', we should try to define what we expect such machines to do. Obviously, we expect them to be intelligent.

What do we mean by intelligence? The dictionary gives the following definition. Intelligence: the faculty of understanding.

Understanding: to comprehend something, or to recognise its significance. This is a concept that seems clear enough. When we apply it subjectively, it seems to correspond reasonably accurately to our own individual experience of what it is like to be intelligent or to use our intelligence. Unfortunately it begins to fall apart when we try to apply it objectively, to consider intelligence as a faculty which might be shared by other entities, whether living or mechanical.

The main problem is that we know what it feels like to understand something, and we are generally willing to credit other people, and even other things, with sensations similar to our own. Take a simple example, a familiar piece of machinery: the thermostat in a central heating system. It does not just recognise when the temperature falls below or rises above a certain level, it responds by taking the appropriate action. In a single, very limited respect it seems to possess understanding and to demonstrate this in the clearest possible fashion, by behaving intelligently. Yet if we allow that the thermostat is intelligent, we devalue the word to the point where it becomes meaningless.

General intelligence has turned out to be a concept of dubious value when applied in practice, and the whole question of using IQ tests to measure people's worth or suitability for a job has become extremely controversial. So should we break intelligence down into separate faculties such as perception, reason and creativity? If so, what is the difference between intelligence and knowledge?

Knowledge

One of the few hard and fast results to come out of the first three decades of AI research is that intelligence requires knowledge. To compensate for its one overpowering asset, indispensability, knowledge possesses some less desirable properties: it is voluminous, it is hard to characterise accurately, and it is constantly changing (Rich, 1991).

There are many different ways of categorising knowledge; one of the main distinctions is that between induced knowledge and deduced knowledge. This is best explained by means of an example.

Consider a commonplace skill which most children master between the ages of five and ten: catching a ball. At the age of five a child may have difficulty in catching a beach ball gently tossed from a few yards away, yet a few years later he or she will probably be able to catch a tennis ball lobbed high in the air from twenty yards. Human beings are not capable of mastering the technique for calculating ballistic trajectories at such an impressively early age. The child's understanding has been gained by induction: as a result of watching the trajectories of many balls and trying to catch them, the child has become capable of predicting the trajectory of the next ball he or she wants to catch.

A computer system, on the other hand, would rely on information about the projectile's velocity and trajectory to calculate its future location using Newton's laws. This depends on a rigorous and mathematically explicit set of formulae programmed into the computer. The program enables the computer to deduce the flight path of a projectile by reference to a set of formal mathematical rules, as the sketch below illustrates.
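As a minimal sketch of this deductive approach (the numbers and function name are invented for illustration, and air resistance is ignored), a few lines of Python suffice to deduce where a ball will land from Newton's laws:

    import math

    G = 9.81  # gravitational acceleration in m/s^2

    def landing_distance(speed, angle_deg, height=0.0):
        """Deduce how far away a ball launched at `speed` m/s, at
        `angle_deg` degrees above the horizontal and `height` metres
        above the ground, will land (air resistance ignored)."""
        angle = math.radians(angle_deg)
        vx = speed * math.cos(angle)          # horizontal velocity
        vy = speed * math.sin(angle)          # initial vertical velocity
        # Solve height + vy*t - (G/2)*t^2 = 0 for the positive root t.
        t_flight = (vy + math.sqrt(vy * vy + 2 * G * height)) / G
        return vx * t_flight

    print(round(landing_distance(10.0, 45.0), 2))  # about 10.19 metres

The point of the contrast stands: the child does nothing of the sort, yet catches the ball.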

Few people would dispute the proposition that calculating a ballistic trajectory mathematically requires more intelligence than being able to catch a ball (Aleksander and Burnett, 1987). So there is an important distinction to be made between knowledge and intelligence. It should also be clear that a machine may store knowledge without necessarily possessing intelligence; an intelligent machine which had no knowledge, however, is an impossibility.

The question of how, to what extent, and in what sense a machine can be imbued with knowledge is thus fundamental to all aspects of artificial intelligence.

What Are The Branches of Artificial Intelligence?

Here's a list, but some of the branches are surely missing, because no-one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.

Logical AI. What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96] lists some of the concepts involved in logical AI. [Sha97] is an important text.
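As a toy illustration of this idea (the facts, rules and atom names below are invented; real logical AI uses far richer languages), a program can represent its situation and goal as sentences and infer that an action is appropriate by forward chaining:

    # Facts and rules are 'sentences'; new facts are inferred from old.
    facts = {"at(robot, room1)", "in(key, room1)"}
    rules = [
        # (premises, conclusion)
        ({"at(robot, room1)", "in(key, room1)"}, "can_pick_up(robot, key)"),
        ({"can_pick_up(robot, key)"}, "achievable(has(robot, key))"),
    ]

    changed = True
    while changed:          # apply rules until no new fact is added
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print("achievable(has(robot, key))" in facts)  # True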

Search. AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.
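A small, concrete example of such a search (a classic toy problem, chosen purely for illustration): breadth-first search through the states of the two-jug puzzle, where a 4-litre and a 3-litre jug must be used to measure out exactly 2 litres:

    from collections import deque

    def successors(state):
        a, b = state                   # litres in the 4- and 3-litre jugs
        pour_ab = min(a, 3 - b)        # amount pourable from jug A into B
        pour_ba = min(b, 4 - a)        # amount pourable from jug B into A
        return {
            (4, b), (a, 3),            # fill either jug
            (0, b), (a, 0),            # empty either jug
            (a - pour_ab, b + pour_ab),
            (a + pour_ba, b - pour_ba),
        }

    def bfs(start, goal_litres):
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1][0] == goal_litres:
                return path
            for nxt in successors(path[-1]) - seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

    print(bfs((0, 0), 2))  # shortest sequence of jug states reaching 2 litres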

Pattern recognition. When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event, are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
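A minimal sketch of the simple kind of matching described (the 'scene' is just a list of labelled features, invented for illustration): a pattern in which "?" matches any single feature is compared against an observation:

    def matches(pattern, observation):
        """True if every pattern element equals the observed one,
        with "?" acting as a wildcard for any single feature."""
        return (len(pattern) == len(observation) and
                all(p == "?" or p == o for p, o in zip(pattern, observation)))

    face_pattern = ["eye", "eye", "?", "nose", "mouth"]
    scene        = ["eye", "eye", "eyebrow", "nose", "mouth"]
    print(matches(face_pattern, scene))  # True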

Representation. Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.

Inference. From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been developed since the 1970s.

Common sense knowledge and reasoning. This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed.

Learning from experience. Programs do that. The approaches to AI based on connectionism and neural nets specialise in it. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviours their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
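As a small example of learning from experience (a perceptron, one of the simplest connectionist learners; the data and learning rate are invented), a program adjusts numeric weights from labelled examples until it reproduces the logical OR function. The representational limit just mentioned applies: a single perceptron can only ever learn linearly separable behaviours.

    # Training data: inputs and the desired (target) output of OR.
    examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, bias, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):                        # a few passes suffice
        for (x1, x2), target in examples:
            output = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - output            # the learning signal
            w[0] += rate * error * x1
            w[1] += rate * error * x2
            bias += rate * error

    print([1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
           for (x1, x2), _ in examples])       # [0, 1, 1, 1]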

Planning. Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
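A toy planner in this spirit (the actions and facts are invented for illustration; real planners use much richer action languages): given the effects of each action, a starting situation and a goal, it searches for a sequence of actions that achieves the goal:

    from collections import deque

    # Each action: (name, preconditions, facts added, facts removed).
    actions = [
        ("pick_up_key",  {"at_desk"},              {"has_key"},   set()),
        ("walk_to_door", {"at_desk"},              {"at_door"},   {"at_desk"}),
        ("open_door",    {"at_door", "has_key"},   {"door_open"}, set()),
        ("walk_through", {"at_door", "door_open"}, {"outside"},   {"at_door"}),
    ]

    def plan(start, goal):
        frontier = deque([(frozenset(start), [])])
        seen = {frozenset(start)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps                    # all goal facts hold
            for name, pre, add, rem in actions:
                if pre <= state:
                    nxt = frozenset((state - rem) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))

    print(plan({"at_desk"}, {"outside"}))
    # ['pick_up_key', 'walk_to_door', 'open_door', 'walk_through']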

Epistemology. This is a study of the kinds of knowledge that are required for solving problems in the world.

Ontology. Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology began in the 1990s.

Heuristics. A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful.
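For instance (a minimal sketch; the grid, the obstacle and the Manhattan-distance heuristic are all chosen for illustration), a heuristic function can guide a search by always expanding the node that seems closest to the goal:

    import heapq

    def heuristic(node, goal):
        # Manhattan distance: a cheap estimate of the remaining cost.
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    def best_first(start, goal, blocked):
        frontier = [(heuristic(start, goal), start, [start])]
        seen = {start}
        while frontier:
            _, (x, y), path = heapq.heappop(frontier)  # most promising first
            if (x, y) == goal:
                return path
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if min(nxt) >= 0 and nxt not in blocked and nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(
                        frontier, (heuristic(nxt, goal), nxt, path + [nxt]))

    print(best_first((0, 0), (2, 2), blocked={(1, 1)}))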

What Are The Applications Of Artificial Intelligence?

Here are some.

Game playing. You can buy machines that can play master level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation, looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
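The principle behind such brute-force search can be shown on a game small enough to search completely (a toy 'take 1 or 2 counters' game, invented here for illustration; chess programs apply the same minimax idea to vastly more positions, with heavy pruning):

    def minimax(counters, my_turn):
        """Value of the position to us: +1 a forced win, -1 a forced loss.
        Players alternately remove 1 or 2 counters; taking the last wins."""
        if counters == 0:
            return -1 if my_turn else +1   # whoever just moved took the last
        scores = [minimax(counters - take, not my_turn)
                  for take in (1, 2) if take <= counters]
        return max(scores) if my_turn else min(scores)

    def best_move(counters):
        return max((take for take in (1, 2) if take <= counters),
                   key=lambda take: minimax(counters - take, my_turn=False))

    print(best_move(5))  # 2 -- leaving a multiple of 3 is a forced win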

Speech recognition. In the 1990s, computer speech recognition reached a practical level for limited purposes. While it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.

Understanding natural language. Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.

Computer vision. The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.

Expert systems. A 'knowledge engineer' interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practising doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms and treatments, and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. At the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.

Heuristic classification. One of the most feasible kinds of expert system, given the present knowledge of AI, is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment, and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
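A sketch of such a heuristic classifier (every rule, threshold and field name below is invented purely for illustration): several sources of information are combined into one of a fixed set of categories:

    def classify_purchase(amount, missed_payments, frauds_at_merchant):
        """Place a proposed credit card purchase into one of three
        fixed categories by accumulating simple risk points."""
        risk = 0
        if amount > 1000:
            risk += 1          # unusually large purchase
        if missed_payments > 2:
            risk += 2          # poor payment record of the card's owner
        if frauds_at_merchant > 0:
            risk += 2          # the establishment has a fraud history
        if risk == 0:
            return "accept"
        return "refer" if risk <= 2 else "reject"

    print(classify_purchase(50, 0, 0))     # accept
    print(classify_purchase(1500, 3, 1))   # reject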

What Are The Current Approaches Of Artificial Intelligence?

To be intelligent requires knowledge and reasoning skills. Intelligent behaviour implies linking the two together, and hence being able to deduce facts that are not explicit in the knowledge and to produce sensible reactions to those facts. In humans there is a consciousness that enables us to understand concepts such as what and why; that is, intentionality. With this ability we are able to make reasoned judgements and act accordingly. Of course the "reason" within our decisions is often subjective (and in the same way, our definition of intelligent behaviour is largely subjective). So what forms of reasoning are there? The three main types are deduction (drawing conclusions that follow necessarily from what is known), induction (generalising from observed examples, as the ball-catching child does) and abduction (inferring the most plausible explanation for an observation).

The second requirement for intelligent behaviour is the knowledge itself. It is impossible to reason to conclusions if there is no knowledge to reason from. So if we put some facts into a computer system and set a reasoning program into action, we have, in theory, an intelligent machine! The reality is that many of these AI structures work well in simple "toy" domains, but once they are presented with real-world problems and given real-world values they suddenly begin to struggle. The problem is that they do not have enough knowledge about the domain and so cannot respond to it. If we attempt to solve this simply by stuffing more information into the system, we quickly come up against the problem of speed: the specific piece of information needed cannot be retrieved from the knowledge base fast enough for a reasonable response using simple search techniques.

One of the major keys to AI, then, is being able to store knowledge efficiently, and in such a way that programs can access it in a reasonable time. In an ideal world all the knowledge in the world would be incorporated into a system, but this raises obvious problems. There are no obvious solutions, but a number of knowledge representation methods have been proposed, such as semantic nets, conceptual graphs, frames, first order predicate calculus and rules; there is not time to go into these topics in the confines of this essay, beyond the brief sketch below.
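The sketch reduces one of these representations, the semantic net, to its bare bones (the nodes and relations are invented for illustration). The key idea is that knowledge is indexed by node and relation, so a fact is retrieved directly rather than by scanning the whole knowledge base:

    # Nodes linked by labelled relations, indexed for direct lookup.
    net = {
        ("canary", "is_a"):   "bird",
        ("bird",   "is_a"):   "animal",
        ("bird",   "can"):    "fly",
        ("canary", "colour"): "yellow",
    }

    def lookup(node, relation):
        """Follow a relation, inheriting along is_a links if needed."""
        while node is not None:
            if (node, relation) in net:
                return net[(node, relation)]
            node = net.get((node, "is_a"))   # climb the hierarchy

    print(lookup("canary", "can"))      # fly    (inherited from bird)
    print(lookup("canary", "colour"))   # yellow (stored directly)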

An Expert System is a class of computer program that can advise, analyse, categorise, communicate, consult, design, diagnose, explain, explore, forecast, form concepts, identify, interpret, justify, learn, manage, monitor, plan, present, retrieve, schedule, test or tutor. They address problems normally thought to require a human specialist for their solution. - Edward Feigenbaum

Expert systems are currently used in business for tasks such as credit scoring, deciding whether an applicant is creditworthy, or predicting the rise and fall of shares on the stock market. Their rules are usually written in an English-like notation, which makes them easier to build and maintain than conventional program code. Expert systems are only expert in their particular field, but they have the advantage that, unlike humans, they do not grow old or tire, and they can process information faster. An expert system must have a user interface through which to gain knowledge and a technique for learning from experience: the system must ask questions and absorb the information given to it. There are several problems with this approach, though. The learning requires human intervention, since the knowledge must be supplied by an expert in the field. Large amounts of memory are required to hold all the information, along with a powerful computer so that the data is processed in a reasonable time. The other problem is that the system can go wrong, and this could be expensive for whoever is using it. Some banks already use these systems in a limited way in the stock market, though always with human supervision!

Eliza was one of the first programs to give a convincing appearance of intelligent conversation: the "computer therapist" created by artificial intelligence pioneer Joseph Weizenbaum at MIT. You communicate with the program just as if it were a therapist, and up to a point it passes the Turing test; it could be a real person, or so you are led to believe. Try to answer these questions: What makes people feel understood by a therapist? What leads to a feeling of rapport? What is it about a conversation that is therapeutic? What is missing in the computer's responses that makes it not a real psychotherapist?
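The flavour of Eliza can be captured in a few lines (a much-simplified sketch, not Weizenbaum's actual script; the keywords and canned replies are invented): keyword patterns trigger stock therapist responses, with no understanding behind them at all:

    import random

    rules = [
        ("mother",  ["Tell me more about your family."]),
        ("i feel",  ["Why do you feel that way?",
                     "How long have you felt like this?"]),
        ("because", ["Is that the real reason?"]),
    ]

    def respond(utterance):
        text = utterance.lower()
        for keyword, responses in rules:
            if keyword in text:
                return random.choice(responses)
        return "Please go on."          # default when nothing matches

    print(respond("I feel anxious about my work"))
    # e.g. "Why do you feel that way?"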

A neural network is essentially a type of computer, but it does not work in the same way as a conventional one. Conventional computers have a CPU and memory, and information is represented in terms of structures of symbols. A neural network is instead based upon the structure of the brain, which consists of many billions of cells called neurons. These build into a network, and signals pass between neurons very quickly by electrochemical means. Each neuron in the brain corresponds to a single processing unit, and the network operates in a similar way, passing signals from one point to another.

The mapping out of the basic morphology of the biological neuron, while a major step forward, raised the difficult question of how networks of such neurons, that is brains, might carry out the information processing tasks that they obviously perform in people and animals. Some light was cast on this question by McCulloch and Pitts in the early 1940s. In a classic work, these two researchers showed how neuron-like threshold units, or TUs, might represent logical expressions. From this starting point they went on to show how networks of such units might carry out calculations. In the simplest case the threshold unit is capable of being either on or off. It has several input connections and, in a given time step, receives an input on each connection. The input can be considered to be either a 0 or a 1. If the number of 1s received reaches some threshold, the unit outputs a 1; otherwise it outputs a 0. In a network of such units, the output of one unit is the input to others.

McCulloch and Pitts suggested that networks of TUs formed a good model for the function of biological neurons. Such a network can perform a computation: input values can be set on the input units of the network, and a result will be computed on the axons of the output units. The most important feature of neural networks is that they can learn, although they must be trained first. There are various training methods and learning rules for neural networks, but these are too lengthy for this essay. The problem with neural networks is that they are too expensive to construct in anything but the smallest of trials, and current computers lack a sufficient number of pathways between components.
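The threshold unit itself is easily written down (a sketch of the McCulloch-Pitts idea described above; the XOR wiring is one standard textbook construction, with 1 - x standing in for a NOT unit):

    def threshold_unit(inputs, threshold):
        """Outputs 1 if the number of 1-inputs reaches the threshold."""
        return 1 if sum(inputs) >= threshold else 0

    def OR(a, b):
        return threshold_unit([a, b], threshold=1)

    def AND(a, b):
        return threshold_unit([a, b], threshold=2)

    # A small network: the output of one unit is the input to another.
    def XOR(a, b):
        return AND(OR(a, b), 1 - AND(a, b))

    print([(a, b, XOR(a, b)) for a in (0, 1) for b in (0, 1)])
    # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]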

Fuzzy logic is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth: truth values between "completely true" and "completely false". Boolean logic says that something is either on or off, true or false; you are either sleeping or awake. But what about the time in between sleep and a full state of consciousness? When sitting down for a meal, the meal is not simply there or not there: there is a continuous period of it being eaten, and each period can be broken down further into stages of being more or less eaten. This idea was introduced by Dr. Lotfi Zadeh of UC Berkeley in the 1960s as a means to model the uncertainty of natural language.

Zadeh says that rather than regarding fuzzy theory as a single theory, we should regard the process of "fuzzification" as a methodology to generalise any specific theory from a crisp (discrete) to a continuous (fuzzy) form. Thus researchers have more recently introduced "fuzzy calculus", "fuzzy differential equations", and so on. Fuzzy logic is used directly in very few applications. The Sony PalmTop apparently uses a fuzzy logic decision tree algorithm to perform handwritten (well, computer lightpen) Kanji character recognition. Most applications of fuzzy logic use it as the underlying logic system for fuzzy expert systems. Fuzzy logic and neural networks have also been combined in recent times; they were even used to control a helicopter with a missing rotor blade, the point being that this could be done quickly enough by a computer but not by a pilot, even though the program initially had to be trained by a pilot.
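The core of the idea fits in a few lines (a minimal sketch using Zadeh's standard min/max operators; the "awake" membership function is invented for illustration): truth becomes a number between 0 and 1, and the logical connectives generalise accordingly:

    def awake(minutes_since_alarm):
        """Degree of wakefulness, rising from 0 to 1 over half an hour."""
        return max(0.0, min(1.0, minutes_since_alarm / 30.0))

    def fuzzy_and(a, b):
        return min(a, b)    # generalises Boolean AND

    def fuzzy_or(a, b):
        return max(a, b)    # generalises Boolean OR

    for m in (0, 10, 20, 40):
        print(m, "min:", round(awake(m), 2))   # 0.0, 0.33, 0.67, 1.0

    print(fuzzy_and(awake(10), awake(20)))     # as true as its weakest part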

History

In the early 1900s, Torres y Quevedo, a Spanish inventor, built a machine that could checkmate its opponent with a rook and a king against a king. Systematic work, however, began only after the invention of the digital computer. The first scientific article on AI was published by Alan Turing in 1950, and the first full-time research group was started in 1954 at Carnegie Mellon University by Allen Newell and Herbert Simon. But the field of AI as a coherent area of research is roughly forty years old. It all started at the 1956 Dartmouth conference, where ten young researchers shared the same dream of using a computer to model the ways humans think. Their hypothesis was that the mechanisms of human thought could be precisely modelled and simulated on a digital computer, and this is what the whole foundation of AI is based on.

Within a few years AI seemed to take off. Programs were created that played checkers, translated sentences of code into human-understandable words, and identified patterns. On the downside, arguments arose: people were sure that the technology would fail, and that getting computers to "think" was impossible. They confused the early difficulties and stumbles with the fundamental limits of technology. In the words of Scotty, "I can't do it Captain. I just don't have the power." It was indeed impossible for artificial intelligence to be created with the limited technology of the early 60s. But past failures and new technology have led to many advances in science's history, and each failure added more information to build on.

Conclusion

People still remain skeptical about artificial intelligence, but with all the new breakthroughs and the use of modern technology, AI is progressing extremely rapidly. With the use of new "thinking" robots in agriculture, industry, NASA and the military, the advance of AI is astonishing. Who knows, soon "thinking" bots may be so common that we will not even think of the struggles and hard work it took to get to that point. Farmers may not have to drive ploughs, robots may pilot helicopters in the Army, and doctors may follow a robot's advice. We might even have robots taking over our daily chores around the house. Just think: no more "Clean up your room!" from your parents. It will be more like "Get your bot to clean the house!". Sounds good to me! Who knows what's around the corner in the 21st century...