Assume that artificial intelligence's practical applications (AIPA) are completely successful and that society will soon have programs whose performance can equal or surpass that of any human on any comprehension task.
The best current programs can beat all but the very best chess players, but it would be a mistake to regard them as a substantial contribution to artificial intelligence as a field of cognitive science (Ptacek, 1994).
There is also a more structured approach to assessing artificial intelligence, one that began opening the door to AI's contribution to the sciences.
From this point of view, artificial intelligence not only gives the commercial and business world an advantage, but also extends understanding and enjoyment to everyone who knows how to use a pocket calculator.
The development of artificial intelligence is only a small part of the computer revolution and of how society deals with, learns about, and incorporates intelligence.
In its latest instantiation, ACT-R (Anderson et al., 2004; Anderson & Lebiere, 1998) is presented as a hybrid cognitive architecture. Its symbolic structure is a production system; its subsymbolic structure is a set of massively parallel processes that can be summarized by a number of mathematical equations. The two representations work together to explain how people organize knowledge and produce intelligent behavior. ACT-R theory aims to evolve toward a system that can perform the full range of human cognitive tasks, capturing in great detail how we perceive, think about, and act on the world. Because of its general architecture, the theory is applicable to a wide variety of research areas, including perception and attention, learning and memory, problem solving and decision making, and language processing.
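To make the subsymbolic side concrete, the sketch below implements one of ACT-R's best-known equations, the base-level learning equation, in which the activation of a memory chunk is the log of the summed, decaying traces of its past uses, B = ln(Σ t^(-d)), with the decay parameter d conventionally set near 0.5. This is only an illustrative sketch of that single equation, not the ACT-R architecture itself; the function name, time units, and example values are our own.

```python
import math

def base_level_activation(use_times, now, d=0.5):
    """ACT-R base-level learning: B = ln(sum_j (now - t_j)^(-d)).

    use_times : times (e.g., seconds) at which the chunk was used
    now       : current time; must be later than every use
    d         : decay parameter (0.5 is the conventional default)
    """
    return math.log(sum((now - t) ** (-d) for t in use_times))

# A chunk rehearsed recently and often is more active (easier to retrieve)
# than one last used long ago.
recent = base_level_activation(use_times=[10.0, 40.0, 55.0], now=60.0)
stale = base_level_activation(use_times=[1.0, 2.0], now=60.0)
print(f"recent chunk: {recent:.3f}, stale chunk: {stale:.3f}")
```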
Evolutionary programming, originally conceived by Lawrence J. Fogel in 1960, emphasizes the relationship between parent solutions (the solutions being analyzed) and their offspring (new solutions resulting from some modification of the parent solutions). Fogel, Owens, and Walsh’s 1966 book Artificial Intelligence Through Simulated Evolution is the landmark publication in this area of AI. In general, in evolutionary programming, the problem to be solved is represented or encoded in a string of variables that defines all the potential solutions to the problem. Each full set of variables with its specific values is known as an individual or candidate solution. To solve the problem, a population of “individuals” is created, with each individual representing a random possible solution to the problem. Each of the individuals (i.e., each candidate solution) is evaluated and assigned a fitness value based on how effective the candidate solution is at solving the problem. Based on this fitness value, some individuals (usually the most successful) are selected to be parents, and offspring are generated from these parents.
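As an illustration of this loop, the sketch below evolves real-valued candidate solutions to minimize a simple objective. Following the classic evolutionary programming pattern, each parent produces one offspring by Gaussian mutation (no crossover), and the fittest half of the combined pool survives into the next generation. The objective function, population size, and mutation scale are arbitrary choices made for the example, not part of Fogel's original formulation.

```python
import random

def fitness(candidate):
    # Toy objective: lower is better (squared distance of the vector from the origin).
    return sum(x * x for x in candidate)

def mutate(parent, scale=0.3):
    # Classic EP creates offspring by mutation alone, with no crossover.
    return [x + random.gauss(0.0, scale) for x in parent]

def evolve(dim=5, pop_size=20, generations=100):
    # Start from a random population of candidate solutions ("individuals").
    population = [[random.uniform(-5, 5) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(p) for p in population]   # one child per parent
        pool = population + offspring
        pool.sort(key=fitness)                        # evaluate fitness
        population = pool[:pop_size]                  # survivors = fittest half
    return population[0]

best = evolve()
print("best solution:", best, "fitness:", fitness(best))
```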
Second is the algorithmic modeling culture (subscribed to by 2% of statisticians and many researchers in biology, artificial intelligence, and other fields that deal with complex phenomena), which holds that nature's black box cannot necessarily be described by a simple model.
A stronger focus on computer modeling and simulations and on the study of cognition as a system resulted in the development of cognitive science. Cognitive science is an interdisciplinary field concerned with how humans, animals, and machines acquire knowledge, how they represent that knowledge, and how those representations are manipulated. It embraces psychology, artificial intelligence, neuroscience, philosophy, linguistics, anthropology, biology, evolution, and education, among other disciplines.
The term AI was coined at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 at Dartmouth College. This two-month workshop was organized by John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester and included as participants Trenchard More from Princeton, Arthur Samuel from IBM, Ray Solomonoff and Oliver Selfridge from MIT, and Allen Newell and Herbert Simon from Carnegie Tech, all of whom played fundamental roles in the development of AI. The Dartmouth workshop is considered the official birthplace of AI as a field, and it showcased significant advances over previous work. For example, Allen Newell and Herbert Simon demonstrated a reasoning program, the Logic Theorist, which was capable of working with symbols and not just numbers.
In modern scientific AI, the first recognized work was Warren McCulloch and Walter Pitts’s 1943 article A Logical Calculus of the Ideas Immanent in Nervous Activity, which laid the foundations for the development of neural networks. McCulloch and Pitts proposed a model of artificial neurons, suggesting that any computable function could be achieved by a network of connected neurons and that all logical connectives (and, or, not, etc.) could be implemented by simple network structures. In 1948, Wiener’s popular book Cybernetics popularized the term cybernetics and defined the principle of feedback. Wiener suggested that all intelligent behavior was the result of feedback mechanisms, or conditioned responses, and that it was possible to simulate these responses using a computer. One year later, Donald Hebb (1949) proposed a simple rule for modifying and updating the strength of the connections between neurons, which is now known as Hebbian learning. In 1950, Alan M. Turing published Computing Machinery and Intelligence, which was based on the idea that both machines and humans compute symbols and that this commonality should be the basis for building intelligent machines. Turing also introduced an operational strategy to test for intelligent behavior in machines based upon an imitation game known as the Turing test. (A brief description of the test and its impact on AI is discussed below.) Because of the impact of his ideas on the field of AI, Turing is considered by many to be the father of AI.
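To illustrate McCulloch and Pitts's claim that simple threshold units suffice for the basic logical connectives, the sketch below models a neuron that fires when the weighted sum of its binary inputs reaches a threshold. The particular weights and thresholds are just one conventional way to realize AND, OR, and NOT, not values taken from the 1943 paper.

```python
def mcp_neuron(inputs, weights, threshold):
    # A McCulloch-Pitts unit: output 1 if the weighted sum of the
    # binary inputs reaches the threshold, otherwise 0.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mcp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # An inhibitory (negative) weight with threshold 0 inverts the input.
    return mcp_neuron([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```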
Strong AI (also known as hard AI) supports the view that machines are really intelligent and that, someday, they could have understanding and conscious minds. This view assumes that all mental activities of humans can eventually be reduced to algorithms and processes that can be implemented in a machine. Thus, for example, there should be no fundamental differences between a machine that emulates all the processes in the brain and the actions of a human being, including understanding and consciousness. One of the problems with the strong AI view centers on the following two questions: (a) How do we know that an artificial system is truly intelligent? (b) What makes a system (natural or artificial) intelligent? Even today, there is no clear consensus on what intelligence really is. Turing (1950) was aware of this problem and, recognizing the difficulty of agreeing on a common definition of intelligence, proposed an operational test to circumvent the question. He named this test the imitation game; it later became known as the Turing test.