<artificial intelligence, philosophy of mind, philosophy of AI, PI> (AI) The subfield of computer science concerned with the concepts and methods of symbolic inference by computer and symbolic knowledge representation for use in making inferences.
AI can be seen as an attempt to model or simulate aspects of human cognition on computers. Thus Minsky defined it as "The science of making machines do things that would require intelligence if done by people" (1968). It is also sometimes defined as the attempt to solve by computer any problem that a human can solve faster.
Two examples of AI problems are computer vision (building a system that can understand images as well as a human) and natural language processing (building a system that can understand and speak a human language as well as a human). These may appear to be modular, but all attempts so far (2001) to solve them have foundered on the amount of context information and "intelligence" they seem to require.
As a field, AI was born at the Dartmouth Summer Research Project on Artificial Intelligence, organised by John McCarthy in 1956, though researchers had been working on problems of machine intelligence for decades before that. The early years of AI were dominated by symbolic models of cognitive processing, which are formally equivalent to the Turing machine and thus computable (algorithmically calculable). The Physical Symbol System Hypothesis of Newell and Simon formalised the commitments of this sort of approach to machine intelligence.
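The symbolic approach described above can be illustrated with a minimal sketch of forward chaining: facts and if-then rules are represented as symbol structures, and inference repeatedly applies rules whose premises are satisfied. The rules and facts below are illustrative toy examples, not drawn from any particular AI system.

```python
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) until no new facts
    can be derived; return the full set of derived facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are known facts
            # and its conclusion is not yet known.
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy knowledge base: two rules about birds.
rules = [(["bird"], "has-feathers"),
         (["bird", "can-fly"], "can-migrate")]
print(forward_chain(["bird", "can-fly"], rules))
```

Here all "knowledge" lives in discrete symbol structures, and "inference" is purely formal manipulation of them, which is exactly the commitment the Physical Symbol System Hypothesis makes.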
Minsky, M.L., ed. (1968). Semantic information processing. Cambridge, MA: MIT Press.
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
Turing, A.M. (1950). Computing machinery and intelligence. Mind 59: 433-460.
See also AI-complete, artificial life, connectionism, fuzzy computing, genetic programming, neats vs. scruffies, neural network, symbolicism
Based on [FOLDOC] and Chris Eliasmith's [Dictionary of Philosophy of Mind]
<machine, epistemology, simulation, semantics, Turing machine> Scientific research aimed at building machines capable of performing a variety of functions as well as (or better than) human agents do. Although early efforts focused on symbolic manipulation and linguistic representation, most now use the notion of parallel distributed processing for the construction of quasi-neural networks that recognize and associate patterns. Hopes (or fears) of the success of this discipline often give rise to philosophical questions about whether purely physical systems can adequately support consciousness, perception, and mind. Recommended Reading: The Simulation of Human Intelligence, ed. by Donald Broadbent (Blackwell, 1993); Jack Copeland, Artificial Intelligence: A Philosophical Introduction (Blackwell, 1993); Daniel C. Dennett, Brainchildren: Essays on Designing Minds, 1984-1996 (Bradford, 1998); and Semantic Information Processing, ed. by Marvin L. Minsky (MIT, 1968).
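The parallel distributed processing idea mentioned above can be sketched with a minimal Hebbian pattern associator: knowledge is stored not as symbols but as a matrix of connection weights, and recall is a parallel sum over weighted inputs. The patterns below are illustrative toy vectors, not from any particular network model.

```python
def train(pairs, n_in, n_out):
    # Hebbian learning: weight w[i][j] accumulates the product
    # of input unit i and output unit j over all training pairs.
    w = [[0.0] * n_out for _ in range(n_in)]
    for x, y in pairs:
        for i in range(n_in):
            for j in range(n_out):
                w[i][j] += x[i] * y[j]
    return w

def recall(w, x):
    # Each output unit sums its weighted inputs in parallel,
    # then thresholds the sum to +1 or -1.
    sums = [sum(x[i] * w[i][j] for i in range(len(x)))
            for j in range(len(w[0]))]
    return [1 if s > 0 else -1 for s in sums]

# Store one +-1 input/output pattern pair and recall it.
x1, y1 = [1, -1, 1, -1], [1, 1, -1]
w = train([(x1, y1)], 4, 3)
print(recall(w, x1))
```

Unlike the symbolic approach, nothing in the weight matrix corresponds to a discrete rule or proposition; the association is distributed across all the connections at once, which is what makes such networks good at recognizing and completing patterns.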
[A Dictionary of Philosophical Terms and Names]