Computation and Intelligence

Summary Prepared by Nicholas Plummer, Akeed Habeeb, and Michael Colloton [comments and additions by Frawley]

What is computation? What is intelligence?

As Barbara Von Eckardt states in her book What Is Cognitive Science?, computation is the capability of automatically "inputting, storing, manipulating, and outputting information in virtue of inputting, storing, manipulating, and outputting representations of that information. These information processes occur in accordance with a finite set of rules that are effective and that are, in some sense, in the machine itself." According to Webster's Third New International Dictionary, intelligence is the use of one's existing knowledge to meet new situations and to solve new problems, to learn, to foresee problems, to use symbols or relationships, to create new relationships, and to think abstractly. It is the ability to perceive one's own environment, to deal with it symbolically and effectively, and to adjust to it in order to work toward a goal.

Question: how does the popular nontechnical view of intelligence as represented by Webster's compare to the technical, cognitive science definition?

Are we computers?

A computer is an input-output device that manipulates symbols in an effective [i.e., statable] way. Computers possess automatic functions that relate data and symbols and form patterns. A computer with unlimited memory that could be programmed to simulate any other computer would be a universal symbol machine. A universal symbol machine would be intelligent in the sense that it would [might?] pass the Turing Test, an examination that attributes intelligence on the basis of whether the subject can convince the tester that it possesses intelligence [by virtue of its overt behavior]. However, this test is more of a heuristic, a "rule of thumb" so to speak, in part because it is so behavioristic in its determination. [There are systems that would fail the Turing Test yet be intelligent and systems that would pass but not be intelligent.]

In contrast to universal symbol machines, which are for the most part computers that run sequentially, the human brain runs in parallel. Human memories are distributed and human brains degrade gracefully, whereas computer memories are localized and computers ungracefully "crash."

[Taking the characteristics of computational learning as a guide], a learning machine consists of [computable input], target programs, learning mechanisms, and evaluation functions. The mind-brain is a machine that takes in computable input and creates a program; in this sense, the mind-brain is made up of many smaller programs working as a whole. No machine can be universal: none can learn every kind of input. Learning means no longer having to evaluate, or be surprised by, input [or halting computing: no longer having to output a program]. We are therefore learning devices that do not purely memorize but rather learn from our mistakes.

More particularly: for some human knowledge, it might be said that the mind-brain takes in computable input, outputs a program, and eventually comes to a point where it does not have to output a new program to account for new input. That is, all the input is decided. What kinds of knowledge, and at what level, can be treated in this fashion? Think of core face knowledge or universal grammar. Note that in this triple of input -- machine -- program, you can modify the relations by borrowing from one to feed the other: the more powerful the machine, for example, the less the reliance on input. So one way to apply this to the human mind-brain is to say that the machine is already rich in its initial state: a priori knowledge reduces reliance on rich computable input.
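The idea of learning as converging on a program can be made concrete. Below is a minimal sketch in the spirit of identification in the limit (Gold 1967), assuming a toy hypothesis space and input stream that are purely illustrative; the hypothesis names, predicates, and data are hypothetical, not anyone's model of the mind-brain. The learner outputs a program after each input item and has "learned" once new input no longer forces a new output; note how a smaller, a priori hypothesis space converges with fewer changes of mind.

    # A minimal, hypothetical sketch: learning as no longer having to
    # output a new program (identification in the limit).

    def learner(stream, hypothesis_space):
        """Read computable input item by item; after each item, output the
        first hypothesis (a named predicate, standing in for a program)
        still consistent with everything seen so far. Learning has
        succeeded when the output stops changing: no new input surprises
        the machine."""
        consistent = list(hypothesis_space)   # hypotheses not yet ruled out
        current, changes = None, 0
        for item in stream:
            # Evaluation function: discard hypotheses surprised by the input.
            consistent = [h for h in consistent if h[1](item)]
            best = consistent[0] if consistent else None
            if best is not current:
                current, changes = best, changes + 1   # output a new program
        return (current[0] if current else None), changes

    # Hypothetical hypotheses, ordered from specific to general.
    rich_space = [
        ("a-initial words", lambda w: w.startswith("a")),
        ("vowel-initial words", lambda w: w[0] in "aeiou"),
        ("all words", lambda w: True),
    ]
    # "A priori knowledge": an initial state already restricted to the
    # right type of hypothesis.
    innate_space = rich_space[1:2]

    stream = ["apple", "echo", "ant", "island", "any"]
    print(learner(stream, rich_space))    # ('vowel-initial words', 2)
    print(learner(stream, innate_space))  # ('vowel-initial words', 1)

On this picture, the trade-off in the triple is directly visible: the richer the machine's initial state, the less input it needs before its output program stabilizes.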

Are we born with the knowledge of "dog"? Perhaps we are born with knowledge of [its formal] type but not of its specific instance. That is, we are equipped at birth with certain knowledge that is then refined by our community. So we are born with a Grammar [an underspecified core of knowledge of language], and then, through public verification, we learn our "speech." Wittgenstein is relevant here: is our inborn knowledge a private mental language, or does it need the external community for verification?

What is mental content? What is the architecture of our mind, that is, its overall design as a computing system?

There are two hypotheses, that of the symbolists and that of the connectionists. According to the modular, or symbolist, theory, the brain is domain-specific: it knows rather than learns. The brain has a set of specified rules as well as a rich set of abstract formulae. One modular or symbolist example is the ambiguity of the word "apple" in relation to other words. Both meanings of "apple," fruit and computer, are active in the brain for approximately 200 milliseconds, at which time one is picked on the basis of context [that is, for ambiguous words, there seem to be modular effects]. On the other hand, connectionists believe that our learning is accomplished by pattern association, the manipulation of nodes in a network: patterns emerge through repeated exposure. This learning is interactive, and models of the process work by back-propagation of errors. One interactionist or connectionist example is the vagueness of the word "apple" with respect to color: an apple could be green or red, and which is understood is determined by the interaction of context [that is, for vague words, there seem to be interactive effects].
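To make "back-propagation of errors" concrete, here is a minimal sketch of a connectionist pattern associator; the task (associating a two-feature context with a color for "apple"), the features, the network size, and the training data are all hypothetical illustrations, not a published model. The error at the output unit is propagated backward to adjust the connection weights, and the association emerges through repeated exposure to the patterns.

    import math
    import random

    random.seed(0)

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Hypothetical network: 2 context features -> 2 hidden units -> 1 output.
    # Output near 1 means "red"; output near 0 means "green."
    W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    W2 = [random.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0

    # Hypothetical training patterns: ([is_ripe, granny_smith_context], color).
    patterns = [
        ([1, 0], 1),  # ripe, generic context        -> red
        ([1, 1], 0),  # ripe, Granny Smith context   -> green
        ([0, 0], 0),  # unripe, generic context      -> green
        ([0, 1], 0),  # unripe, Granny Smith context -> green
    ]

    def forward(x):
        h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
        y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + b2)
        return h, y

    rate = 0.5
    for epoch in range(5000):              # repeated exposure to the patterns
        for x, target in patterns:
            h, y = forward(x)
            # Back-propagation: the output error is passed backward through
            # the weights, and each weight moves to reduce that error.
            dy = (y - target) * y * (1 - y)
            for j in range(2):
                dh = dy * W2[j] * h[j] * (1 - h[j])
                W2[j] -= rate * dy * h[j]
                W1[j][0] -= rate * dh * x[0]
                W1[j][1] -= rate * dh * x[1]
                b1[j] -= rate * dh
            b2 -= rate * dy

    for x, target in patterns:
        _, y = forward(x)
        print(x, "->", round(y, 2), "(target:", target, ")")

Note that nothing in the trained network is a stored rule of the form "apple is red unless the context says Granny Smith"; the answer is simply the activation that a given context pulls out of the weights, which is the sense in which vagueness is said to be resolved interactively rather than by a module consulting stored meanings.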

Some tensions that ultimately drive these considerations: