Summary Prepared by Dan Miller (Additions and Comments by Frawley)
Dr. Chester's lecture attempted to help us answer the question, "Are we computers?" To begin answering the question we need to define what a computer is. Computers simply manipulate symbols.
[Note: by "symbol system," Chester means `repeatable pattern in binary representation.' Hence, there is a hardware version of what is normally thought of as a software concept: representation.] Their memory is constructed of registers arranged in consecutive order; those whose contents are set by switches are inputs, and those whose contents drive an external display are outputs. Some memory registers are inputs to logic circuits or look-up tables (ROM). When binary patterns are stored in the input registers, the outputs are automatically set to the patterns that are functions of the inputs.
There are built-in functions, such as shift, compare, and add, which allow the computer to compute any relationship between patterns.
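To make the built-in functions named above concrete, here is a minimal sketch (my illustration, not from the lecture) of shift, compare, and add operating on bit patterns represented as lists:

```python
# Hypothetical sketch: registers modeled as lists of bits (MSB first),
# with the lecture's three built-in functions acting on them.

def shift_left(bits, n=1):
    """Shift a pattern left, dropping overflow bits and filling with zeros."""
    return bits[n:] + [0] * n

def compare(a, b):
    """Output 1 if the two patterns are identical, else 0."""
    return 1 if a == b else 0

def add(a, b):
    """Ripple-carry addition of two equal-length bit patterns."""
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        total = x + y + carry
        result.append(total % 2)
        carry = total // 2
    return list(reversed(result))

print(add([0, 1, 1], [0, 0, 1]))  # 011 + 001 = 100, i.e. [1, 0, 0]
```

Each of these is just a fixed mapping from input patterns to output patterns, which is all the hardware look-up tables and logic circuits described above need to provide.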
[These are automatic functions that move data around and interpret some patterns as instructions (which are then counted by the instruction counter).] The important concept to understand is that any general-purpose computer can be programmed to simulate any other general-purpose computer. If a general-purpose computer were given unlimited memory to which it has access, it would be a universal symbol machine.
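The simulation claim can be illustrated with a toy interpreter (my sketch, with an invented three-instruction machine, not anything Chester described): one general-purpose machine treats another machine's program as mere data and steps through it, instruction counter and all.

```python
# Illustrative sketch: a host machine (Python) simulating a tiny register
# machine. The simulated program is just a list of symbol patterns that the
# host interprets -- the sense in which any general-purpose computer can
# simulate any other.

def run(program, registers):
    """Interpret (op, args) instructions until HALT; return final registers."""
    pc = 0  # the instruction counter
    while True:
        op, *args = program[pc]
        if op == "HALT":
            return registers
        elif op == "ADD":      # registers[a] += registers[b]
            a, b = args
            registers[a] += registers[b]
        elif op == "JUMP":     # set the instruction counter directly
            pc = args[0]
            continue
        pc += 1

prog = [("ADD", 0, 1), ("ADD", 0, 1), ("HALT",)]
print(run(prog, [0, 5]))  # doubles register 1 into register 0: [10, 5]
```

With unbounded registers, an interpreter of this kind is the road to universality; real machines, of course, are finite.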
[Another way of saying this: with a small set of functions, you can construct a general-purpose computer; with no memory limits, you can compute universally. Computers are finite, but large -- like brains? -- though we think of them as infinite.] We are then presented with two hypotheses: the weak version holds that symbol systems are capable of thinking (and hence of passing a Turing test); the strong version holds that only symbol systems are capable of thinking. What are we, then?
[Following Copeland, in the reading for this section: if the brain does not manipulate symbols, are the symbol hypotheses incorrect? Maybe not: not all mind is symbol manipulation.] One approach is to examine the brain's fundamental machines, neurons. Here is what we know about neurons: 1) there are more than a million million neurons in the brain, 2) there can be 1,000 to 10,000 connections to each neuron, 3) neurons transmit electrical pulses when sufficiently stimulated by light, pressure, etc., and 4) they can be stimulated by and can release chemicals. (See also the summaries of presentations by Profs. Scott, Hoffman, and Northmore.)
In the early McCulloch and Pitts model, neurons were seen as finite-state machines -- in an on or off state. (In fact, modern computers are really networks of McCulloch-Pitts neurons: logic gates.) But the McCulloch-Pitts neuron isn't a very good picture of a real neuron, because 1) the frequency of pulses in real neurons transmits information, 2) there are over 100 different types of neurons, 3) there are as many as 10,000 inputs to each neuron, and 4) there appear to be complex interactions among inputs within dendrites.
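The parenthetical point that logic gates are McCulloch-Pitts neurons can be seen in a few lines. This is a minimal sketch; the weights and thresholds are the textbook choices for AND and OR gates, not values from the lecture:

```python
# A McCulloch-Pitts unit: binary weighted inputs, a threshold, an on/off
# output. With suitable weights and thresholds it behaves as a logic gate.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(x, y):
    return mp_neuron([x, y], [1, 1], threshold=2)

def OR(x, y):
    return mp_neuron([x, y], [1, 1], threshold=1)

print([AND(1, 1), AND(1, 0), OR(1, 0), OR(0, 0)])  # [1, 0, 1, 0]
```

Networks of such units give you a computer; the criticisms above are about how poorly a single such unit matches a biological neuron.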
In fact, when you look at real brains and real machines, you see notable differences. Computers can perform many more steps of a computation in a given amount of time than a brain can, yet on some tasks the brain is faster. The brain runs in parallel, whereas computers are sequential (for the most part -- even parallel machines are simulated on sequential machines). Brain memories recall by description, whether by association or by partial content; computers recall by memory address. Brain memories are distributed, while computer memories are localized. Brains degrade gradually when slightly damaged; computers can crash if one bit is flipped.
We might make some progress on linking brains and machines through parallel architectures, content-addressable memories, and distributed (holographic?) memories, but these machines are still merely symbol manipulators, and their successes are debated.
Now we come to the last question: "Are we them?" There are many arguments that we are simply symbol manipulators. Representations in the brain could be like sentences in a language (Mentalese). The fact that we can handle paradoxes of belief can be explained by symbol manipulation. We are capable of generating an unlimited number of thoughts, just as symbol systems are. However, it is possible to simulate these properties with analog representations. That raises the question: if the brain uses analog representations, does that mean it is not a computer?
This has motivated the creation of a new type of analog computer called a parallel distributed processing (PDP) network, or an artificial neural network.
[Note: (1) How are PDP devices analogue? (2) The previous material essentially describes the standard von Neumann architecture, but PDP is supposed to be an alternative. Copeland (in the reading for this section of the course) notes: if it is not a von Neumann machine, it is not necessarily not a computer. Computation is a type of process, not a physical realization.] The most important feature of PDP networks is that they are simple analog devices which are extensively interconnected. There are two commonly used kinds of PDP network. In multi-layered networks, the units take continuous values, and each neuron is connected to every neuron in the next layer, which makes them far superior to the McCulloch-Pitts model. The other kind, recurrent networks, feed some outputs of layers back into the inputs of earlier layers. These networks are much more efficient at simulating brain behavior. PDP networks are capable of learning rules for forming past tenses of English verbs, learning to talk, steering a van across the country, playing expert backgammon (Neurogammon), and learning to read zip codes. Most PDP networks are simulated on digital computers, so does that mean there is no difference between analog and digital computers?
[And what is an analogue representation, anyway? "Analogue" usually has two senses: `continuous (non-digital)' and `iconic with respect to what is represented.' Thus a thermostat may be an analogue device in the first sense but not the second; a digitized map could be analogue in the second sense but not the first. Do both properties have to hold for a representation to be analogue?] The more we find out about the brain, the less it resembles any computer that has been created (or hypothesized). We eventually end up where we began, asking questions such as "what kind of process is thinking?" and "is the computer thinking when it adds?" (Or is it simulating addition when it adds? Are we simulating addition when we add?) Are we even thinking when we add? Is there a difference? So, does that mean that computers think, or that we are computers?