Friday, March 16, 2007

Multiplicity of consciousnesses in strong AI

I want to prove an important result about strong AI through a thought experiment. Let's take the postulate of strong AI that a computer running a simulation produces consciousness.

Now consider two such computers running the same program. Further, let's assume that their clocks are synchronized (either by running them off the same clock, or by using atomic clocks synchronized at the start), so that program execution remains synchronized down to the last cycle.
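
As a minimal sketch of this setup (the step function and initial state below are arbitrary placeholders of my choosing, not an actual simulation of anything in particular): two deterministic machines stepping off a shared clock remain in identical states at every cycle.

    # Two identical deterministic "computers" stepping in lockstep.
    # The step function is a placeholder for whatever program the
    # strong AI hypothesis says produces consciousness.

    def step(state):
        # Any deterministic update rule will do for the illustration.
        return [(3 * x + 1) % 256 for x in state]

    state_a = [7, 42, 101, 5]       # computer A's memory/registers
    state_b = list(state_a)         # computer B starts in the same state

    for cycle in range(1000):       # shared clock: both step once per cycle
        state_a = step(state_a)
        state_b = step(state_b)
        assert state_a == state_b   # synchronized down to the last cycle

    print("states identical after 1000 cycles:", state_a == state_b)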

Now the question is, does each computer produce a separate consciousness? Or do both produce only one together? If the computers are physically separate from each other, it would appear that if one produces consciousness, so should the other. There is nothing in Daniel Dennett's Consciousness Explained (which I take to be the poster child for strong AI advocacy) that suggests otherwise.

However, let me argue that both together, in fact, produce only one, if any at all. To this end, I will make the following assumption:

If P and P' are two physically different systems that are functionally equivalent, functionality being defined over whatever physical behavior gives rise to consciousness according to the strong AI hypothesis, then, tautologically, the difference between P and P' should make no material difference to the consciousness(es) produced, either in their qualitative nature or in their multiplicity.
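
Stated symbolically (the notation C and F is mine, not taken from the strong AI literature): let C(P) denote the set of conscious entities that a physical system P gives rise to under the strong AI hypothesis, and let F(P) denote the physical behavior of P relevant to producing them. The assumption then reads:

    F(P) = F(P') \;\Longrightarrow\; |C(P)| = |C(P')|
    \ \text{and the members of } C(P) \text{ and } C(P') \text{ are qualitatively identical.}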

Now let P be the system consisting of the two computers described above, and let P' be the system in which every node of the first computer is connected to the corresponding node of the second by a wire. Since the two computers are clock-synchronized and executing the same program, at any given instant the voltage difference across each wire is zero, and hence the current through it is zero. The wires therefore make no material difference to the operation of the combined system, so P and P' are functionally equivalent.

But the presence of the wires reduces the number of computers executing the program from two to one: tying corresponding nodes together merely merges the transistors switching each node into devices of twice the size, leaving a single computer with bigger devices. So, according to the strong AI hypothesis, if each independent computer in P produces one conscious entity, then P' should produce only one, since P' is a single computer that just has bigger devices. That is, if P has two consciousnesses, P' should have only one. But we have already asserted that P and P' are functionally equivalent, so by the assumption above their multiplicities of consciousness must be the same. This is a contradiction.
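
To see concretely why the wires are inert, here is a minimal sketch under the same assumptions (the node values and update rule are arbitrary stand-ins, not real circuit physics): the current in each connecting wire is taken to be the voltage difference across it divided by its resistance, and since the two lockstep machines never disagree, every wire carries zero current at every cycle.

    # Toy model of P': two lockstep machines with a "wire" tied between
    # every pair of corresponding nodes. Current through a wire is taken
    # to be proportional to the voltage difference across it, so it is
    # zero whenever the two nodes agree.

    def step(nodes):
        # Same deterministic update in both machines (placeholder logic).
        return [(5 * v + 3) % 64 for v in nodes]

    nodes_a = [1, 8, 27, 60]
    nodes_b = list(nodes_a)          # clock-synchronized, same program, same state
    R = 10.0                         # wire resistance (value irrelevant here)

    max_current = 0.0
    for cycle in range(1000):
        nodes_a = step(nodes_a)
        nodes_b = step(nodes_b)
        # Current in each connecting wire this cycle: I = (Va - Vb) / R
        currents = [(va - vb) / R for va, vb in zip(nodes_a, nodes_b)]
        max_current = max(max_current, max(abs(i) for i in currents))

    print("largest wire current seen:", max_current)   # 0.0 -- the wires are inert

In other words, adding or removing the wires changes nothing observable about the combined system, which is exactly the functional equivalence the argument relies on.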

By induction, any number of clock-synchronized computers running the same program can together produce only one stream of consciousness, if any at all.
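
For completeness, here is how the induction might be spelled out in the same notation (my paraphrase; the original leaves the step implicit). Let P_n denote n clock-synchronized computers running the program. The base case n = 2 is the argument above; for the inductive step, wire the (n+1)-th synchronized computer node-by-node to the machine already merged from the first n, which again introduces only zero-current wires and only enlarges the devices of a single computer:

    |C(P_2)| \le 1; \qquad |C(P_n)| \le 1 \;\Longrightarrow\; |C(P_{n+1})| \le 1.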

Note: This is not a far-fetched thought experiment. As a chip designer myself, I can say that we routinely talk of paralleling gates or individual devices as "fanout". Moreover, the sizes of the devices in the latest Pentium may be vastly different from those in earlier generations built on different process technologies. I am sure Daniel Dennett himself would agree that his theories do not depend on which version of the Pentium his programs run on.