In this post, I have shown that two or more physically separate computers executing the same program trace can produce at most one consciousness. The question that remains is whether even one consciousness can be produced by a program execution.
According to the "strong AI" hypothesis, a computer running a program produces consciousness. This statement is admittedly vague, since one cannot really define what one even means by "running a program", except perhaps in the view of strong AI advocates.
If one takes a computer's state at any time to consist of N bits (that is, its memory), then the execution trace can be interpreted as an N-bit vector as a function of the clock cycle.
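To make the notion concrete, here is a toy sketch in Python (my own illustration, not from the post): a "machine" whose entire state is N = 4 bits, whose trace is simply the sequence of state vectors, one per clock cycle. The counter update is an arbitrary choice; any deterministic transition function would serve the argument equally well.

```python
# Toy machine whose entire state is N = 4 bits. The execution trace is
# then a function from cycle number to an N-bit vector.

N = 4

def step(state):
    """Hypothetical transition function: a 4-bit counter.
    (Any deterministic update would do for the argument.)"""
    value = int("".join(map(str, state)), 2)
    value = (value + 1) % (2 ** N)
    return [int(b) for b in format(value, f"0{N}b")]

state = [0, 0, 0, 0]
trace = [state]            # cycle 0
for cycle in range(3):     # cycles 1..3
    state = step(state)
    trace.append(state)

print(trace)
# -> [[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 0, 1, 1]]
```

Storing `trace` to disk and reading it back is exactly the "replay" considered below.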
Let us now assume, for argument's sake, that a computer running a (finite) deterministic program does indeed produce consciousness. If we store the N bits for each clock cycle on the hard disk and replay them (on the same digital nodes as the original execution), does the act of replaying the trace produce consciousness again? If the strong AI advocates answer no, then they have some explaining to do as to why that's the case.
If, on the other hand, they assert that replaying the trace is no different from executing the program, then we need to look a bit more deeply into what constitutes "replaying" a trace.
The key issue here, I feel, is that an "external" conscious agent is necessary even to assign a meaning to the pattern of those N bits. Being a chip designer myself, I have designed circuits in which every other bit of a memory is flipped internally (to reduce the transistor count). So if the execution is replayed with the even bits flipped, would it still produce the same consciousness?
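The point about the flipped bits can be sketched in a few lines of Python (again my own illustration): flipping every even-indexed bit is a fixed, invertible relabeling, so applying it twice restores the original vector. The stored bit pattern differs, yet it carries the same information, but only under the matching read-out convention.

```python
# Sketch: flipping every even-indexed bit is an invertible relabeling.
# The transistors hold a different pattern, but a reader who knows the
# convention recovers the original vector exactly.

def flip_even_bits(bits):
    return [b ^ 1 if i % 2 == 0 else b for i, b in enumerate(bits)]

original = [1, 0, 1, 1, 0, 0, 1, 0]
stored = flip_even_bits(original)       # what the memory physically holds
recovered = flip_even_bits(stored)      # what the convention-aware reader sees

print(stored != original)   # True: the physical bits differ
print(recovered == original)  # True: the convention, not the bits, fixes the meaning
```

The question is then whether "the same consciousness" attaches to `original`, to `stored`, or to the convention relating them.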
What if the successive N-bit vectors are instead hashed according to some hash table (one with an entry for each bit of each clock cycle)? Without the hash table, the whole execution trace is garbage; but if the entries are read off in accordance with the hash table, the execution corresponds to the original program. In that case, is it any more necessary to physically perform the dehashing step than it was to physically re-invert the flipped bits in the previous example? If not, then there would always be some hash table "out there" that makes any garbage bits correspond to a meaningful simulation. And since there is a virtually unlimited number of abstract hash tables available to do the dehashing (with only a small subset corresponding to meaningful simulations), which one(s) get chosen? The strong AI advocates cannot argue that it depends on which hash table the observer uses to interpret the bits, since that undermines their claim that the trace produces consciousness on its own, with no external intervention.
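The existence claim here is trivially demonstrable. Modeling the "hash table" as a table with one flip/don't-flip entry per bit per clock cycle (one way to read the post's construction, and my own modeling choice), the table mapping any garbage trace onto any chosen meaningful trace is just their bitwise XOR:

```python
import random

# For ANY garbage trace and ANY target trace of the same shape, a
# per-cycle, per-bit "dehashing" table exists: entry = garbage XOR target.

def make_table(garbage, target):
    """Table whose application maps `garbage` onto `target`."""
    return [[g ^ t for g, t in zip(gv, tv)] for gv, tv in zip(garbage, target)]

def dehash(trace, table):
    return [[b ^ e for b, e in zip(vec, ent)] for vec, ent in zip(trace, table)]

random.seed(0)  # arbitrary seed; any garbage works
meaningful = [[0, 0, 0, 1], [0, 0, 1, 0], [0, 0, 1, 1]]   # some "real" trace
garbage = [[random.randint(0, 1) for _ in range(4)] for _ in range(3)]

table = make_table(garbage, meaningful)
print(dehash(garbage, table) == meaningful)  # True, for any garbage whatsoever
```

Since the construction works for every garbage/target pair, the "meaning" resides entirely in the choice of table, which is precisely the difficulty raised above.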
Clearly, assigning a meaning to the N bits is a contentious issue. But even more contentious is what is meant by the bits themselves. To illustrate the point, let the N bits be transmitted into space as parallel light beams, with a '1' corresponding to a light pulse and a '0' to the absence of one. Assume the light excites molecules in the gaseous medium, so that the molecules in a plane perpendicular to the direction of propagation can serve as a marker for the program trace. But if the light travels a great distance, then different planes (an infinite number of them) will correspond to different times in the execution trace. Which plane, then, should be selected to correspond to the actual conscious experience? And what if someone chooses a plane of observation that is oblique to the direction of propagation?
It is pretty clear that the role of the observer is pivotal in interpreting the execution of the program. And we cannot accept the argument that qualia are observer-dependent: at a given physical time I feel either hot or cold, not something that depends on who is reading off my mental states (as in the thought experiment above). If there is contention over that, then we are dealing with a different problem. Which is good, since I believe the problem we are supposed to be dealing with is itself something different.
I find the whole assertion of strong AI (as outlined above) rather silly and without merit.