By "baud rate" I mean the speed at which information is exchanged among (say) neurons in a brain, or among circuit elements in a computer, or among people in a society. Technically it is not a synonym for "bits per second" or bps, but for my purposes here it may as well be. Copyright © 1989, Peter Suber.

Mind and Baud Rate
Peter Suber, Philosophy Department, Earlham College

Section 1. Does Mind Depend on Baud Rate?

1. Suppose that mind depends solely on the brain, the stimuli to the brain, and life-support to the brain. That is, suppose that we can explain mind fully, in principle, without appeal to an immaterial soul.

2. Information is exchanged in the brain very quickly, at the speed of electricity along the axons (with slight resistance), and at the speed of chemical reaction at the synapses.

3. In addition, these exchanges occur in parallel. There is no "central processing unit", or transcendental unity of apperception, or pineal gland, through which all "wires" pass. Parallelism does not affect the rate of exchange between any two neurons. But a parallel processor in which each binary exchange is slow (like our brain) can vastly outpace a serial processor in which each binary exchange is fast (like a common computer).
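
To make the comparison vivid, here is a toy calculation in Python. Every figure in it is an assumption chosen for illustration (a brain of 10^11 neurons each making about 100 slow exchanges per second, a serial processor making 10^9 fast exchanges per second), not a measurement.

    # Toy comparison (illustrative figures only): aggregate exchange rate
    # of a slow massively parallel system versus a fast serial one.

    NEURONS = 1e11           # assumed number of parallel units
    RATE_PER_NEURON = 100    # assumed slow exchanges per second per unit
    SERIAL_RATE = 1e9        # assumed fast exchanges per second, one unit

    parallel_total = NEURONS * RATE_PER_NEURON
    print(f"parallel: {parallel_total:.0e} exchanges/s")
    print(f"serial:   {SERIAL_RATE:.0e} exchanges/s")
    print(f"the slow parallel system wins by {parallel_total / SERIAL_RATE:.0f}x")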

4. We could in principle recreate the exchange of information that occurs in the brain in many other media and at a much slower rate. Instead of sending a pulse of a certain voltage from one neuron to another, at nearly the speed of light, we could carry a counter like a penny or marble from one pail to another. The pails could be light years apart and the exchanges could occur at pedestrian speeds.
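
The point can be sketched in code. In the toy simulation below (all names and figures hypothetical), the delay per exchange is mere bookkeeping: the final configuration of the pails is identical whether each move takes a nanosecond or a century.

    # Sketch of the "pail imitation". Each step moves a counter from one
    # pail to another; the transport delay never enters the computation,
    # so the resulting configuration is the same at any speed.

    def run_exchanges(pails, moves, seconds_per_move):
        """Apply moves (src, dst) to pails; return final state and elapsed time."""
        elapsed = 0.0
        for src, dst in moves:
            pails[src] -= 1
            pails[dst] += 1
            elapsed += seconds_per_move
        return pails, elapsed

    moves = [("A", "B"), ("B", "C"), ("A", "C")]
    fast, t_fast = run_exchanges({"A": 5, "B": 0, "C": 0}, moves, 1e-9)
    slow, t_slow = run_exchanges({"A": 5, "B": 0, "C": 0}, moves, 3.15e9)  # ~a century per move

    assert fast == slow   # identical final configuration at any speed
    print(fast, f"({t_fast:.1e}s versus {t_slow:.1e}s)")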

5. We could also imagine that the speeds of electricity and chemical reaction, or time itself, were artificially slowed. Then we could keep every other aspect of the brain constant (just in case the "pail imitation" accidentally omitted something essential).

6. If a contraption creates mind as an emergent property when the exchange of information within it takes place near the speed of light, then would it also create mind (perhaps with interesting changes) at much slower speeds? If so, what changes would we expect?

7. If it would create mind, then perhaps many things are minds; for example, an anthill, the ecosystem of a swamp, the system of nature as a whole, the universe. The U.S. mail? The circulation of money or water?

8. If it would not create mind, are we prepared to say that a certain baud rate is a necessary condition of mind? Which baud rate? Is it possible for a contraption to have all the sufficient conditions of mind except baud rate? Can the failure of baud rate alone prevent the emergence of mind?

9. We know many cases of emergence in which speed makes no difference. A fast and slow processor can each run the same software and make it do the same work. (Must we add: so far? Would this remain the case with software that created mind as an emergent property?)

10. We also know many cases of emergence in which speed does make a difference. An inflated balloon will shrink if the movement of the air molecules inside it is slowed by cooling. (Air pressure is an emergent property because it belongs to collections of air molecules and not to any individual air molecule.)

11. Take the slowed brain again. What if we could turn down the speed of electricity and chemical reaction with a knob? Would we reach a point at which "mind" shut off? Or would "mind" dim more or less continuously? Metaphors. Is a mind-contraption like an internal combustion engine that huffs, chokes, and dies after a definite point when the firing rate is gradually decreased? Or is it like a light bulb that dims continuously?

12. If mind-contraptions other than brains are even possible, would it be possible to make both kinds? One kind would be sensitive to baud rate and shut down below a certain threshold. Another kind would undergo what Kant called elanguescence.

13. If turning down baud rate as if with a knob would at some point "turn off" mind, then is the critical baud rate absolute (another fundamental constant of nature), or relative to something physical in the substratum, such as the distance between nodes of exchange?

14. Can we imagine consciousness whose computational substratum exchanged information as slowly as the pail imitation? Why is this difficult?

15. This is not a rhetorical question. Is it difficult to imagine slow consciousness because it is difficult to imagine any forms of consciousness different from our own? Or is it difficult "from the nature" of slow consciousness?

16. Suppose the time it takes for a pulse to travel from one neuron to another in our brains is n seconds. I don't know what it is in fact. (Here n will be a small fraction like 0.00001.) If we want to take the brain's parallelism into account, we can divide n by 100 billion or so. Now, when we are conscious, why don't we notice n-sized gaps or jerks of sentience? This question is trickier than it looks.
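
The arithmetic, using the essay's own illustrative values:

    # With the essay's figures: n seconds per neuron-to-neuron pulse,
    # averaged across "100 billion or so" neurons acting in parallel.

    n = 0.00001              # seconds per pulse (the essay's example value)
    parallel_units = 100e9   # "100 billion or so"

    effective_gap = n / parallel_units
    print(f"average gap between events somewhere in the system: {effective_gap:.1e} s")
    # ~1e-16 s: far below any threshold at which we could notice "jerks"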

17. If our brain were perfectly simulated on a serial processor (by "perfectly" I mean in every way but for its parallelism), then would we be more capable of noticing n-sized gaps or jerks of sentience? At first the answer seems to be yes: we'd be closer to noticing them, for the same reason that it is easier to see through a line of marching soldiers than a roomful of randomly milling people. In a large parallel processor, the gaps of one processor are sure to be swamped by the simultaneous actions of other processors. Hence, a mind that emerged from their collective activity would not lurch. However, even if our brain were mapped to a serial processor to prevent this swamping effect, the minuteness of n might still prevent our actually noticing any discontinuous jerks in our own stream of consciousness.

But soon we wonder whether this is a category mistake. We might never notice the gaps or jerks because the intelligence that emerges from computation is constituted by the computations. I cannot feel with my fingertip the relatively huge gap between the nucleus and orbiting electrons in any of the atoms in my finger. Perhaps I don't notice the delays of neural mail in my consciousness any more than, when thinking thoughts of whose origin I am unaware, I notice the centuries of struggle and development that put those ideas "in the air" in my culture.

18. Are the many computations, taking n seconds each, like the many pings of air molecules against the inside of a balloon? If so (if this is our model of emergence), then as n increased, that is, as each exchange slowed, we'd notice macroscopic effects; the balloon would deflate; consciousness would dim, darken, elanguesce, abate, subside, fade, wither. Or are the many computations more like the many years it took certain ideas to become public in just the way necessary for me to encounter them? Or are they more like the many words of a book, which create a virtual world from their information or mutual relations, irrespective of the speed at which they are read?

19. Even with a book, can we really say that its words create a virtual world irrespective of readers and the speed at which they read? If we read one word per year, we will miss the communication, the point, the world. But this seems precisely because our brains are accustomed to a faster baud rate. Our slow reading experiment does not decide the question whether a slow brain could take in one word per year and have the same experience we have taking in one word every 0.1 seconds. For the same reason, we learn foreign languages better by immersion than in a classroom; the important difference is baud rate. The language understanding that results is certainly emergent, and is affected by baud rate, but this does not decide the relevance of baud rate to understanding überhaupt; immersion is simply a baud rate better suited to our brains than classroom learning. Other conceivable minds might require a slower rate of information flow, or a faster one. Similarly, playing with blocks is only fun if done with some continuity and speed; if we were given a new block to add to our growing tower only once a month, the fun would disappear. But there is no reason to draw conclusions for mind in general from this, or for understanding in general, or language learning in general, or fun in general; it may only reveal something about our brains and their native baud rate tolerances.

20. If we could not notice the n-sized gaps or jerks, even in principle, then it seems that mind could subsist as well at any baud rate. That is, even if n were very large rather than very small, the mind that emerged would not notice gaps —no more than fingers of any size could feel the jagged cavity between nucleus and electron shell in any of their own atoms.

21. We don't notice the frames of a motion picture. But this is a result of a posteriori engineering, not constitutive essence. We ran experiments and discovered the limits of the eye and brain for noticing changes of short duration, and we made the interval from frame to frame a little shorter than that. But does it make sense to ask whether the virtual people in the virtual world of the film notice the temporal gap between frames when the film is projected? If we showed one frame per second, we would notice gaps, but a slower eye-brain machine would not. If we say that "the phenomenological motion picture" is the reconstitution of the images in the consciousness of some brain running at a speed making it impossible to perceive the gaps between frames, then we have an interesting model or analog of consciousness: blind to its constitutive computational gaps by nature, or by construction, as it were a priori.

22. Would it make sense to run this experiment on an intelligent machine: to turn it off for n seconds, then on for n seconds, and so on for some period, and then to ask whether it noticed anything unusual? We could vary n and run the experiment again and again, hoping perhaps to get an affirmative reply at some point when n grew "sufficiently large". Or is this like expecting to experience death? (Epicurus: where death is, I am not; where I am, death is not.) We know when we have been unconscious because we are not restored to our previous state immediately upon awakening, but to a characteristic "waking up" state.
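
A minimal sketch of why the machine might have nothing to report, assuming (hypothetically) that its only clock is its own internal state: pausing and resuming the process preserves that state exactly, so nothing remains from which a gap could be inferred.

    import time

    # Sketch of the on/off experiment: an agent whose only "clock" is its
    # own state. Pausing the loop (the machine being "off") leaves no
    # trace in that state, so the agent has no internal evidence of the gap.

    class Agent:
        def __init__(self):
            self.internal_ticks = 0
        def step(self):
            self.internal_ticks += 1

    agent = Agent()
    for i in range(10):
        agent.step()
        if i == 4:
            time.sleep(1.0)   # the machine is "off" for a while

    # 10 internal ticks, and no record of the pause: nothing unusual to report.
    print(agent.internal_ticks)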

23. Even if we grant that mind emerges from a syntactical foundation, or from computation, does it emerge from information and its exchange alone? Or is baud rate essential? Or, is intelligent software intelligent when it is recorded on a disk sitting in a drawer? Or only when it is running? Or only when it is running at a certain speed?

24. In John Searle's Chinese room, the baud rate is much slower than in a brain or computer. I think this fact alone gives great intuitive appeal to his conclusion that the room is not an intelligence. How could a post office be intelligent? Or a rain storm? But if baud rate is immaterial, then much of the appeal of Searle's picture vanishes. His argument would mislead by changing an irrelevant variable (namely, baud rate), and, worse, by pandering to the very intuition most likely to be misled.

25. If baud rate is generally immaterial, then what about the limit: zero baud rate?

26. We know from the history of the calculus, the infinitesimal, the derivative, and the limit that human beings accepted the concept of instantaneous velocity only with resistance. More precisely, we used it centuries before we understood it, and we resisted understanding. It is as if our intuitions were cultured against it (a fact Searle exploits). We should expect trouble, then, with the concept of instantaneous sentience. I mean this strictly as sentience at a dimensionless point of time. (See citations below.) If instantaneous sentience is possible, does it follow that an intelligent program recorded on a disk sitting in a drawer is an example of it? Or would we come closer if we ran the program, halted processing at some point, and recorded the state of memory? (No matter what we found, once frozen it would be a digital pattern similar in relevant respects to specks of pepper on a sheet of paper, or the digits of a large binary number. Could either of these taken synchronically be intelligent?)

Whether or not running the program is necessary, we have pointed to a large number of bits embodied either in RAM or in a magnetic medium. Is the embodiment necessary to instantaneous sentience? Or could we take the information independently of the embodiment? If the latter, then some large numbers already contain the necessary information. Are we ready to say that some large numbers possess instantaneous sentience?
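
The claim that the frozen state is "a large binary number" can be made quite literal. A sketch, in which the sample state is merely a stand-in for the memory of a halted intelligent program:

    import pickle

    # Sketch: any frozen program state, serialized, is literally one
    # (large) nonnegative integer. The sample state below is a stand-in
    # for a halted intelligent program's memory dump.

    state = {"goal": "translate", "stack": [3, 1, 4], "pc": 1597}
    frozen = pickle.dumps(state)                  # bytes: the "memory dump"
    as_number = int.from_bytes(frozen, "big")     # the same dump, as one integer

    print(f"the dump is a {as_number.bit_length()}-bit integer")

    # And back again: the number alone suffices to reconstruct the state.
    restored = pickle.loads(as_number.to_bytes((as_number.bit_length() + 7) // 8, "big"))
    assert restored == state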

27. A disk in a drawer may contain an intelligent program that is not running. Is that instantaneous sentience? What about a memory-dump taken during a run of an intelligent program?

28. The disk file or the memory dump contains an uninterpreted array of bits —which we can interpret as some very large binary number. If "sentience" means at least that some semantics have been added to (or emerged from) the pure syntax of the bits, then should we say that "running" the program causes the semantic dimension to arise? If so, must we have a baud rate greater than zero for sentience?

29. Or is the intelligent program when not running as semantic or sentient as it is when running? Can we make sense of this notion?

30. Or is sentience as we experience it less semantic than we thought?

31. If experience is inherently uninterpreted (syntactical), then a phenomenological reduction (if it could live up to its description in Husserl) should reveal this. Can we by a kind of phenomenological reduction cease interpreting the syntax of our experience? Wipe off the semantic suds? Desist meaning?

32. It often feels as if we can, although it is easy to mistake the evidence. Blankness of mind, or loss of the ordinary semantic buffer of experience, is certainly attainable. But we should not mistake this for direct contact with the uninterpreted bits of cognitive computation at the lowest level.

33. Is it a category mistake to expect to reach this kind of syntactic experience? What or who would be the subject of the experience? Ex hypothesi, it would be a knower emergent from the syntax; if it could turn on itself and face its bits unfiltered by interpretation and unleavened with semantics, would that not require that it cease to be a knower emergent from the syntax? The very idea of a knower emergent from syntax is that of semantics emergent from syntax. The semantics may think in a rarefied moment that it experiences semantic-free syntax, but that impression is belied by the fact of its claim to any experience at all.

34. The experience of uninterpreted syntax must be a non-experience by a non-knower. If "we" can attain it, it must be by ceasing to be the kinds of things that can attain anything, and to become nothing more than the bits that we are, instead of emergent beings reflecting on the bits that they are.

35. In Merleau-Ponty's version of phenomenology, we get confirmation of this line of thought: when the reduction is performed to the extent that it can be performed (which is less than the completeness Husserl thought was possible), then we still find semantics "always already" there.

36. If experience is inherently uninterpreted (syntactical), then we should be able to shift from one interpretation to another as we do with the bits composing a formal system under the Löwenheim-Skolem theorem. Is this what Kuhn meant by a paradigm shift?
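
Nothing as strong as Löwenheim-Skolem, but a loose small-scale analogue of such reinterpretation: one and the same string of bits read under three decoding schemes.

    import struct

    # A loose analogue of reinterpreting fixed syntax: the same 4 bytes
    # under three decoding schemes. The bits never change; only the
    # interpretation does.

    bits = b"Mind"

    print(bits.decode("ascii"))           # as text: 'Mind'
    print(int.from_bytes(bits, "big"))    # as an integer
    print(struct.unpack(">f", bits)[0])   # as a 32-bit float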

37. If experience is inherently uninterpreted (syntactical), then how does it ever become interpreted (semantical)? If semantics "emerges" from syntax, does it follow that we could undo this emergence in consciousness and "view" the uninterpreted syntax of experience? Does this mean to view it in consciousness, hence semantically?

38. If experience is inherently uninterpreted (syntactical), then it seems that an intelligent program could be as mindlike when not running as when running. The concept of instantaneous sentience —zero baud rate sentience— would at least make sense. Moreover, the optional connection between the syntactical substratum of sentience and the semantic color of experience would inhere just as much in the mutual relations of bits as in the neural switches of the brain. Large numbers may be as intelligent as minds.

Section 2. Do the Boundaries of the Self Depend on Baud Rate?

39. I once heard Douglas Hofstadter say something like the following. One neuron communicates information to other neurons in the same brain very quickly. But a neuron can also communicate information to another brain, much more slowly, through behavior and language. So minds or selves are not distinguished from one another by information exchange among neurons, but by baud rate.

40. This was a casual remark, not a worked out theory. But let us take it seriously. Let us take the position to be this: the baud rate of communication is the only difference, from the standpoint of one of my neurons, between one of my neurons and one of yours. Minds can be made of sets of communicating neurons of any cardinality; the only difficulty with very large sets is preserving a sufficient baud rate.

We could reverse the emphasis and summarize the position this way: the fact that we seem to be individuated into different selves, despite the exchange of information among our neurons, is an argument that baud rate does matter for mind, or at least for selfhood. The unification of the neurons in one brain is due to the uniformly high baud rate with which they can communicate with one another; unities of neurons are distinct from one another if the communication between them is markedly slower than the communication within them that constitutes them as unities.

41. One limitation of this view is that one of my neurons can communicate with other very particular neurons in my brain, but can only communicate with other brains in a general way. Although definite neurons in the other brain will be fired by hearing and interpreting my communication, these may differ from brain to brain, or within the same brain from time to time. Does this matter? If I get my message to your eyes or ears, and if you are well-"attuned" (programmed), can I have an effect of comparable particularity?

42. The individuality of the self is soluble (or dissoluble). As the baud rate between brains is increased by technical innovations, selfhood will expand.

43. Similarly, the scope of selfhood could expand not only to include other minds but to include artifacts, and in principle large parts of the natural world. (See Olaf Stapledon's Star Maker; Hegel's absolute knowing.)

44. Another way to put it. For the view that mind emerges from computation, we require several things in the substratum: (1) a very large number of bits of information (there may even be a threshold below which there is simply insufficient differentiation to support sentience), and (2) certain definite relations among the bits, that is, one program rather than another, these data rather than those. This view adds a third element: (3) in the processing or running of the program, a sufficient speed of calculation or bit-switching.

45. Note that the third element above would never be needed for the similar objective of creating semantics from syntax. The first two elements alone may not suffice for this either (see Löwenheim-Skolem); but if mind can emerge from computation, then semantics can emerge from syntax by these two alone. All that Löwenheim-Skolem shows is that some particular semantics (meaning) cannot be uniquely determined by syntax alone, even if the number of bits is infinite. But this only means that some large ambiguities are inevitable; it does not mean that semantics cannot emerge from syntax. (More detail: given the first two, that is, given enough bits and particular relations among them, then we have all the elements necessary to create structures analogous to any ideatum whatsoever; that is the semantic relationship, even if it is not strong enough to uniquely determine any particular ideatum.)

46. If we do infer the third element from the first and the second, then we may add a fourth: (4) there must be a limit to the size of the substratum. The reason is simply that, after a point, size limits baud rate. And we are not talking about sizes in the light-year range. Already chip design is constrained by the need to give electrons a shorter path than the half-inch or so they had in the previous generation of chips. This requirement is in tension with the first: if there must be a very large number of bits, there must be a relatively sizeable substratum to hold them. While in principle bits can be stored in virtually no space at all, current technologies for reading them off require sizes greater than those at which, for example, quantum effects become significant.
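
The size constraint is light-speed arithmetic. A sketch using the essay's half-inch figure:

    # Light-speed arithmetic behind "size limits baud rate": a signal
    # cannot cross the chip faster than light crosses the same distance.

    C = 3.0e8        # speed of light, m/s
    path = 0.0127    # half an inch, in meters (the essay's figure)

    transit = path / C        # one-way signal time across the path
    max_rate = 1 / transit    # ceiling on exchanges per second over that path

    print(f"transit time: {transit:.2e} s")
    print(f"rate ceiling: {max_rate:.2e} exchanges/s")   # roughly 2.4e10 per second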

Section 3. Internal and External Baud Rates

47. Daniel Dennett argues that baud rate is essential for intelligence because it is essential for natural selection. (See his The Intentional Stance, MIT Press, 1987, Chapter 10, "Fast Thinking".)

Speed...is 'of the essence' for intelligence. If you can't figure out the relevant portions of the changing environment fast enough to fend for yourself, you are not practically intelligent, however complex you are. Of course all this shows is that relative speed is important. In a universe in which the environmental events that mattered unfolded a hundred times slower, an intelligence could slow down by the same factor without any loss; but transported back to our universe it would be worse than moronic. (Dennett, pp. 326-27)
It's not that sheer speed ("intrinsic" speed?) above some critical level creates some mysterious emergent effect, but that relative speed is crucial in enabling the right sorts of environment-organism sequences of interaction to occur. (Dennett, p. 332)

48. For Dennett, intelligence is a relation between organism and environment, not between neurons or bits. This demystification is at first attractive. But there must be something like intelligence emergent from neurons or bits for an organism to act successfully in its environment. Call this the "internal baud rate" as against Dennett's "external baud rate".

49. I can best get at my point by quoting Dennett again. In his article on "Consciousness" in The Oxford Companion to the Mind (ed. Richard L. Gregory, Oxford University Press, 1987, p. 161) he says, "[f]rom the inside, consciousness seems to be an all-or-nothing phenomenon —an inner light is either on or off." What I am calling internal baud rate is that which (with other conditions) suffices to turn the light on. The external baud rate is that which (with other conditions) suffices to help a creature survive in its changing environment.

50. Even in "Fast Thinking," Dennett does not deny this. He is more concerned there to establish the importance of the external baud rate than to deny any role to an internal baud rate. He is most concerned to measure intelligence by survival and prosperity, in Darwinian terms, not by bit-switching or computation —even if natural selection takes place in part through the medium of bit-switching and computation. His argument is that if any baud rate is essential to intelligence, it is the baud rate set by the rate of change in the environment at large, not the baud rate (if any) that finally trips over a threshold and congeals intelligence from circuit-running.

51. If Dennett is right, then environments of much slower external baud rates could harbor intelligent organisms of much slower internal baud rates. That is, there is no intrinsic minimum internal baud rate.

52. If there is no intrinsic minimum internal baud rate, then the exchange of pennies among pails at inter-galactic distances at pedestrian speeds can be an intelligence.

It is important to note a sense in which Dennett does deny this. He cites convincing research that the parallelism of the brain makes possible speeds that no serial processor could ever match ("Fast Thinking," 326ff.). It does not follow that serial processors cannot be intelligent, only that they cannot match the intelligence of the brain. Dennett also admits that human-made computers need not be serial processors, and that brain-like massive parallel processing might be attainable in hardware (ibid., 327).

53. Hence, if there is no intrinsic minimum internal baud rate for intelligence, then most of the natural world becomes eligible for intelligence. It exchanges bits among its parts continually. A bit need not, of course, be anything nearly so tame as a pulse of voltage of some constant magnitude. "Any difference that makes a difference" is a bit of information, in Gregory Bateson's phrase. Every physical object and process is subject to an infinite number of interpretations that make bits out of its aspects, or that pick out differences that should make a difference. (See my "What is Software?" Journal of Speculative Philosophy, 2, 2 (1988) 89-119, at pp. 91, 101-03.)

54. In addition to the sufficiency of bit-level differentiation and exchange, of course, intelligence would require some particular pattern of exchange. The number of patterns that support intelligence is undoubtedly vanishingly small next to the number of possible patterns. What prevents us from becoming easy panpsychics is not the nature of intelligence but its unlikelihood. If most bit-patterns supported intelligence as an emergent property, that is, if most computer files were artificially intelligent, then belief in panpsychism would be more reasonable than belief in life. This panpsychism would point to collective, not distributive, mentality; intelligence emergent from collections of unintelligent parts, not traceable to intelligent individual monads or homunculi. Human intelligence would be more a node in a network of slower intelligence than an island of sentience in a dark universe.

55. If we generated arbitrarily many random sprays of bits and packaged them as computer files, the chances are very good that none would be intelligent, or even run. This alone should temper our panpsychism. The space of possible bit-patterns is so large compared to the subset that would support intelligence that there is no a priori reason to suppose that any natural system embodies one of the patterns in the magical set. In fact there is a priori reason to think that spontaneous natural intelligence is stupendously improbable.
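
The improbability is easy to sample. The sketch below generates random byte strings and counts how many even parse as Python source, a test far weaker than running, which is in turn far weaker than intelligence; expect zero or nearly zero.

    import random

    # Sketch: how many random bit-sprays even parse as Python source?
    # Parsing is a vastly weaker test than running, and running is a
    # vastly weaker test than intelligence.

    random.seed(0)
    valid = 0
    trials = 10_000
    for _ in range(trials):
        blob = bytes(random.randrange(256) for _ in range(64))
        try:
            compile(blob.decode("utf-8"), "<random>", "exec")
            valid += 1
        except (UnicodeDecodeError, SyntaxError, ValueError):
            continue

    print(f"{valid} of {trials} random 64-byte sprays even parse")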

56. But are there a posteriori reasons to believe in spontaneous natural intelligence or in the constructibility of artificial intelligence? We embody a pattern in the magical set. (Like Descartes' cogito, even doubt of this proposition confirms it.) If there is no intrinsic minimal internal baud rate for intelligence, then there is ground —not for hope, but for a research project. We must simply take care not to define what we are looking for in such a way that we never look where it might be found, or so that we culture ourselves into incapacity to recognize it when we stumble upon it. AI has more to fear from metaphysical provincialism than from stupendous improbabilities.


5/28/2000 addition. There are several germane quotations on the relevance of baud rate to mind in George B. Dyson, Darwin Among the Machines: The Evolution of Global Intelligence, Perseus Books, 1997, at pp. 203, 204, 217, and 227. In Chapter 5, Dyson obscurely draws an analogy between mind and turbulence which he uses to ground an inference something like this: speed of water flow matters decisively for turbulence, so speed of information flow should matter decisively for mind.

Peter Suber, Department of Philosophy, Earlham College, Richmond, Indiana, 47374, U.S.A.
peters@earlham.edu. Copyright © 1989, Peter Suber.