In the Cosmology Musing I established how holograms are stored in a repository by the natural, rule-driven functions of a wormhole. I still have the daunting task of determining how to use these stored holograms to create a conscious being in the 'real world' who can have any perception of this wonderful creation and can evolve to contribute more complex memes to the instantiation of future objects. Once I have created a conscious observer, it will be child's play to construct instantiations of planets, stars, galaxies, dark matter, and dark energy in which our being will feel comfortable enough to relax and get about the business of bearing meme fruit.
If I could have any hope of explaining this mutual existence pact between reality and its conscious observers it would be to say that the 'universe' can be thought of as an information processor. It takes information regarding how things are now and produces information delineating how things will be at the next now, and the now after that. About a third of a second later our senses become aware of such processing by detecting how the physical environment changes over time. But the physical environment itself is emergent; it arises from the fundamental ingredient, information, and evolves according to the fundamental rules, the laws of physics.
I recognize that the majority of my readers are believers in what scientists regard as the theological creation myth, and I am trying to be deferential to that belief system because it provides great emotional comfort and enables important social control structures. However, there have been many, many demonstrations that conscious experiences depend upon the brain's special features that have evolved on our planet by way of Darwinian selection. This functionality of the brain fits comfortably into the original contract between physics and biology and supplements standard physiochemical processes with cybernetic engineering.
This 'bilateral contract' between physics and biology essentially provides feedback systems that respect physical laws, made out of components that obey them. Like any other fundamental entity (the force of gravity, the strong and weak nuclear forces), consciousness is a given, and the physics lies in working out its consequences. So if the theory of consciousness forms part of the fundamental laws of physics, we no longer need to ask how consciousness came about, any more than physicists generally ask how the 'Big Bang' came about.
As a conscious being I reside in a metaphor for the complex interconnected networks formed by relationships between objects in a system—including social networks, the interactions of particles, and the 'symbols' that stand for ideas in a brain or intelligent computer. This consciousness system has free will because free will is a component of consciousness. As a being defined by the historical data of my experience within a reality or a sort of Memetic Matryoshka (MM), I am always a potentiality within the universe. This is more accurate than saying that I am an independent piece of the universe -- I am not independent -- I am one with the universe -- I have the potential to be independent, to have an independent freewill, within a reality within the universe. My independent free will requires a reality to become itself. Otherwise I am simply accumulated data, a potentiality within the universe.
From the universe's historical database that I have called the Meme Repository, a historical individual like me, or a sequence of historical characters that have progressed through multiple lifetimes, like my Memetic Matryoshka, is selected. With this information, the universe adds a freewill -- which means it inserts this MM into a virtual reality where it can make free choices within an evolving decision space appropriate to my ability/quality/awareness. I am going to use the old term "Ensoulment" to describe this somewhat unfathomable blending of mechanics, metaphysics, and embryology, which places a previously homeless collection of information states into a clump of cells quietly multiplying on the wall of my mother's uterus, where they instantiate as the software of my embryonic brain.
Remember that consciousness itself is the only thing that is fundamental and that everything else is virtual. This 'everything else' includes all the structured realities where experiential interaction takes place. All experiential realities are virtual. Consciousness creates the structure. The structure defines the reality, and the reality creates the possibility for an interactive experience between subsets of consciousness. The quality of the subset of consciousness (as specified by its history) and the structural bounds defining the reality together determine the available decision space and the nature of possible interactions.
The historical record of these subsets or entities grows or evolves as choices are made and their intent is expressed. What is gained by an instantiation of consciousness participating in a virtual reality is a new historical record that accumulates quality (reduces entropy) as it engages in exercising its freewill intent. My MM can be considered as an individual subset of consciousness with a history and could be 'bubbled up' or be chosen by the universe to engage in a virtual reality appropriate to the evolutionary needs. However, that assumption of separateness seems little more than a habit of pre-modernist thinking when one considers that I am a representative of my MM. I participate in this virtual reality as a manifestation of my MM, and as such I bring with me all the quality and history that my MM has to offer at the time. I am, in more technical terms, a specific instance of this MM that is restricted to abide by the current rule-set. As I experience and collapse probability waves in this virtual reality, my MM collects the data and integrates it in real time.
If my MM is simply collecting data to be processed by the universe, it is no more than a history file in the process of having data uploaded to it. If my MM is making freewill decisions and choices within one or more virtual realities that subsume me, then I have two or more tracks of evolution running at the same time that may influence each other. For example, two separate experience packets in reality at the same time (that may or may not interact) plus the MM actively interacting with one or each of them would constitute three tracks of evolution running at the same time. If that MM is making freewill decisions and choices within one or more virtual realities that are independent of the reality, then it has two or more tracks of evolution running at the same time that do not directly interact or influence each other. For example, two separate experience packets, each in a separate reality, plus the MM engaged in some other virtual reality that has nothing to do with either reality packet would constitute three independent tracks of simultaneous evolution.
Just before leaving this section of musings I would like to say that my being may well have already dispensed with the need for the sloppiness of biological materiality. The odds overwhelmingly favor the conclusion that you and I and everyone else are living within a simulation, perhaps one created by future (Omega Point) historians with a fascination for what life was like at the beginning of the twenty-first century. How then are we to trust anything, including the very reasoning that led us to the conclusion? Our confidence in a great many things might diminish. Will the sun rise tomorrow? Maybe, as long as whoever is running the simulation doesn't pull the plug. Are all of our memories trustworthy? They seem so, but whoever is at the keyboard may have a penchant for adjusting them from time to time. Nevertheless, the conclusion does not fully sever our grasp on the true underlying reality. Even if we believe that we are all on a 'holo-deck', we can still identify one feature that the underlying reality definitely possesses: it allows for realistic simulations. After all, according to our belief, we are in one. The unbridled skepticism generated by the suspicion that we're simulated aligns with that very knowledge and so fails to undermine it.
It is likely that if a far-future generation decided to build a twenty-first-century holo-deck, the simulation would be developed by mimicking the strategy apparently followed by nature in constructing the original reality: starting with a single heterotic superstring and its set of fundamental equations. Such a simulator would take as input a mathematical theory of matter and the fundamental forces and a choice of 'initial conditions'; the computer would then evolve everything forward in time, thereby avoiding the issue of melding a patchwork of biochemistry, physiology, and psychology. If any inconsistencies began to develop, the programmer of the simulator might need to reset the program and erase the inhabitants' memory of the anomalies. A simulated reality might reveal itself through some glitches and irregularities. However, these inconsistencies, anomalies, unanswered questions, and stalled programs could easily be written off as failings of the intellectual community, as they are in our reality. The sensible interpretation of such evidence would be that we scientific types would need to work harder and be more creative in seeking explanations.
If and when we do generate simulated worlds, with apparently sentient inhabitants, an essential question will arise: is it reasonable to believe that we occupy a rarified place in the history of scientific technological development -- that we have become the first creators of sentient simulations? We may have, but if we go with the odds, we must consider alternative explanations that, in the grand scheme of things, don't require us to be so extraordinary. And there is an explanation that fits the bill. If we can create a garden-variety simulator, there is probably not just one such simulation out there but a swarming ocean of simulators. While the simulation we've created might be a landmark feat in the limited domain to which we have access, it's nothing special, having been achieved a gazillion times over. Once we accept that idea, we can grow more comfortable with the idea that we are in a simulation, since that is the status of the vast majority of sentient beings in the 'Emulated Multiverse'.
The Holonomic Brain Theory posits a model of cognitive function as being guided by a matrix of neurological wave interference patterns situated temporally between holographic Gestalt perception and discrete, affective, quantum vectors derived from reward anticipation potentials. In particular, the fact that information about an image point is distributed throughout the hologram, such that each piece of the hologram contains some information about the entire image, seemed suggestive about how the brain could encode memories. This holographic idea led to the coining of the term 'holo-nomic' to describe the idea in wider contexts than just holograms.
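The distributed-storage property described above can be illustrated with a toy computation. In a discrete Fourier transform (the mathematics underlying holography), every output coefficient depends on every input sample, so discarding part of the transform blurs the whole signal rather than deleting any one part of it. The sketch below is an illustration of that property only, not a model of the brain; the signal values and sizes are arbitrary.

```python
import cmath

def dft(signal):
    """Forward DFT: every coefficient mixes in every sample."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def idft(coeffs):
    """Inverse DFT, returning the real part of each reconstructed sample."""
    n = len(coeffs)
    return [sum(coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

signal = [0, 1, 2, 3, 4, 3, 2, 1]
coeffs = dft(signal)

# "Break the hologram": throw away the upper half of the coefficients.
kept = coeffs[:4] + [0] * 4
blurred = idft(kept)

# Every sample survives in degraded form; none is simply missing.
print([round(x, 2) for x in blurred])
```

Like a shattered hologram, the truncated transform still reconstructs the entire signal, only at lower fidelity everywhere.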
The mind, in short, works on the data it receives very much as a sculptor works on his block of stone. In a sense the statue stood there from eternity. But there were a thousand different probabilities beside it, and the sculptor alone is to thank for having extricated this one from the rest. Just so the world of each of us, howsoever different our several views of it may be, all lay in the primordial chaos of probabilities, which gave the mere matter to the thought of all of us indifferently. We may, if we like, by our reasonings unwind things back to that black and jointless continuity of space and moving clouds of swarming wavicles which science calls the only real world.
In a Memetic Matryoshka, perception is determined by features that belong to the visual display as a whole, and not just the local features initially analysed at the sensory surface, in the cells of the retina for example. Here, one finds cells that are exquisitely sensitive to what happens at just one point in visual (or tactile, auditory, etc.) space, but whose interaction with cells sensitive to other points in space is limited to very close neighbors. And, as the pathways responsible for sensory analysis have been traced in ever-increasing detail from the periphery to the center, a conventional neural basis for holistic properties of perception has remained elusive, and a number of other theories have emerged which model the perception process using field effects now commonplace throughout physics, like the gravitational field, electromagnetic fields, or the most mysterious of all fields, quantum mechanical entanglement. The implication of these theories is that evolution has created biological mechanisms that exploit quantum processes. One of these may be quantum calculation in the brain. There is a fairly long list of proposed quantum structures in the brain at the level of the neuron or smaller: receptor proteins, membrane lipids, presynaptic vesicle release structures, gap junctions, neurotransmitter molecules, calcium ions, DNA, RNA, and microtubules.
Now that I have tried to establish the holonomic theory, all that I need is to develop a three-dimensional prototype in the form of me as our first Memetic Matryoshka, equip him with an Initial Program Load (IPL) made up of a holonomic brain simulator and a few experiential memes, and we have a shiny new probability-wave collapser ready to begin making contributions to the meme repository for our local reality. Both the memes of holography and anamorphosis are relevant, the latter skewing our illusory perspective. Viewing from a certain angle can create impossible images -- image warping, reflected aberrations. In post-modern relativism, there are holes in virtually every point of view.
Word choice and metaphor allow for the emergence of new memes, the replacement of memes, and the death of memes, via a concept that is called 'Conceptual Slippage'. Memes, like genes, only 'code for' a norm or mean of reaction correlating to the memetic selection bias. Neither memes nor genes determine all aspects of the properties of the entities they constitute. Cultural inheritance is not particularly creative, so most 'novelty' is merely the recombination or recycling of pre-existing memes in novel ways. Human life is all symbols, visible signs of invisible reality, intuitive ideas that cannot yet be formulated in any other or better way. We live in a mind soup of psycho-confabulation. Symbols wield the power of pattern recognition and association. Few objectively gain distance from the archetypal content of mind and emotions. Information results and arises from this innate structure.
We will filter our perceptions to screen out (deny) threatening information we cannot deal with. It is non-logical or 'magical thinking' to take a symbol to be its referent or an analogy to represent an identity. Magical thinking comes from an instinctual search and recognition of patterns, and regards symbols not as representations but as handles attached firmly to real-life objects and outcomes. Out of context, symbols are ineffectual. Evocative 'power' is one of the attractive aspects of the meme concept, which is itself a symbol, a signifier of meaning that is context dependent. There are three types of symbols:
- Symbols that reflect intrinsic mental states;
- Symbols that stand in for extrinsic (actual or objective) conditions or objects, and
- Symbols that stand in relation to cultural artefacts, or constructs, or memes.
Here the symbol and the object it represents are one and the same. Any distinction between symbol and symbolized is spurious. The emotional projection of symbols, or 'magical thinking', happens in psychosis, in cultures, and subcultures. Magical thinking helps us feel more secure in an unpredictable world. By manipulating symbols, we imagine being able to manipulate the reality that a symbol represents, but it makes us vulnerable to manipulation, too. The psychology of superstition 'works' better in a virtuality. Superstition provides the illusion of increased control. Symbols are captivating, indistinct, metaphoric and enigmatic portrayals of psychic reality. The content, i.e. the meaning of symbols, is far from obvious; instead, it is expressed in unique and individual terms while at the same time partaking of universal imagery. Our society is having to rethink such fundamental notions as money, security, growth and many other bases of our current worldview.
Symbols can be recognized as aspects of those images that control, order and give meaning to our lives. The source of symbols can be traced to the archetypes themselves, which by way of symbols find fuller expression. Symbols are thus one type of what Jung called “archetypal images,” that is, the representation in consciousness of an underlying archetype. This anamorphosis is not the fractal, because the fractal repeats a pattern. When the dominant vision that holds a period of culture together cracks, consciousness regresses into earlier containers, seeking sources for survival which also offer sources of revival. Self-empowerment can be entangled with self-delusion. We can no longer distinguish clearly between neurosis of self and neurosis of world, psychopathology of self and psychopathology of world. Species-wide trauma is playing out on the world stage. We compulsively recreate individual and collective trauma, perhaps as a way to awaken ourselves. Such madness is its own ritual and revelation.
As far as I can see there is no great 'architect' of the universe besides our compliance to feed it. Calling it archetypal simply means there is no physical joining (like an android who is made up of organism and implants) but rather, like mythic thinking in the Jungian sense, where people mimic cues from technology once it is used by a million people and becomes environment (morphic resonance). Thus, the hologram or fractal is superseded by anamorphosis. We may now be in the post-spherical, post-news, post-information era, panicking in the anamorphic flux of our hyperdimensional being (chemical, astral, TV and chip bodies) -- our new medium.
Memes are quantized information stored in and expressed from neurological structures or cultural substrates. Memes reside as 'Neural Net' structures in our central nervous systems, but many emerge at a higher cultural level, expressed in a cultural ecology. Memes do not control behavior rigidly, but bias and constrain it to a norm of reaction. They are the replicators of cultural evolution, genealogical actors in ecological roles. Memes form ancestor-descendant chains of populations that ramify, reticulate, and resonate with frequencies differing from biological phylogeny, but the differences appear to be within the extremes of the parameters of biology. What memes and genes do determine are the degrees of freedom. They bias and constrain the outcomes of the system. It's as if we live in holographic bubbles of encoded information, where every event is a decision point. Images are animated over events in the flow of energy, like the animated frames of a movie. The confusing aspect of consensual reality is each bubble shares information with other bubbles, which is the nature of the perceivable world that we share together.
As the universe expanded and cooled the chance outcomes of the quantum accidents have helped to determine the character of individual galaxies, of particular stars and planets, of terrestrial life and the particular species that evolved on our planet, of Memetic Matryoshkas like me, and of the events of human history and our personal lives. My genotype has been influenced by numerous quantum accidents, not only my ancestral germ plasm, but even events affecting the fertilization of a particular egg by a particular sperm. The consequences of some such accidents can be far-reaching. The three-dimensional character of the whole universe was affected by that first accident occurring near the beginning of its expansion. Lost at the same moment was the potential for any creature evolving in this universe to be physiologically designed for seeing in more than three dimensions, so it is very hard for us to imagine or develop a picture of the heterotic superstring that originally gave birth to us. The nature of life on Earth depended on chance events that took place around 3.8 billion years ago. Once the outcome was specified, the long-term consequences could take on the character of law, at any but the most fundamental level.
"O, what a world of unseen visions and heard silences, this insubstantial country of the mind! What ineffable essences, these touchless rememberings and unshowable reveries! And the privacy of it all! A secret theater of speechless monologue and prevenient counsel, an invisible mansion of all moods, musings, and mysteries, an infinite resort of disappointments and discoveries. A whole kingdom where each of us reigns reclusively alone, questioning what we will, commanding what we can. A hidden hermitage where we may study out the troubled book of what we have done and yet may do. An introcosm that is more myself than anything I can find in a mirror. This consciousness that is myself of selves, that is everything, and yet is nothing at all - what is it? And where did it come from? And why?"
-- Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Just to bring you up to date on what has occurred since I wrote about the Big Bang in my discussion of Cosmology, I am going to imagine the 13.8-billion-year lifetime of the universe (or at least its present reality incarnation since the Big Bang) compressed into the span of a single year. Then every billion years of cosmic history would correspond to about twenty-six days of our cosmic year, and one second of that year to about 437 real revolutions of the Earth about the sun. I present the cosmic chronology as a list of some representative pre-December dates; a calendar for the month of December; and a closer look at the late evening of New Year's Eve.
On this scale, the events of our history books -- even books that make significant efforts to deprovincialize the present -- are so compressed that it is necessary to give a second-by-second recounting of the last seconds of the cosmic year. Even then, we find events listed as contemporary that we have been taught to consider as widely separated in time. In the history of life, an equally rich tapestry must have been woven in other periods -- for example, between 10:02 and 10:03 on the morning of April 6th or September 16th. But we have detailed records only for the very end of the cosmic year. The chronology corresponds to the best evidence now available. But some of it is rather shaky. No one would be astounded if, for example, it turns out that plants colonized the land in the Ordovician rather than the Silurian Period; or that segmented worms appeared earlier in the Precambrian Period than indicated.
Big Bang ~ January 1
Origin of the Milky Way Galaxy ~ May 1
Origin of the Solar System ~ September 9
Formation of the Earth ~ September 14
Origin of life on Earth ~ September 25
Formation of the oldest rocks known on Earth ~ October 2
Date of oldest fossils ~ October 9
Invention of sex (by microorganisms) ~ November 1
Oldest fossil photosynthetic plants ~ November 12
Eukaryotes (first cells with nuclei) flourish ~ November 15

December 31:
Origin of Proconsul and Ramapithecus ~ 1:30 P.M.
First humans ~ 10:30 P.M.
Widespread use of stone tools ~ 11:00 P.M.
Domestication of fire by Peking man ~ 11:46 P.M.
Beginning of most recent glacial period ~ 11:56 P.M.
Seafarers settle Australia ~ 11:58 P.M.
Extensive cave painting in Europe ~ 11:59 P.M.
Invention of agriculture ~ 11:59:20 P.M.
Neolithic civilization; first cities ~ 11:59:35 P.M.
First dynasties in Sumer, Ebla and Egypt ~ 11:59:50 P.M.
Invention of the alphabet ~ 11:59:51 P.M.
Hammurabic legal codes in Babylon; Middle Kingdom in Egypt ~ 11:59:52 P.M.
Bronze metallurgy; Mycenaean culture; Trojan War; Olmec culture; invention of the compass ~ 11:59:53 P.M.
Iron metallurgy; First Assyrian Empire; Kingdom of Israel ~ 11:59:54 P.M.
Asokan India; Ch'in Dynasty China; Periclean Athens; birth of Buddha ~ 11:59:55 P.M.
Euclidean geometry; Archimedean physics; Ptolemaic astronomy; Roman Empire; birth of Christ ~ 11:59:56 P.M.
Zero and decimals invented in Indian arithmetic; Rome falls; Moslem conquests ~ 11:59:57 P.M.
Mayan civilization; Sung Dynasty China; Byzantine empire; Mongol invasion; Crusades ~ 11:59:58 P.M.
Renaissance in Europe; voyages of discovery from Europe and from Ming Dynasty China; emergence of the experimental method in science ~ 11:59:59 P.M.
Widespread development of science and technology; emergence of a global culture; acquisition of the means for self-destruction of the human species; first steps in spacecraft planetary exploration and the search for extraterrestrial intelligence ~ Now: first second of New Year's Day, 1/1/2012.
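The compression arithmetic behind this calendar can be sketched in a few lines. The assumed age of the universe and the helper names below are choices made for this illustration, not anything canonical.

```python
# Cosmic-calendar arithmetic: map the universe's ~13.8-billion-year
# history onto a single 365.25-day year. AGE_YEARS and the function
# names are assumptions made for this sketch.

AGE_YEARS = 13.8e9                      # assumed age of the universe, in years
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # seconds in one calendar year

def years_per_cosmic_second():
    """How many real years one second of the cosmic year represents."""
    return AGE_YEARS / SECONDS_PER_YEAR

def cosmic_day(years_ago):
    """Day of the cosmic year (1-366) on which an event `years_ago` falls."""
    fraction_elapsed = (AGE_YEARS - years_ago) / AGE_YEARS
    return int(fraction_elapsed * 365.25) + 1

print(round(years_per_cosmic_second()))  # ~437 real years per cosmic second
print(cosmic_day(4.54e9))                # day on which the Earth forms, early September
```

At 13.8 billion years, one cosmic second spans about 437 real years and a billion years covers roughly 26 cosmic days; the older figures of 475 years and 24 days come from the 15-billion-year age estimate that Sagan's original calendar used.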
Somewhere in the last ten seconds some subset of human beings developed the special quality of 'consciousness'. This emergence of consciousness remained relatively unstudied until the twenty-first century. How do I think? What is this 'I' that seems to be doing the thinking? Would I be different if I had been born at a different time, in another place, or in another body? Where do I go when I fall asleep, and dream, and die? And how do late neuronal firing, cortical ignition, and brain-scale synchrony ever create this subjective state of mind?
What kind of information-processing architecture underlies the conscious mind? What is its reason for being, its functional role in the information-based economy of the brain? When we say that we are aware of a certain piece of information, what we mean is just this: the information has entered into a specific storage area that makes it available to the rest of the brain. Among the millions of mental representations that constantly crisscross our brains in an unconscious manner, one is selected because of its relevance to our current goals. Consciousness makes it globally available to all of our high-level decision systems. We possess a mental router, an evolved architecture for extracting relevant information and dispatching it. The psychologist Bernard Baars calls it a "global workspace": an internal system, detached from the outside world, that allows us to freely entertain our private mental images and to spread them across the mind's vast array of specialized processors.
Consciousness, then, is just brain-wide information sharing. Whatever we become conscious of, we can hold in our mind long after the corresponding stimulation has disappeared from the outside world. That's because our brain has brought it into the workspace, which maintains it independently of the time and place at which we first perceived it. As a result, we may use it in whatever way we please. In particular, we can dispatch it to our language processors and name it; this is why the capacity to report is a key feature of the conscious state. But we can also store it in long-term memory or use it for our future plans, whatever they are. The flexible dissemination of information is a characteristic property of the conscious state.
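The global-workspace idea can be caricatured as a selection-and-broadcast loop: many specialized processors propose candidate representations, one wins on relevance to current goals, and the winner is broadcast to every processor. The sketch below is a deliberately crude illustration of that architecture; all module names and relevance scores are invented for the example.

```python
# Toy "global workspace": candidate representations compete on relevance;
# the winner is selected and broadcast to every specialized processor.

from dataclasses import dataclass, field

@dataclass
class Representation:
    content: str
    relevance: float          # relevance to the organism's current goals

@dataclass
class Processor:
    name: str
    received: list = field(default_factory=list)

    def receive(self, rep):
        # each specialist gets the same broadcast content
        self.received.append(rep.content)

def workspace_cycle(candidates, processors):
    """One cycle: select the most goal-relevant candidate, broadcast it to all."""
    winner = max(candidates, key=lambda r: r.relevance)
    for p in processors:
        p.receive(winner)
    return winner

processors = [Processor("language"), Processor("memory"), Processor("planning")]
candidates = [Representation("faint hum", 0.2),
              Representation("red traffic light", 0.9)]

winner = workspace_cycle(candidates, processors)
print(winner.content)   # the 'conscious' content, now globally available
```

The point of the caricature is the asymmetry: many representations are computed, but only the broadcast one becomes reportable by the language module, storable by memory, and usable for planning.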
As a way of trying to get a handle on what is going on inside my brain, I am going to imagine that I have acquired an inexpensive set of nerve extensions from Radio Shack so that I can remove my brain from my cranial cavity and suspend it in a chemically rich and temperature-controlled petri dish of cerebral spinal fluid where I can get a better look at it. I can also carefully touch it with my hands and with a few electrical instruments. My first thoughts are of how trustworthy the brain looks, how loyal, helpful, friendly, courteous, and if I can say it without embarrassment, how very beautiful it is. If I hadn't had such a bad exterior on my cranial cavity, I might have been something. It seems to be about the same size as a coconut, has the shape of a walnut, and the color of uncooked liver. It has two hemispheres, which are covered in a thin skin of deeply wrinkled grey tissue called the cerebral cortex. Each infold on this surface is known as a sulcus, and each bulge is known as a gyrus. At the very back of the main brain, tucked under its tail and partly fused to it, lies the cerebellum -- the 'little brain'.
As I continue to examine this wetware under a good light, I am initially disoriented to discover that my own brain is now reduced to a set of mechanico-electric, chemo-electric, and photo-electric transducers in my eyes, ears, nose, touch, and taste receptors, so that I am confused by where I am. I don't know whether to think of myself as down there in the petri dish or up here higher where my transducers are located. Everything I sense is from the perspective of my sensory inputs as I watch my touch sensors reach out to feel the damp outer membrane of my wetware. Aristotle thought that the brain was an organ for cooling the blood -- and of course it does cool our blood quite efficiently in the course of its operations. Suppose our livers had been in our skulls and our brains were snuggled into our ribcages. As we looked out at the world and listened, do you think we might have found it plausible that we thought with our livers? Our thinking seems to happen behind our eyes and between our ears -- but is that because that's where our brain is, or is it because we locate ourselves, roughly, at the place we see from? Isn't it in fact just as mind-boggling to try to imagine how we could think with our brains -- those soft grayish cauliflower-shaped things -- as to imagine how we could think with our livers -- those soft reddish-brown liver-shaped things? And while I am on it, regardless of its form, how does this dynamic spider-web of quantum spins make the rest of the universe -- itself a dynamic spider-web of quantum spins -- appear to exist and have continuity?
While I am trying to limit the scope of this musing to what is now contained in the petri dish, I must spend a moment to introduce my eyes. Hooded above and pouched below, it is easy to be mesmerized by our 'windows of the soul'. But what's deep inside them, on the underside of the retina, is what really counts: the nano-demons of vision. These came upon the biomass scene about 900 million years ago, some only 40 million years ago. They are made up of three participant sub-demons: a protein molecule that picks up the outside information; another protein molecule that translates the information into an electrical signal; and a lipid partner, the molecular double-layer of the sensory-cell membrane, which contributes the wherewithal for the integration and initial dissemination of the signal. The first two are the active participants, the cognitive heart of the operation. They contribute what in nonbiological sensors, like strain gauges, microphones, and photocells, is lacking: the Maxwellian demon element, the fundamental element of biological cognition. Both of the aforementioned demon types existed in single-cell times, ministering to different functions. But eventually they joined up in common enterprise. They started as equals, and even today each shows his independence in going through separate cognition cycles. But sometime during the long march of evolution, one of them picked up a few extra bits and gained sway over the other.
But what is it that makes the vision demons cast all others into the shade? What do they have that the others don't? They are astonishingly efficient as transducers--they make 1 picoampere, a macroscopic electrical current, from one elementary particle, a photon. They are capable of sensing a single quantum of light! No number of exclamation marks could do such a process justice. But it is precisely what sets these demons apart and makes them stand head and shoulders above the crowd. We have come to the boundary between the coarse-grained and the quantum realm--two worlds apart at the edge of a cognitive chasm where we can examine consciousness. Here at the seedbed of those states, at the brain periphery, we can watch the demons divide their interests accordingly: some have their antennas directed to the coarse grained world and others to the quantum world.
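The single-photon claim above implies an enormous amplification factor, which is easy to put a number on: one picoampere of membrane current corresponds to millions of elementary charges per second, all triggered by one absorbed photon. A back-of-the-envelope sketch, using the rough current figure quoted in the text rather than any measured value:

```python
# Back-of-the-envelope amplification for phototransduction: one absorbed
# photon gating roughly one picoampere of membrane current. The current
# value is the rough figure quoted in the text, not a measurement.

ELEMENTARY_CHARGE = 1.602e-19   # coulombs per elementary charge
current_amperes = 1e-12         # ~1 pA of photocurrent

charges_per_second = current_amperes / ELEMENTARY_CHARGE
print(f"{charges_per_second:.3g}")   # ~6.24e6 elementary charges per second
```

A gain of millions of charges per single quantum absorbed is what makes these transducers sit right at the boundary between the quantum and coarse-grained realms.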
I don't want to dwell on consciousness as an adjunct to the act of thinking, but I must introduce and define consciousness as "a process in which information about multiple individual modalities of sensation and perception is combined into a unified multidimensional representation of the state of the system and its environment, and integrated with information about memories and the needs of the organism, generating emotional reactions and programs of behavior to adjust the organism to its environment."
The word consciousness, as we use it in everyday speech, is loaded with fuzzy meanings, covering a broad range of complex phenomena. The science of consciousness distinguishes a minimum of three concepts: vigilance--the state of wakefulness, which varies when we fall asleep or wake up, attention--the focusing of our mental resources onto a specific piece of information, and conscious access--the fact that some of the intended information eventually enters our awareness and becomes reportable to others.
What counts as genuine consciousness is conscious access--the simple fact that when we are awake, whatever we decide to focus on may become conscious. Neither vigilance nor attention alone is sufficient. When we are fully awake and attentive, sometimes we can see an object and describe our perception to others, but sometimes we cannot--perhaps the object was too faint, or it was flashed too briefly to be visible. In the first case, we are said to enjoy conscious access; in the second we do not, although our brain may still be processing the information unconsciously.
This conscious access can be easily studied on my brain in the petri dish. We know dozens of ways in which a stimulus can cross the border between unperceived and perceived, between invisible and visible, allowing us to probe what this crossing changes in the brain.
Conscious access is also the gateway to more complex forms of conscious experience. In everyday language, we often conflate our consciousness with our sense of self--how the brain creates a point of view, an "I" that looks at its surroundings from a specific vantage point. Consciousness can also be recursive: my "I" can look down at itself, comment on its own performance, and even know when it does not know something. The good news is that even these higher-order meanings of consciousness are no longer inaccessible to experimentation. In my laboratory, I will learn to quantify what the "I" feels and reports, both about the external environment and about itself. I will even be able to manipulate the sense of self, so that I can examine a so-called "out of body" experience while my brain is inside a functional MRI.
I will struggle harder with the sense of consciousness which we call "phenomenal awareness": the intuitive awareness, present in us all, that our internal experience possesses some kind of exclusive qualities, unique qualia such as the exquisite sharpness of tooth pain or the inimitable greenness of a fresh leaf. These inner qualities, it is argued, can never be reduced to a scientific neuronal description; by their nature, they are personal and subjective, and thus they defy any exhaustive verbal communication to others. But I will try to argue that the notion of a phenomenal consciousness distinct from conscious access is highly misleading and leads down the slippery slope to the much-discounted idea of "dualism".
Before starting these experiments, my concept of consciousness was anchored to two separable sets of considerations that can be captured roughly by the phrases “from the inside” and “from the outside.” From the inside, our own consciousness seems obvious and pervasive: we know that much goes on around us and even inside our bodies of which we are entirely unaware or unconscious, but nothing could be more intimately known to us than those things of which we are, individually, conscious. Those things of which I am conscious, and the ways in which I am conscious of them, determine what it is like to be me. I know in a way no other could know what it is like to be me. From the inside, consciousness seems to be an all-or-nothing phenomenon – an inner light that is either on or off. I am sometimes drowsy or inattentive, or asleep, and on occasion I even enjoy abnormally heightened consciousness, but when I am conscious, that I am conscious is not a fact that admits of degrees. There is a perspective, then, from which consciousness seems to be a feature that divides the universe into two strikingly different kinds of things, those that have it and those that don’t. Those that have it are subjects, beings to whom things can be one way or another, beings it is like something to be. It is not like anything at all to be a brick or a pocket calculator or an apple. These things have insides, but not the right sort of insides – no inner life, no point of view. It is certainly like something to be me (something I know “from the inside”) and almost certainly like something to be you (for you have told me, most convincingly, that it is the same with you), and probably like something to be a dog or a dolphin (if only they could tell us!) and maybe even like something to be a spider.
I am going to escape wandering at length in the mysteries of consciousness by siding with the view of more recent developments in “cognitive” experimental psychology. We have come to accept, without the slightest twinge of incomprehension, a host of claims to the effect that sophisticated hypothesis testing, memory searching, inference – in short, information processing – occurs within us though it is entirely inaccessible to introspection. It is not repressed unconscious activity of the sort Freud uncovered, activity driven out of the sight of consciousness, but just mental activity that is somehow beneath or beyond the ken of consciousness altogether. Freud claimed that his theories and clinical observations gave him the authority to overrule the sincere denials of his patients about what was going on in their minds. Similarly the cognitive psychologist marshals experimental evidence, models, and theories to show that people are engaged in surprisingly sophisticated reasoning processes of which they can give no introspective account at all. Not only are minds accessible to outsiders, some mental activities are more accessible to outsiders than to the very “owners” of those minds. I will approach the study of my own externalized brain with the relatively disinterested perspective of an outsider.
I want to examine this soft grayish cauliflower as having subsystems like little nano-demons inside it sending messages back and forth, asking for help, obeying and volunteering — the actual subsystems are deemed to be unproblematic nonconscious bits of organic machinery, as utterly lacking in a point of view or inner life as a kidney or kneecap. The full system, whether it be the nano-demon colony or the brain, is my "agent," in that it is responsible for how its symbols trigger each other. I can then ponder on the fact that a single nano-demon does not "carry any information about the overall nano-demon nest structure;" and I must then ask, "how then does this nest get created and where does the information reside?" These big questions will provide a background for my probe of this wetware and how it carries out the processes of thinking and how it spawns intelligence.
First I will use functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) to review my brain for the first signature of consciousness by looking for an amplification of sensory brain activity which gathers strength and invades multiple regions of the parietal and prefrontal lobes. I will have some egotistical hope that, after carefully adjusting sound levels for the signals, I will clearly be able to differentiate the unconscious sounds, which activate only the cortex surrounding the primary auditory area, from the avalanche of brain activity which amplifies out of this initial area and breaks into the inferior and prefrontal areas.
Although fMRI is a wonderful tool for localizing where in the brain activation occurs, it is unable to tell me precisely when. I cannot really use it to measure how fast, and in which order, the successive brain areas light up when there is an awareness of the stimulus. A few EEG electrodes pasted onto the skin, or MEG magnetic sensors surrounding the wetware, will let me track brain activity with millisecond precision.
The conscious avalanche produces a simple marker that is easily picked up by electrodes at the top of the brain. An ample voltage wave sweeps through this region. It starts at about 270 milliseconds and peaks between 350 and 500 milliseconds. This slow and massive event has been called the P3 wave (because it is the third large positive peak after a stimulus appears) or the P300 wave (because it often starts around 300 milliseconds). It is only a few microvolts in size, about a million times smaller than the voltage of an AA battery. However, such a surge of electrical activity is easily measured with modern amplifiers. The P3 wave is our second signature of consciousness and can be easily recorded.
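The comparison with an AA battery can be checked with trivial arithmetic; the 3-microvolt figure below is an illustrative stand-in for "a few microvolts", not a measured value.

```python
# Sanity check on the "a million times smaller than an AA battery" comparison.
aa_battery_volts = 1.5       # nominal voltage of an AA cell
p3_amplitude_volts = 3e-6    # "a few microvolts", taking 3 uV as illustrative

ratio = aa_battery_volts / p3_amplitude_volts
print(f"the battery's voltage is {ratio:.0e} times larger")
# same order of magnitude as the "million times" quoted in the text
```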
An important consequence of these observations is that our consciousness of unexpected events lags considerably behind the real world. Not only do we consciously perceive only a very small proportion of the sensory signals that bombard us, but when we do, it is with a time lag of at least one-third of a second. In this respect, our brain is like an astronomer who watches for supernovae. Because the speed of light is finite, the news from distant stars takes millions of years to reach us. Likewise, because our brain accumulates evidence at a sluggish speed, the information that we attribute to the conscious "present" is outdated by at least one-third of a second. The duration of this blind period may even exceed half a second when input is so faint that it calls for a slow accumulation of evidence before crossing the threshold for conscious perception.
We are all blind to the limits of our attention and do not realize that our subjective perception lags behind the objective events in the outside world. But most of the time it doesn't matter. We can enjoy a beautiful sunset or listen to a symphony orchestra concert without realizing that the colors we see and the music we hear date from half a second ago. When we are passively listening, we do not really care exactly when the sounds were emitted. And even when we need to act, the world is often slow enough for our delayed conscious responses to remain roughly appropriate. It is only when we try to act "in real time" that we realize how slow our awareness is. Any pianist who rushes through an allegro knows better than to attempt to control each of his flying fingers--conscious control is way too slow to tramp into this fast dance. To appreciate the slowness of our consciousness, try to photograph a fast and unpredictable event, such as a lizard sticking its tongue out: by the time your finger presses the shutter, the event that you hoped to capture on film is long gone.
The empirical discovery of reproducible signatures of consciousness, which are present in all conscious humans, is only the first step. Others will need to work on the theoretical end as well: How do these signatures originate? Why do they index a conscious brain? Why does only a certain type of brain state cause an inner conscious experience? Exponentially growing research has elaborated a theory called "the global neuronal workspace": a kind of global information broadcast within the cortex, arising from a neuronal network whose purpose is the massive sharing of pertinent information throughout the brain.
The philosopher Daniel Dennett aptly calls this idea "fame in the brain." Thanks to the global neuronal workspace, we can keep in mind any idea that makes a strong imprint on us for however long we choose, and make sure that it gets incorporated in our future plans, whatever they might be. Thus consciousness has a precise role to play in the computational economy of the brain--it selects, amplifies, and propagates relevant thoughts. This consciousness is physically implemented with a special set of neurons which diffuse conscious messages throughout the brain: giant cells whose long axons crisscross the cortex, interconnecting it into an integrated whole. When enough brain regions agree about the importance of incoming sensory information, they synchronize into a large-scale state of high-level activation--and the nature of this ignition explains our empirical signatures of consciousness.
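The "fame in the brain" picture lends itself to a toy sketch: local processors compete, the most salient signal ignites, and the winner alone is broadcast to every registered module. Everything here (the threshold, the processor names) is an illustrative assumption, not a claim about the brain's actual wiring.

```python
# Toy sketch of a global-workspace broadcast: many local signals compete,
# one "ignites" and becomes globally available to all other processors.
BROADCAST_THRESHOLD = 0.5    # assumed ignition threshold, purely illustrative

class Workspace:
    def __init__(self):
        self.processors = {}          # name -> callback receiving broadcasts

    def register(self, name, callback):
        self.processors[name] = callback

    def compete(self, signals):
        """signals: dict of name -> salience. The most salient signal,
        if it exceeds the threshold, is broadcast to every processor."""
        name, salience = max(signals.items(), key=lambda kv: kv[1])
        if salience < BROADCAST_THRESHOLD:
            return None               # stays unconscious: no global access
        for receiver in self.processors.values():
            receiver(name)            # "fame in the brain": everyone hears it
        return name

ws = Workspace()
heard = []
ws.register("memory", heard.append)
ws.register("language", heard.append)

winner = ws.compete({"faint-sound": 0.2, "loud-voice": 0.9})
print(winner, heard)   # the winning signal reaches both processors
```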
The "condensed matter" of the brain is perhaps the most complex object on earth. Unlike the simple structure of a gas, a model of the brain will require many nested levels of explanation: a dizzying arrangement of mental routines or processors, each implemented by circuits distributed across the brain, themselves made up of dozens of cell types. Even a single neuron, with its tens of thousands of synapses, is a universe of trafficking molecules that will provide modeling work for centuries.
The broadcasting function of consciousness allows us to perform uniquely powerful operations. The global neuronal workspace opens up an internal space for thought experiments, purely mental operations that can be detached from the external world. Thanks to it, we can keep important data in mind for an arbitrarily long duration. We can pass it on to any other arbitrary mental process, thus giving our brains the kind of flexibility that Descartes looked for in vain. Once information is conscious, it can enter into a long series of arbitrary operations--it is no longer processed in a reflexive manner but can be pondered and reoriented at will. And thanks to the connection to language areas, we can report it to others.
The brain is a complicated, intricately woven tissue, like nothing else that we know of in the universe, but it is composed of cells as any tissue is. They are, to be sure, highly specialized cells, but they function according to the laws that govern any other cells. Their electrical and chemical signals can be detected, recorded, and interpreted; their chemicals can be identified; and the connections that constitute the brain's woven feltwork can be mapped. In short, the brain can be studied, just as the kidneys can.
We know that the neocortex is responsible for our ability to deal with patterns of information and to do so in a hierarchical fashion. Animals without a neocortex (non-mammals) are largely incapable of understanding hierarchies. Understanding and leveraging the innately hierarchical nature of reality is a largely mammalian trait and results from mammals' unique possession of this evolutionarily recent brain structure. The neocortex is responsible for sensory perception, recognition of everything from visual objects to abstract concepts, controlling movement, reasoning from spatial orientation to rational thought, and language--basically, what we regard as "thinking."
The human neocortex, the outermost layer of the brain, is a thin, essentially two-dimensional structure with a thickness of about 2.5 millimeters (about a tenth of an inch). In rodents, it is about the size of a postage stamp and is smooth. An evolutionary innovation in primates is that it became intricately folded over the top of the rest of the brain with deep ridges, grooves, and wrinkles to increase its surface area. Due to its elaborate folding, the neocortex constitutes the bulk of the human brain, accounting for 80 percent of its weight. Homo sapiens developed a large forehead to allow for an even larger neocortex. In particular we have a frontal lobe where we deal with the more abstract patterns associated with high-level concepts.
This thin structure is basically made up of six layers, numbered I (the outermost) to VI. The axons emerging from neurons in layers II and III project to other parts of the neocortex. The axons (output connectors) from layers V and VI are connected primarily outside the neocortex, especially to the thalamus. The relative thickness of the layers varies slightly from region to region. Layer IV is very thin in the motor cortex, because that area largely does not receive input from the thalamus, brain stem, or spinal cord. Conversely, in the occipital lobe (the part of the neocortex usually responsible for visual processing), three additional sublayers can be seen in layer IV, due to the considerable input flowing into this region, including from the thalamus.
A critically important observation about the neocortex is the extraordinary uniformity of its fundamental structure. This columnar organization of the neocortex was first noted in 1978 in an observation that is as significant to neuroscience as the Michelson-Morley ether-disproving experiment of 1887 was to physics. Vernon Mountcastle described the remarkably unvarying organization of the neocortex, hypothesizing that it was composed of a single mechanism repeated over and over again, and proposed the cortical column as that basic unit. The differences in the thickness of certain layers in different regions noted above are simply differences in the amount of interconnectivity that the regions are responsible for dealing with.
Extensive experimentation has revealed that there are in fact repeating units within the neuron fabric of each column. The basic unit is a pattern recognizer and constitutes the fundamental component of the neocortex. There is no specific physical boundary to these recognizers, as they are placed closely one to the next in an interwoven fashion, so the cortical column is simply an aggregate of a large number of them. These recognizers are capable of wiring themselves to one another through the course of a lifetime, so the elaborate connectivity (between modules) that we see in the neocortex is not prespecified by the genetic code, but rather is created to reflect the patterns we actually learn over time.
There are about a half million cortical columns in a human neocortex, each occupying a space about two millimeters high and a half millimeter wide and containing about 60,000 neurons (for a total of about 30 billion neurons in the neocortex). A rough estimate is that each pattern recognizer within a cortical column contains about 100 neurons, so there are on the order of 300 million pattern recognizers in total in the neocortex. Human beings have only a weak ability to process logic but a very deep core capability of recognizing patterns. To do logical thinking, we need to use the neocortex, which is basically a large pattern recognizer. It is not an ideal mechanism for performing logical transformations, but it is the only facility we have for the job.
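The arithmetic in this paragraph is easy to verify, treating its round numbers as exact:

```python
# Checking the neocortex head-count arithmetic from the text.
columns = 5e5                 # ~half a million cortical columns
neurons_per_column = 6e4      # ~60,000 neurons per column
neurons_per_recognizer = 100  # rough estimate of neurons per pattern recognizer

total_neurons = columns * neurons_per_column              # ~30 billion
total_recognizers = total_neurons / neurons_per_recognizer  # ~300 million
print(f"{total_neurons:.0e} neurons, {total_recognizers:.0e} recognizers")
```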
How many patterns can the neocortex store? We need to factor in the phenomenon of redundancy. The face of a loved one, for example, is not stored once but on the order of thousands of times. Some of these repetitions are largely the same image of the face, whereas most show different perspectives of it, different lighting, different expressions, and so on. None of these repeated patterns is stored as a two-dimensional array of pixels. Rather, they are stored as lists of features, where the constituent elements of a pattern are themselves patterns.
Our procedures and actions also comprise patterns and are likewise stored in regions of the cortex. Three hundred million pattern processors may sound like a large number, and indeed it was sufficient to enable Homo sapiens to develop verbal and written language, all of our tools, and other diverse creations. These inventions have built upon themselves, giving rise to the exponential growth of the information content of technologies. The size of our own neocortex has exceeded a threshold that has enabled our species to build ever more powerful tools, including tools that can now enable us to understand our own intelligence. Ultimately our brains, combined with the technologies they have fostered, will permit us to create a synthetic neocortex that will contain well beyond a mere 300 million pattern processors.
In leaving this delicious wetware topic I would like to point out that nothing prevents the reproduction of the global neuronal workspace in nonbiological hardware such as a silicon-based computer. In practice, however, the relevant operations are far from trivial. We do not yet know exactly how the brain implements them, or how we could endow a machine with them. Computer software tends to be organized in a rigidly modular fashion: each routine receives specific inputs and transforms them according to precise rules in order to generate well-defined outputs. A word processor may hold a piece of information (say, a block of text) for a while, but the computer as a whole has no means of deciding whether this piece of information is globally relevant, or of making it broadly accessible to other programs. As a result, our current computers remain despairingly narrow-minded until they are replaced with the next generation of quantum computers.
The world around my wetware is strange by any reckoning. We know that the "universe" surrounding us is composed at very small scales of space and time that is not smooth, but quantized. This granularity occurs at the incredibly small dimensions of the "Planck scale": 10⁻³³ centimeters and 10⁻⁴³ seconds. This basic makeup of the universe is that dynamic spider-web of quantum spins that I am looking at in the petri dish. These "spin networks" create an evolving array of Planck-scale geometric volumes defining four-dimensional spacetime. We apply Einstein's general relativity (in which mass equates to curvature, or perturbation of spacetime) all the way down to this near-infinitesimal geometry. Thus everything is, in reality, particular arrangements of spacetime geometry. Building on these ideas, we can liken spin network volumes to Leibniz monads and suggest that self-organizing processes at this level constitute a flow of time. Can infinitesimally small, weak, and fast processes like these be coupled to biology in the human brain? A reasonable possibility for such a link is Roger Penrose's Objective Reduction (OR)--a particular type of quantum state reduction in which new macroscopic information emerges. Our poor brain has the huge responsibility of receiving a continuous inflow of large collections of quantum wavicles and merging them into unitary coherent states of macroscopic size and influence, for storage as activity-dependent synaptic plasticity in the brain.
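The quoted Planck dimensions are not arbitrary; they follow from the fundamental constants, as a few lines of arithmetic confirm:

```python
# Recovering the Planck scale from fundamental constants:
# l_P = sqrt(hbar * G / c^3), t_P = l_P / c.
hbar = 1.055e-34   # reduced Planck constant, J*s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

planck_length_m = (hbar * G / c**3) ** 0.5
planck_time_s = planck_length_m / c
print(f"{planck_length_m * 100:.1e} cm")   # ~1.6e-33 cm
print(f"{planck_time_s:.1e} s")            # ~5.4e-44 s, i.e. ~10^-43 s
```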
These particles, which make up everything in the universe, are minuscule--some of them have no mass at all. And the quantum world is full of such airy nothing: the photons, which carry the electromagnetic force, and the mysterious gravitons, moving at the same speed (the speed of light), which carry the gravitational force. The electromagnetic force causes the electrons to go around an atomic nucleus, much as the gravitational force causes the earth to go around the sun. Or, taking the Information Theory perspective, you may think of these quantum particles as communicating a message between nucleus and electrons, telling the electrons to go around the nucleus, or communicating a message between the sun and the earth, telling the earth to go around the sun.
The photons comprise a wide spectrum, ranging from gamma rays, a hundredth of a nanometer long (about a tenth the size of a small atom), all the way up to radio waves, several miles long. The photo-electric transducers in my eyes are tuned to a narrow band, about a third of the way in between. The band begins with the violet (400 nanometers) and, progressing through the spectrum of blue, green, yellow, and orange, ends with the purple-red (720 nanometers). Photons, like other quantum particles, have a ghostlike character. They can suddenly disappear and just as quickly reappear. A photon, for example, visible as it comes from the sun to us, becomes invisible when it strikes an atom; its energy is used up in shifting an electron of the atom to a farther orbit. And it's not just a matter of becoming lost to sight. The photon actually changes character and converts to a virtual photon--a dematerialized one, as it were. And when the electron shifts back to a near orbit, the photon rematerializes and is given off by the atom as a visible particle. Such quirky behavior is common fare in the quantum realm--whole hosts of particles can spring unbidden from the void. And that is not easy for the average "good country person" to make friends with. Indeed, quantum theory was initially greeted with strong skepticism, and a number of crucial points were in hot dispute. Niels Bohr, a pioneer of the field, summed it up when he quipped, "Anyone who isn't confused by quantum theory doesn't really understand it."
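The energy carried by a single photon at the edges of this visible band follows from the Planck relation E = hc/λ; a quick sketch:

```python
# Photon energies at the edges of the visible band quoted in the text.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
E_CHARGE = 1.602e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon of the given wavelength, in electron-volts."""
    return h * c / (wavelength_nm * 1e-9) / E_CHARGE

print(f"violet 400 nm: {photon_energy_ev(400):.2f} eV")   # ~3.1 eV
print(f"red    720 nm: {photon_energy_ev(720):.2f} eV")   # ~1.7 eV
```

The shorter violet wavelength carries almost twice the energy per photon of the deep red, which is why it is the violet end of the band that borders the ionizing ultraviolet.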
I know from my research into my failing eyes that the act of seeing unfolds progressively: photons which have traveled 93,000,000 miles from the sun strike the atoms of an object in our visual field, shifting an electron of an atom on the object to a farther orbit and causing a new particle to be emitted from the object, which travels through the eye's lens, where it is refracted and focused onto the retina at the back of the eye. The lens actually isn’t the main focusing element in the human eye. Our lens is important for adjusting focus, but it’s the transparent front of the eye, called the cornea, that accounts for most of the focusing power in our eyes. And why is that? Well, light bending, or as physicists say, “refraction,” occurs when light meets an interface between two different materials. Since the cornea forms the interface between air and the eye, it bends incoming rays. Now, light isn’t refracted the same amount at every interface--all materials have a unique “refractive index” that describes how light propagates through them--and the bigger the difference between the refractive indices at an interface, the more light bends.
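Snell's law makes the cornea's dominance concrete. Using typical textbook refractive indices (assumed values, not from the text), a ray bends far more at the air-cornea interface than at a cornea-to-lens interface, because the index jump is much larger there; the second comparison is a simplification that ignores the aqueous humor in between.

```python
import math

def refract(theta_in_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); returns theta2 in degrees."""
    theta1 = math.radians(theta_in_deg)
    return math.degrees(math.asin(n1 * math.sin(theta1) / n2))

# Typical textbook refractive indices (illustrative assumptions).
n_air, n_cornea, n_lens = 1.000, 1.376, 1.406

print(refract(30.0, n_air, n_cornea))    # ~21.3 deg: strong bending at the cornea
print(refract(30.0, n_cornea, n_lens))   # ~29.3 deg: barely bends at the lens
```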
A point source emits waves of light which radiate in ever-expanding circles about the point. The pupil of an eye, looking at the source, will see a small portion of the wavefront. The curvature of the wavefront as it enters the pupil is determined by the distance of the eye from the source. As the source moves farther away, the wavefronts exhibit less curvature. It is the wavefront curvature which determines where the eye must focus in order to create a sharp image. If the eye is an infinite distance from the source, plane waves enter the pupil. The lens of the eye images the plane waves to a spot on the retina. The spot size is limited by the aberrations in the lens of the eye and by the diffraction of the light through the pupil. It is the angle at which the plane wave enters the eye that determines where on the retina the spot is formed. Two points focus to different spots on the retina because the wavefronts from the points intersect the pupil at different angles. Classically, no observation can be made with less than one photon--the basic particle, or quantum, of light--striking the observed object. In the past several years, however, physicists in the increasingly bizarre field of quantum optics have learned that not only is this claim far from obvious, it is, in fact, incorrect. For we now know how to determine the presence of an object with essentially no photons having touched it.
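The relationship between source distance and wavefront curvature is usually expressed as vergence, the reciprocal of distance, measured in diopters; a minimal sketch using that standard optics formula, which is not spelled out in the text:

```python
def vergence_diopters(distance_m):
    """Curvature (vergence) of a wavefront from a point source, in diopters.
    A wavefront that has traveled distance d has curvature 1/d."""
    return 1.0 / distance_m

for d in (0.25, 1.0, 6.0):
    print(f"source at {d} m: {vergence_diopters(d):.2f} D")
# as the source recedes, curvature falls toward zero: the plane-wave limit
```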
According to the rules of quantum mechanics, wave interference occurs whenever there is more than one possible way for a given outcome to happen, and the ways are not distinguishable by any means. Basically, a quantum system can be trapped in its initial state, even though it would evolve to some other state if left on its own. The possibility arises because of the unusual effect that measurements can have on quantum systems. The phenomenon is called the Quantum Zeno effect, because it resembles the famous paradox raised by the Greek philosopher Zeno, which denied the possibility of motion to an arrow in flight because it appears "frozen" at each instant of its flight. It is also known as the watched-pot effect, a reference to the aphorism about boiling water. We all know that the mere act of watching the pot should not (and does not) have any effect on the time it takes to boil the water. In quantum mechanics, however, such an effect actually exists--the measurement affects the outcome. This principle is called the projection postulate, which states that "for any measurement made on a quantum system only certain answers are possible. Moreover, after the measurement, the quantum system is in a state determined by the obtained results." We see what we have the capability of deconstructing in our minds.
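The Quantum Zeno effect can be illustrated with the standard toy model of a two-level system rotated through ninety degrees while being measured N times along the way; the survival probability, cos(θ/N) raised to the power 2N, approaches 1 as N grows.

```python
import math

def survival_probability(n_measurements, total_angle=math.pi / 2):
    """Probability that the system is still found in its initial state after
    n equally spaced projective measurements during a rotation by total_angle
    (the standard toy model of the Quantum Zeno effect)."""
    step = total_angle / n_measurements
    return math.cos(step) ** (2 * n_measurements)

for n in (1, 10, 100):
    print(n, round(survival_probability(n), 3))
# frequent measurement "freezes" the evolution: survival approaches 1
```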
The retina of our eye is constructed of millions of neurons. Each of these neurons has a body. One example of a neuron found in the retina is known as a ganglion cell. Its compact central part is called the soma, which is the Greek word for body. The soma is also known as the cell body, and it contains many organelles, which are analogous to the organs of the human body. One well-known organelle is the nucleus, the place where DNA sits. Fine branches called neurites emanate from the cell body. These are the signature of a neuron. The longest neurite of a ganglion cell is called an axon. It exits the eye, travels a long way through the optic nerve, and branches out in the brain. The axons of ganglion cells are the only carriers of visual information from the eye to the brain. All right, so neurons have bodies, and neurons also get excited, just like us. I said before that ganglion cells send visual information to the brain, but how do their axons transmit information?
The receptive field of a ganglion cell is determined by its shorter neurites, called dendrites. The receptive field sits at the same location in the retina as the area its dendrites cover, but typically spans a slightly larger area. Roughly speaking, this is because the dendrites receive signals from other neurons in the retina. Why does that matter? Does some deep truth anchor these metaphorical flights? Here's the deep truth: all these mental states correspond to the spiking of many neurons inside our brains. Since a neuron is part of the brain, the anthropomorphism of neurons is similar to synecdoche, a figure of speech in which the part refers to the whole or vice versa.
Photoreceptor cells in the retina generate signals when they are stimulated by light. One end of the photoreceptor contains opsins, special molecules that are activated by light. The other end of the photoreceptor secretes a neurotransmitter called glutamate, which is sensed by other neurons in the retina and stimulates some to action. When light strikes an opsin, it triggers a sequence of events that are collectively known as phototransduction.
Phototransduction changes the photoreceptor's voltage, which propagates to the other end of the cell and changes the rate of glutamate secretion. Interestingly, the maximum secretion of glutamate by photoreceptors occurs in darkness. When light hits the photoreceptor, the secretion of glutamate decreases. In other words, light is the absence of glutamate. Therefore, we could say light is the absence of darkness. At least for the photoreceptors. Photoreceptors make synapses onto bipolar cells, another class of retinal neuron. Bipolar cells aren't all the same, but come in different types. Some bipolar cells are activated by glutamate. They are called OFF cells because they're activated by darkness.
Some bipolar cells are inhibited by glutamate. They're called ON cells because they're activated by light. In all, the retina contains about a dozen types of bipolar cells. Roughly half are ON types and half are OFF types. Therefore, neither light nor darkness is primary for bipolar cells. Rather, we might say these cells contain a Manichean representation of the visual world. The distinction between ON and OFF also holds for ganglion cells, which receive synapses from bipolar cells. Ganglion cells are the outputs of the retina, sending their axons to downstream areas in the brain. Cells in these downstream areas can also be divided into ON and OFF types.
The axons of bipolar cells branch out in the inner plexiform layer of the retina, which can be subdivided into on and off sublayers. If you were shrunk to microscopic dimensions, stood on a cell body, and looked up at a bipolar cell axon, you would see the axons of off types of bipolar cells branching out in the off sublayer just above you, and the axons of on types branching out in the on sublayer farther above. In other words, looking up into the inner plexiform layer would reveal a kingdom of light overlying a kingdom of darkness. How can this single neurotransmitter, glutamate, have diametrically opposed effects on on and off bipolar cells?
In general, neurons receive signals by sensing neurotransmitter molecules. Sensing is carried out by receptor molecules, which sit in the external membrane of the neuron. On and off bipolar cells both contain receptor molecules that sense glutamate. However, they contain different types of receptor molecules, which produce opposite voltage changes in the cell when they sense glutamate.
So for physics, darkness is the absence of light. For the photoreceptors, light is the absence of darkness. For the bipolar cells, light and dark are equal and opposite forces. In the third century AD, the Persian prophet Mani proposed a similar theological solution. He explained the coexistence of good and evil by invoking two deities, one reigning over the kingdom of light and the other the kingdom of darkness. They were equal in power, so good was eternally at war with evil. In the next centuries, this Manichean religion swept the ancient world, reaching as far west as the Roman Empire and as far east as China. Christian emperors regarded the religion as heresy and fought back by executing its believers.
I mentioned before that ganglion cells have axons that leave the eye and travel to the brain, and are therefore the output cells of the retina. Consider a ganglion cell that is activated by a bright spot, the visual stimulus, presented at the center of its receptive field. Now let’s make the spot larger. Something surprising happens: the response decreases. With more light striking the retina, you might have expected the response to increase, but the opposite happens. Yet this is no surprise if neurons are indeed like people. Think of light as wealth, and spiking as happiness. A neuron becomes less activated when its neighbors receive visual stimulation, much as a person becomes less happy when neighbors are wealthier. So a ganglion cell’s activity is driven by a comparison between two zones of its receptive field: the center and the surround. Light in the center causes this ganglion cell to become more activated, whereas light in the surround causes it to become less activated. A ganglion cell computes the difference between the stimulation of its center and the stimulation of its surround, and the result determines whether it spikes.
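The comparison just described can be sketched in a few lines. This is a toy model, not a physiological one; the unit light values and the zero threshold are my own illustrative choices:

```python
# Sketch of a center-surround computation: a ganglion cell's drive is the
# difference between light in its center and light in its surround.

def ganglion_response(center_light, surround_light, threshold=0.0):
    """Fire only when the center out-competes the surround."""
    drive = center_light - surround_light
    return max(drive - threshold, 0.0)

# Small bright spot: fills the center, leaves the surround dark.
print(ganglion_response(1.0, 0.0))  # strong response
# Large bright spot: fills the center AND the surround.
print(ganglion_response(1.0, 1.0))  # the response collapses
```

The enlarged spot recruits the antagonistic surround, which is why more light yields a weaker response.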
Our example ganglion cell above had an On center and an Off surround. Another type of ganglion cell is activated by a dark spot in the center of its receptive field. This Off center cell has an On surround, meaning that darkness in the surround causes the cell to become less activated. In general, the center and surround of a ganglion cell receptive field have opposing effects on its response, and are said to be “antagonistic.” Why might ganglion cells have these “center-surround” receptive fields?
Consider a page in a book. It looks more or less the same whether it’s read in bright sunshine or in a dim room: white paper, black letters. Yet if we measure the light actually entering our eyes from the paper and the letters in each environment, we see something quite astounding. In the bright sun, the black letter sends a much larger amount of light into our eyes than the white paper does in the dim room. This demonstrates that the visual system has evolved to be more sensitive to the invariant features of objects (like the contrast between the paper and letters), and less sensitive to the actual amount of illumination received and reflected by those objects. This phenomenon, sometimes known as “discounting the illuminant,” is important because the absolute amount of light arriving on the retina is enormously different between high noon and dusk. Center-surround receptive fields reduce the retina’s sensitivity to the absolute amount of light, making the retina perform relative comparisons of light at neighboring locations, and enabling us to see in situations with drastically different lighting.
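A back-of-the-envelope calculation makes the astounding part concrete. The reflectances and illumination levels below are rough, illustrative numbers of my own choosing, not measurements:

```python
# "Discounting the illuminant": absolute light levels vary enormously,
# but the ratio (contrast) of letter to paper is invariant.
paper_reflectance, ink_reflectance = 0.8, 0.05  # illustrative values
sunlight, dim_room = 100_000.0, 100.0           # rough orders of magnitude

for illum in (sunlight, dim_room):
    paper = paper_reflectance * illum
    ink = ink_reflectance * illum
    print(ink / paper)  # identical contrast ratio in both environments

# Cross-environment comparison: the black letter in bright sun
# (0.05 * 100000 = 5000) sends far more light into the eye than
# the white paper in the dim room (0.8 * 100 = 80).
print(ink_reflectance * sunlight > paper_reflectance * dim_room)
```

The visual system keys on the ratio, which survives the thousand-fold change in illumination, rather than on the raw amounts, which do not.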
We have seen that the retina is a jungle of many different cell types that connect to each other. Now I would like to try to determine whether this set of connections - the retina’s “form,” if you will - is what enables its computations - its “function.” Because bipolar cells are much smaller than ganglion cells, a ganglion cell’s receptive field center is about the same as the area covered by its dendrites. So those connections explain the location and size of the receptive field center. What about its polarity (that is, whether it’s On or Off)? It turns out that the connectivity of ganglion cells with bipolar cells tends to follow a specific rule. On center ganglion cells receive synapses from On bipolar cells, and Off center ganglion cells receive synapses from Off bipolar cells. In short, On is wired to On, and Off to Off. This rule is an example of “wiring specificity,” and enables ganglion cells to “inherit” On and Off from their synaptic inputs.
How does the wiring of the retina implement this rule of connectivity? The answer lies in our previous division of the inner plexiform layer into the “kingdom of light” and “kingdom of darkness.” Recall that the inner plexiform layer contains entangled branches of neurons, and is sandwiched between two layers of cell bodies. The axons of bipolar cells grow from one cell body layer, and the dendrites of ganglion cells grow from the other cell body layer. These axons and dendrites meet in the inner plexiform layer, where they synapse with each other. The axons of On bipolar cells and the dendrites of On center ganglion cells branch out in the “kingdom of light.” Because the branches overlap with each other, they are able to form synapses. Similarly, the axons of Off bipolar cells and the dendrites of Off center ganglion cells overlap with each other (and make synapses) in the “kingdom of darkness.”
Now let’s move on to the surround of the ganglion cell. To explain this part of the receptive field, we must invoke amacrine cells, another class of retinal neuron that extends branches in the inner plexiform layer. A typical amacrine cell receives excitatory synapses from bipolar cells, and makes inhibitory synapses onto ganglion cells. Therefore, the amacrine cell mediates a pathway from bipolar to ganglion cells, and this pathway is sign-inverting due to the inhibitory synapses. If the branches of the amacrine cell extend sideways, they can enable an On bipolar cell outside the dendritic area of the ganglion cell to have an inhibitory effect on an On-center ganglion cell, thus giving rise to an Off surround. To summarize, the center of the ganglion cell receptive field is due to a sign-preserving direct pathway, in which a bipolar cell directly synapses onto a ganglion cell. The surround of the receptive field is due to a sign-inverting indirect pathway, in which a bipolar cell synapses onto an amacrine cell, which in turn synapses onto a ganglion cell. This simple explanation isn’t the whole truth. At best, it’s an approximation. Amacrine cells come in many types. Neuroscientists haven’t succeeded in cataloging them all, and don’t even know how many types there are. And the rules of connectivity governing bipolar, amacrine, and ganglion cells are mostly unknown. More important than the details is the moral of the story. As should be clear by now, the form and function of the retina are intimately related. Receptive fields can be explained using the physical layout of axons and dendrites inside the inner plexiform layer. The workings of the mind may seem ethereal, but in the end they stand firmly upon a microscopic architecture. In the case of the retina, Louis Sullivan had it right!
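The sign bookkeeping in the last paragraph can be made explicit: each synapse in a pathway contributes a sign, and the net effect of the pathway is the product of those signs. The notation below is my own sketch, not standard nomenclature:

```python
# Sign-preserving vs. sign-inverting pathways, as composed signs.
EXCITATORY, INHIBITORY = +1, -1

def pathway_sign(*synapses):
    """Net sign of a chain of synapses: the product of individual signs."""
    sign = 1
    for s in synapses:
        sign *= s
    return sign

# Direct pathway: bipolar -> ganglion (one excitatory synapse).
direct = pathway_sign(EXCITATORY)
# Indirect pathway: bipolar -> amacrine (excitatory) -> ganglion (inhibitory).
indirect = pathway_sign(EXCITATORY, INHIBITORY)
print(direct, indirect)  # +1 is sign-preserving, -1 is sign-inverting
```

This is why light falling on the surround, routed through an amacrine cell, opposes light falling on the center, routed directly.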
He once wrote: “It is the pervading law of all things organic and inorganic, of all true manifestations of the head, of the heart, of the soul, that the life is recognizable in its expression, that form ever follows function. This is the law.”
Motion sensing in vision allows an organism to detect motion across its visual field. This is crucial for detecting a potential mate, prey, or predator, and thus it is found in both vertebrate and invertebrate vision across a wide variety of species, although it is not universal. In vertebrates, the process takes place in the retina, and more specifically in retinal ganglion cells, neurons that receive visual input from bipolar cells and amacrine cells and send their output to higher regions of the brain, including the thalamus, hypothalamus, and mesencephalon. Direction-selective, or DS, ganglion cells are very reclusive and weren't seen until the 1980s. When researchers finally caught a glimpse, they were surprised to find that the dendrites of each DS ganglion cell looked like two pancakes when viewed from the side. A new type of retinal neuron also burst onto the scene: the starburst amacrine cell. It looks like exploding fireworks from one view, and a single pancake when viewed from the side. This neuron comes in both on and off types. The on type is a pancake at about 2/3 the depth of the inner plexiform layer of the retina, or IPL. The off type is a pancake at about 1/3 the IPL depth.
The starburst cell is known for being unconventional. A textbook neuron has many dendrites and one axon emanating from the cell body. But all branches of the starburst cell look similar to each other. A starburst cell has no axon-- shocking, isn't it?
Its branches are called neurites, to emphasize that, for these neurons, the axon-dendrite distinction does not hold. The dendrites of a textbook neuron receive input signals, and the axon conveys output signals to other neurons. Each starburst neurite is both input and output. The output synapses are located on the outer third of each neurite, and enable the starburst cell to send signals to other neurons. The inner two-thirds of each neurite contains input synapses, which receive signals from other neurons. A textbook neuron's axon sends signals in the form of electrical pulses known as action potentials, or spikes. Since a starburst cell lacks an axon, you might guess that it also lacks action potentials. This is indeed the case. The electrical signals in a starburst cell are analog rather than digital. They are continuously graded voltages.
A starburst cell can sing many tunes simultaneously. In 2002, researchers observed that a starburst neurite is activated by a visual stimulus moving outward from the cell body to the tip of the neurite, but not by a stimulus that moves inward. It's possible to activate one neurite without activating the others. Since all neurites of a cell prefer outward motion, each neurite has its own preferred direction. It doesn't make sense to speak of a single preferred direction for a starburst cell. Each neurite functions independently, singing its own tune. A starburst cell is not a single artist but a chorus of voices. Starburst neurites also intermingle with the dendrites of DS ganglion cells. In the image at the right you can see the two pancakes of a DS ganglion cell lining up exactly with the on and off starburst pancakes. This also got neuron watchers speculating that starburst cells make synapses onto DS ganglion cells.
Indeed, it was confirmed that they do, but the relationship is surprisingly selective. Starburst neurites of all directions pass through the dendritic arbor of a DS ganglion cell, yet only those with the opposite preferred direction connect. As you can see here, the preferred direction of the DS ganglion cell is rightward. It receives synapses from a starburst neurite that points leftward. The wiring diagram also contains a less flamboyant kind of neuron--the bipolar cell, which makes synapses onto the DS ganglion cell. To understand the functioning of this circuit, it's important to know whether synapses are excitatory or inhibitory. The bipolar-ganglion synapses are excitatory, so bipolar cells tend to activate the ganglion cell. The starburst-ganglion synapses are mainly inhibitory, so starburst neurites tend to prevent the ganglion cell from being activated. For motion in either direction, bipolar cells excite the ganglion cell. For rightward motion, the starburst neurite remains silent, so the ganglion cell is activated by the bipolar cells.
Motion in the opposite direction activates the starburst neurite, inhibiting the ganglion cell and preventing it from being activated by the bipolar cells. Therefore, the wiring of this circuit is consistent with the rightward preferred direction of the ganglion cell. Now imagine that the connectivity of starburst neurites and DS ganglion cells were indiscriminate rather than specific. The ganglion cell would receive synapses from starburst neurites that point leftward as well as rightward. The ganglion cell would receive the same amount of starburst inhibition from motion in either direction. So it would no longer be direction selective. According to this explanation, the direction selectivity of ganglion cells is inherited from their starburst inputs, owing to a specificity of wiring. This answer raises yet another question--why is it that starburst neurites are direction selective?
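The circuit logic of the last two paragraphs can be summarized in a minimal sketch, assuming for illustration that excitation and inhibition have unit strength and cancel exactly:

```python
# Toy model of the DS circuit: bipolar excitation arrives for motion in
# either direction; starburst inhibition is recruited only by motion in
# the null (leftward) direction.

def ds_ganglion_response(direction):
    """Drive of a rightward-preferring DS ganglion cell (illustrative)."""
    bipolar_excitation = 1.0                                  # both directions
    starburst_inhibition = 1.0 if direction == "left" else 0.0
    return max(bipolar_excitation - starburst_inhibition, 0.0)

print(ds_ganglion_response("right"))  # preferred direction: the cell fires
print(ds_ganglion_response("left"))   # null direction: inhibition cancels excitation
```

Make the inhibition direction-blind (present for both directions) and the two cases become equal, which is exactly the loss of direction selectivity described above.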
Back up at the beginning of the Seeing the Quantum World section of this document I described the "now you see it, now you don't" aspect that is but proper behavior in the quantum sphere; it's strictly according to Hoyle as far as quantum logic goes--and physically, perfectly legal, because the momentum of the photon and the extremely short distance it moves during its conversions satisfy Heisenberg's uncertainty condition. Now I would like to muse about how our daily reasoning deals with the factoids stored by the set of mechanico-electric, chemo-electric, and photo-electric transducers in my eyes, ears, nose, touch, and taste systems--when I decide to put two and two together and come up with a conclusion. I might do that in various ways, but what they all ultimately come down to is an exchange of the language connectives and and or. These serve as our operators of logic, and together with not (the operator for negation), they constitute a complete set for all our reasonings. The mathematical logician George Boole cut through this in the nineteenth century. He showed that any logic or arithmetic task can be reduced to three operations--conjunction, disjunction, and negation; that is, any such task, no matter how complex, can be accomplished by a combination of the corresponding operators and, or, not.
That combination follows certain rules--Boole called them the laws of thought. Under those rules--specifically, under what became known as the "distributive law" of logic--the operators and and or can be exchanged. This is our commonsense modus operandi--it's what Hercule Poirot does when he lets loose with "either the butler did it or the maid did it," or what we do when we say that 2a + 2b equals 2(a + b). Such reasoning, however, would lead to utter nonsense if it were applied to the quantum world. Not that there is no logic in that world; and, or, not can serve as operators there, and do, but they must be combined differently, as they follow a different distributive law. In fact, quantum systems can perform any logic and arithmetic operation--and with stupendous speed.
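The classical distributive law can be checked exhaustively over all truth assignments. (Quantum logic abandons precisely this law, but demonstrating that would require modeling lattices of Hilbert-space subspaces, so only the classical side is shown here.)

```python
from itertools import product

# Verify the classical distributive law of Boolean logic:
#   A and (B or C)  ==  (A and B) or (A and C)
# for every one of the 2**3 = 8 truth assignments.
for a, b, c in product([False, True], repeat=3):
    assert (a and (b or c)) == ((a and b) or (a and c))

print("the distributive law holds for all 8 classical truth assignments")
```

An exhaustive truth-table check like this is possible only because classical propositions take definite values; that is the very assumption the quantum world declines to honor.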
The Anthropic Principle tells us that evolution's strategy when it brought forth the sensory nano-demons in the earth's biomass was not to produce a faithful representation of the outside world, but to produce a discriminating picture useful to the organism in the struggle for survival. Our sensory brain thus is blatantly pragmatic. And that's not just a quirk of its visual sector, but a feature of our entire sensory brain. Ever since the 1930s, when the great neurophysiologist Edgar Adrian tapped some of the sensory information lines in a frog, physiologists have been probing the lines in all sorts of organisms, including humans, and the results leave no doubt what kind of information serves as the primary database for our perceptions: discriminative information, that is, information allowing the organism to tell one object apart from another or one environment from another. And what could be better for that than the quantum states of atoms! These states are distinctive features of atoms. They are reliable identity badges--atomic ID cards as good as they come. And they get thumbed through for us by the photons in our environment at no cost. Those genial particles scan the world for us in passing, as it were, and even throw a rainbow or two into the bargain.
Thus, we can glimpse the outline of an evolutionary strategy. It's a strategy aimed at the physics bottom: Evolution fixed on the existing parity between the quantum states of atoms and the quanta of solar photons, and, availing herself of two of the most commonplace energy fields on earth (the fields of electrons and photons), turned their intersections to her advantage. Or look at it this way, Evolution's way: there were two overlapping energy fields in our environmental space, an electron field whose quanta were electrons and an electromagnetic field whose quanta were photons, and when and where those quanta matched, there was information to be had for nothing. And ever the scrooge, that is what she went after.
For more than 70 years my brain down there in the petri dish has been acquiring a series of instant sensory perceptions down the line of increasing disorder that we call time. Think of them as a series of snapshots strung out in a line--it's what usually gets plotted on an x, y diagram from left to right. But such neat lines can be deceptive. They hide an interesting aspect of time: a structure. Depending on the direction that we look, the line has a different texture--a sort of grain. In the forward direction, that is, from left to right--or what we usually call the future--the events in the snapshots are undetermined, whereas backwards, the past, they are determined; and that is the real work of my observing brain. We may normally not be conscious of this work at the structural edge. Nevertheless, it influences, indeed directs, our behavior in a thousand and one ways--our short and long range planning; our rainy day policies; our betting on the horses, the stock market, business ventures, and so on. In all of that, the grain of the time structure is taken for granted, as we hold it implicit that we can influence the run of things in the forward direction, but not in the opposite one. That has been the way of things in the macroscopic domain of our universe since the Big Bang. The universe was born in a moment of high information--an exceptional moment. In fact, so exceptional that even now, 13.8 billion years later, the effects are still working themselves out through the system.
It now appears that the time structure in the molecular domain was the evolutionary niche for neuron development. First, neurons entered into small-scale associations. The members of those associations (initially two or three) formed information loops--positively and negatively controlled cybernetic loops. In information terms, this development was still relatively modest, just enough for fast reflex reactions. The descendants of those primordial loops are still in us today, forming the basis of our spinal reflexes or their equivalents in the cranial nerves. The second stage came several hundred million years later and was far more ambitious. Vast numbers of neurons were then on hand, providing the wherewithal for an association on a much larger scale, the neuron trellis. This association had much more information power than the old cybernetic loops. It had substantial memory and computer capacity, enough to take advantage of recurrent information in the rear of the time structure, to render the undetermined more determined--or put in less abstract terms, it could calculate, from recurrent information of the past, the odds of future occurrences. In short, the neuron trellis is an anticipation machine.
The thermodynamic cost of this forecognition is immense. Even in the case of modest forecognitions, we are dealing with negative entropies many orders of magnitude higher than in the case of the reflex actions of neuronal loops. So it was probably not before well into the multicell era that there was enough information capital on hand to defray the cost. But once that hurdle was overcome and the first trellises were launched, there was no stopping them. Under the perennial evolutionary pressure for fast reactions, ever larger and better trellises would develop, giving their owners a decisive edge in the struggle for life.
And what could have been more decisive than forecognition in that struggle! These creatures were able to jump before it got too hot, run away before a predator pounced, gather food before hunger set in, or tell from the wag of a tail or from a mien whether it was friend or foe.
And so it went for about a billion years, the trellis branching out in all directions, its memory and computer capacity expanding until nature's darling appeared. With a big bulge in the front of his head, he must have been a strange sight for the other hominids. But the bulge made the skull cavity large enough to accommodate a trellis of a trillion neurons. And with that he was able to reach into the future as no one ever had before: he could compute at a glance at what angle an arrow would hit the mark, or tell from a gathering cloud that the weather would change, or tell from the position of stars when the winter would come and how many kernels of corn to plant to survive it.
By our deepest nature, we humans float in a world of familiar and comfortable but quite impossible-to-describe abstract patterns, such as: "fast food" and "clamato juice", "tackiness" and "wackiness", "Christmas bonuses" and "customer service departments", "wild goose chases" and "loose cannons", "crackpots" and "feet of clay", "slam dunks" and "bottom lines", "lip service" and "elbow grease", "dirty tricks" and "doggie bags", "solo recitals" and "sleaze balls", "sour grapes" and "soap operas", "feedback" and "fair play", "goals" and "lies", "dreads" and "dreams", "she" and "he"---and last but not least, "you" and "I". I am talking about the concepts in my mind and your mind that these terms designate -- or what I have elsewhere referenced as the corresponding symbols in our wetware. Because of our relatively large size, most of us never see or deal directly with electrons, or with the laws of electromagnetism. Our perceptions and actions focus on far larger, vaguer things, and our deepest beliefs, far from being in electrons, are in the many macroscopic items that we are continually assigning to our high-frequency and low-frequency mental categories (such as "fast food" and "doggie bags" on the one hand, and "feet of clay" and "customer service departments" on the other), and also in the perceived causality, however blurry and unreliable it may be, that seems to hold among these large and vague items.
Our keenest insights into causality in the often terribly confusing world of living beings invariably result from well-honed acts of categorization at a macroscopic level. For example, the reasons for a mysterious war taking place in some remote country like Afghanistan might leap into sharp focus for us when an insightful commentator links the war's origin to an ancient conflict between certain religious dogmas. On the other hand, no enlightenment whatsoever would come if a prominent physicist tried to explain the war by saying it came about thanks to trillions upon trillions of momentum-conserving collisions taking place among ephemeral quantum mechanical specks. The point that I am trying to make is that we perceive essentially everything in life at this level, and essentially nothing at the level of the invisible components that, intellectually, we know we are made of. There are, I concede, a few exceptions, such as our modern awareness of the microscopic causes of disease, and also our interest in the tiny sperm-egg events that give rise to a new life, and the common knowledge of the role of microscopic factors in the determination of the sex of a child. The general rule is that we swim in a world of everyday concepts, and it is they, not micro-events, that define our reality.
So how is it that a stone, a plant, a star, can take on the burden of being; and how is it that a child can take on the burden of breathing, and how through so long a continuation and cumulation of the burden of each moment one on another, does any creature bear to exist, and not break utterly to fragments of nothing: these are matters too dreadful and fortitudes too gigantic to meditate long and not forever to worship.
I am reading a book on ‘Sociobiology’ by Edward O. Wilson. He is helping me to understand that self-knowledge is constrained and shaped by the emotional control centers in the hypothalamus and limbic systems of the brain. These centers flood our consciousness with all the emotions—hate, love, guilt, fear and others—that are consulted by those who wish to intuit the standards of good and evil. The hypothalamus-limbic complex has the purpose of balancing the cruelty of natural selection’s survival of an individual organism with feelings of guilt and altruism, ‘knowing’ that, in evolutionary time, the individual organism counts for almost nothing.
In a Darwinist sense the organism does not live for itself. Its primary function is not even to reproduce other organisms, it reproduces genes, and it serves as their temporary carrier. Each organism generated by sexual reproduction is a unique, accidental subset of all the genes constituting the species. Natural selection is the process whereby certain genes gain representation in the following generations superior to that of other genes at the same chromosome positions. When new sex cells are manufactured in each generation, the winning genes are pulled apart and reassembled to manufacture new organisms that, on the average, contain a higher proportion of the same genes. But the individual organism is only their vehicle, part of an elaborate device to preserve and spread them with the least biochemical perturbation. The famous old aphorism that a chicken is only an egg’s way of making another egg has been modernized: the organism is only DNA’s way of making more DNA. More to the point, the hypothalamus and limbic system are engineered to perpetuate DNA.
In the process of natural selection, then, any device that can insert a higher proportion of certain genes into subsequent generations will come to characterize the species. One class of such devices prolongs individual survival. Another promotes superior mating performance and care of the resulting offspring. As more complex social behavior by the organism is added to the genes’ techniques for replicating themselves, altruism becomes increasingly prevalent and eventually appears in exaggerated forms. This brings us to the central problem of sociobiology: how can altruism, which by definition reduces personal fitness, possibly evolve by natural selection? The answer is kinship: if the genes causing the altruism are shared by two organisms because of common descent, and if the altruistic act by one organism increases the joint contribution of these genes to the next generation, the propensity to altruism will spread through the gene pool. This occurs even though the altruist makes less of a solitary contribution to the gene pool as the price of its altruistic act.
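The kinship argument is usually formalized as Hamilton's rule; Wilson's text does not state the inequality here, so take this as the standard textbook formulation rather than a quotation: altruism is favored when r x B > C, where r is genetic relatedness, B the benefit to the recipient, and C the cost to the altruist. The numeric examples are illustrative:

```python
# Hamilton's rule: an altruistic trait can spread when r * B > C.
def altruism_favored(r, benefit, cost):
    """r: relatedness, benefit: gain to recipient, cost: loss to altruist."""
    return r * benefit > cost

# Full siblings share half their genes on average (r = 0.5), so
# sacrificing 1 unit of fitness to give a sibling 3 units pays off:
print(altruism_favored(0.5, 3.0, 1.0))
# First cousins (r = 1/8) are too distantly related for the same trade:
print(altruism_favored(0.125, 3.0, 1.0))
```

The rule captures exactly the sentence above: the shared genes, not the individual, are the accounting unit, so a personally costly act can still raise the genes' joint representation in the next generation.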
So as I strive to compose a good individual of myself, this composition might be destructive to my family, and what preserves my family can be harsh on both me and the tribe to which my family belongs, what promotes my tribe can weaken my family and destroy me, and so on upward through all the permutations of levels of organization among humankind. Counteracting selection on these different units will result in certain genes being multiplied and fixed, others lost, and combinations of still others held in static proportions.
Social behavior, like all other forms of biological response, is a set of devices for tracking changes in our environment. No organism is ever perfectly adapted. Nearly all of the relevant parameters of its environment shift constantly. Some of the changes are periodic and predictable, such as light-dark cycles and the seasons. But most are episodic and capricious, including fluctuations in food items, predators, random alterations of temperature and rainfall within seasons, and others. The organism must track these parts of its environment with some precision, yet it can never hope to respond correctly to every one of the multifactorial twists and turns—only come close enough to survive for a little while and to reproduce as well as most.
Organisms solve the problem with an immensely complex multiple level tracking system. At the cellular level, perturbations are damped and homeostasis maintained by biochemical reactions that commonly take place in less than a second. Processes of cell growth and division, some of them developmental and some merely stabilizing in effect, require up to several orders of magnitude more time. Higher organismic tracking devices, including social behavior, require anywhere from a fraction of a second to a generation or more for completion. All the responses together form an ascending hierarchy.
Even more profound changes occur at the level of entire populations during periods longer than a generation. In ecological time populations wax or wane, and their age structures shift, in reaction to environmental conditions. When the observational platform is prolonged still further, to about ten or more generations, the population begins to respond perceptibly by evolution. Long-term shifts in the environment permit certain genotypes to prevail over others, and the genetic composition of the population moves perceptibly to a better adapted statistical mode.
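This perceptible drift toward a better adapted mode can be illustrated with a one-gene toy model. The fitness values and time scale below are arbitrary illustrations, not data:

```python
# One locus, two alleles; allele A carries a 5% fitness advantage.
def next_generation(freq_a, fitness_a=1.05, fitness_b=1.0):
    """Standard deterministic selection update on an allele frequency."""
    mean_fitness = freq_a * fitness_a + (1 - freq_a) * fitness_b
    return freq_a * fitness_a / mean_fitness

freq = 0.1  # the favored allele starts rare
for generation in range(100):
    freq = next_generation(freq)

# After 100 generations the favored allele dominates the gene pool,
# even though no single generation showed a dramatic change.
print(round(freq, 3))
```

Over any one generation the shift is tiny; only when the observational platform spans many generations does the statistical mode visibly move.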
A clear trend in the evolution of organismic hierarchies is the increasingly fine adjustments made by larger organisms. Above a certain size, multicellular animals can assemble enough neurons to program a complex repertory of instinctive responses. They can also engage in more advanced forms of learning and add a hypothalamus-limbic system of sufficient complexity to regulate the onset and intensity of many of our behavioral acts.
We call these advanced forms of learning ‘directedness’ or the relative ease with which certain associations are made and acts are learned, and others bypassed even in the face of strong reinforcement. Most of the human brain is like an exposed negative waiting to be dipped into the developer fluid offered by experience. When exploratory behavior leads one or a few humans to a breakthrough enhancing survival and reproduction, the capacity for that kind of exploratory behavior and the imitation of the successful act are favored by natural selection. The enabling portions of the anatomy, particularly the brain, will then be perfected by evolution.
Socialization is the sum total of all social experiences that alter the development of an individual; it encompasses processes at most levels of organismic response. The ultimate refinement in environmental tracking is tradition, the creation of specific forms of behavior that are passed down from generation to generation by learning. Tradition possesses a unique combination of qualities that accelerates its effectiveness as it grows in richness. It can be initiated, or altered, in a single successful individual; it can spread quickly, sometimes in less than a generation, through an entire society or population; and it is cumulative.
True tradition is precise in application and often pertains to specific places and even to successions of individuals. Consequently families, societies, and populations can quickly diverge from one another in their traditions, a phenomenon known as ‘tradition drift’. One of the highest forms of tradition, by whatever criterion we choose to judge it, is of course religion.
So if altruism has been an overall benefit to our collective human gene pool, what underlies our propensity for regularly killing our fellow humans? It seems that we have developed some kind of deeper desire for a ‘faith’, a desire to feel there’s someone up above who is directing things and really cares about what’s going on. There’s a desire to have a coherent worldview: that there is a rhyme and reason for everything we do, and that all the terrible things that happen to people (people die, children get leukemia) must have some reason behind them. Meaninglessness is, well, meaningless. It’s dispiriting, depressing and discouraging. Nobody wants reality to resemble a Kafka novel.
Before humans learned how to make tools, how to farm or how to write, they were telling stories with a deeper purpose. The man who caught the beast wasn’t just strong. The spirit of the hunt was smiling. The rivers were plentiful because the river king was benevolent. In society after society, religious belief, in one form or another, has arisen spontaneously. Anything that cannot immediately be explained must be explained all the same, and the explanation often lies in something bigger than oneself.
Whenever people sense the presence of a puzzling and momentous force, they want to believe that there is a way to comprehend it. Once there was a belief in the supernatural, there was a demand for people who claimed to fathom it. Judging by the hunter-gatherer societies that exist on the planet today, there was a supply to meet the demand. Though most hunter-gatherer societies have almost no structure in the modern sense of the word--little if any clear-cut political leadership, little division of economic labor--they do have religious experts. So do societies that are a shade more technologically advanced; societies that, though not fully agricultural, supplement their hunting and gathering with gardening or herding.
The emergence of a shaman, of religious leadership, was then a natural enough thing. Primordial religion consisted partly of people telling each other stories in an attempt to explain why good and bad things happen, to predict their happening, and if possible to intervene, thus raising the ratio of good to bad. Whenever such people--hunter-gatherers, stock analysts, whatever--compete in the realm of explanation, prediction, and intervention, some of them get a reputation for success. They become leaders in their field. Through such competition did shamanhood arise and sustain itself.
There is much additional evidence that in shamanism lay the origins of formal politics. Though there have been societies with shamans but no acknowledged political leader, there have been few if any societies with a political leader but no religious experts. Even shamans lacking explicit political power can exert great influence. They have often been counselors in matters of war and peace. When a shaman's society was contemplating the invasion of a neighboring people and he saw unfavorable omens, he would encourage diplomacy; if the omens were good, he would urge war. In addition to this kind of marshalling of antagonism, shamans have at times created it. Perhaps the most common way for a shaman to carry antagonism beyond the society is, having failed to cure an illness or improve the weather, to blame a shaman from a nearby people, like a modern politician who diverts attention from domestic failures by rattling the saber. One of religion's most infamous modern roles, fomenter of conflict between societies, has been part of the story from the very beginning.
To maintain our species on its journey to other stars we will be compelled to turn away from the superstitions of our past toward total knowledge, right down to the levels of the neuron and the gene. When we have progressed enough to explain ourselves in these mechanistic terms, and our cultural evolution becomes a true social ‘science’, we will inherit a universe divested of illusions and lights. There we hominids may feel somewhat alienated, strangers in a strange land. In the words of Camus: ‘His exile is without remedy since he is deprived of a lost home and the hope of a promised land’.
[Image: Images of Greece, 1989]