1-Consciousness

consciousness

People can be awake and alert {consciousness}| [Baars, 1988] [Baars, 1995] [Block, 1995] [Block et al., 1997] [Bogen, 1995] [Chalmers, 1993] [Chalmers, 1996] [Chalmers, 2000] [Chalmers, 2002] [Grossman, 1980] [Lamme, 2003] [Metzinger, 1995] [Moore, 1922] [Pinsker and Willis, 1980] [Schiff and Plum, 2000] [Searle, 1983] [Searle, 1992] [Searle, 2000] [Tulving and Craik, 2000] [Tulving, 1993]. Conscious people are not asleep, in coma, or stuporous. Alertness can be high, normal, or low. People, thoughts, beliefs, and other mental states can be conscious or unconscious in different degrees. People cannot consciously turn consciousness off and on. Perceptions and emotions can become conscious. Consciousness is qualitative, not quantitative.

properties: analog

Consciousness appears to be analog, not digital. Consciousness does not have discrete elements.

properties: complexity

Conscious systems must have enough parts and relations to hold the information needed to model perception processes and store results.

properties: heterogeneity

Consciousness and subjective experience have beliefs, thoughts, hopes, expectations, propositional-attitude states, and large narrative structures.

properties: indicators

No known structures, behaviors, or functions directly indicate that people or animals experience sensations.

properties: intentionality

Perhaps, beliefs and purposes about specific objects are necessary for consciousness. However, moods seem to be non-specific. Perhaps, moods are groups of perceptions and feelings. Memory represents high-level recognized objects but is not conscious until recall. Before consciousness, sense processing represents low-level features and relations for later processing. Representations can have no viewpoint and be non-specific, with no object, no proposition, and no coherence.

Meditation or ecstasy can attain consciousness states that seem empty or silent and have no intention, with no beliefs or knowledge [Bogdan, 1986] [Woodfield, 1982].

properties: location

Consciousness has no physical location. It is in mental space.

properties: mental and physical interaction

Brain is physical, so if mental aspects are not physical, how can brain affect them and how can they affect brain? Physiological studies and computer studies seem to show that physical processes can account for all emotions, moods, body sensations, perceptions, cognition, and behavior, which therefore do not need extra non-physical processes. Consciousness follows physical laws, because brain makes consciousness.

properties: mental state

Consciousness and unconsciousness are mental states.

properties: mind

Consciousness is necessary for experience but not for mental activity.

properties: nomic possibility

People can imagine humans that have no experiences but function normally. Such creatures are logically possible. Can such creatures be possible in this universe {nomically possible}? Such creatures are probably not possible in this universe, since even evolving humans was improbable.

properties: plurality

Consciousness is not a pure, single, and simple entity but is a mixture, is plural, and is complex.

properties: psychological function

Perhaps, consciousness has objective psychological functions, such as decision making or providing information.

properties: result not process

People cannot be conscious of processes. Observing disturbs quantum-mechanical and consciousness-creating events, so processes are unobservable. Observing ends processes and creates observable results.

properties: serial and parallel

Conscious activities, like stream of consciousness, are serial. Consciousness has many sensations in parallel.

properties: space

Observations and sensations share space.

properties: speed

Conscious activities have low speed compared to computers but enough speed to control human behaviors in useful time intervals.

behavior

Behavior can excite or inhibit consciousness, but consciousness may or may not directly affect behavior. Consciousness is between perception and action but does not require perception or action. Perhaps, consciousness requires ability to change behavior. Perhaps, consciousness requires self-movement, to gain information and have purpose.

biology: animals

People typically think that mammals have consciousness and that lower animals do not have consciousness. Perhaps, vertebrates, such as fish, amphibians, reptiles, and mammals, can be conscious, because they can sleep and wake.

biology: physical correlate

Perhaps, consciousness has objective physical correlates, such as chemical or electrical activity. However, consciousness is not physical.

biology: biological level

Perhaps, consciousness can operate at high, intermediate, and/or low biological levels: cell organelles, neurons, neural assemblies, brain modules, and/or whole brains [Seager, 1999].

biology: body

Perhaps, consciousness requires body, for sensation and action.

biology: thalamus

Thalamus has parvalbumin-containing cells that project focally to cortex and calbindin-containing cells that project diffusely to cortex [Jones, 1998]. Parvalbumin cells may underlie conscious contents. Calbindin cells may underlie minimal consciousness.

biology: development

Consciousness develops as organism sense and motor capabilities mature.

Prematurely born infants, after seven months, have some consciousness. They explore voluntarily. They act to see what reactions occur. They soon recognize mother smell and sounds. They react to womb chemicals and other environmental factors with emotional responses. They have moods. They have body sensations. They can time actions and perceptions and use rhythms.

Newborns can imitate and share reciprocal social activities.

biology: evolution

Natural selection modifies consciousness as sense and motor capabilities evolve. Mind has many incidental properties {accident, mind} that evolve together with consciousness. Perhaps, consciousness results from brain-evolution accidents. Perhaps, consciousness is non-functional or has only neutral effects. No trait always associates with consciousness. Observing differences among animal sense organs does not show what consciousness requires. Consciousness is not a vestige that was adaptive before but is not adaptive now.

cognition

Consciousness does not require attention.

Perhaps, consciousness requires imitation.

Perhaps, mind requires language to organize thought.

Consciousness requires immediate and/or iconic memory to integrate separate things. Long-term memory and short-term memory are not necessary for consciousness.

Consciousness is between perception and action but does not require perception or action.

Perhaps, to observe, consciousness requires self. People feel that they are thought agents.

Consciousness does not require thoughts.

symbol

Perhaps, mind requires symbol use and symbol systems.

arousal

Brain perceptual and motor activity can increase and decrease. Consciousness requires high activity level {arousal}|. When awake, people can affect their arousal level.

levels

Arousal levels are comatose, stupor, consciousness, and special awareness. Coma, anesthesia, and deep sleep have low arousal. Persistent vegetative state, vegetative state, minimally conscious state, stupor or drowsiness, and awakeness have high arousal.

behavior

Low arousal causes omissions during behavior. High arousal causes overcorrection and other commission errors.

cognition

Increased arousal improves long-term memory but not short-term memory. Perhaps, arousal level relates to decision-making rate.

biology

Arousal level depends on brainstem activity. Norepinephrine, epinephrine, dopamine, and serotonin are slow-acting neuromodulators, come from brainstem, and affect arousal and sleep. Mesencephalic reticular formation, thalamic intralaminar nuclei, and thalamic reticular nuclei {arousal system} stimulate thalamus and cortex to cause waking and sleep states. Cortico-striatopallidal-thalamocortical loops underlie arousal. To block arousal requires damage to both reticular formation and intralaminar nuclei. As little as one gram of tissue damage can block consciousness. People can recover from small lesions.

Arousal level depends on corpus-callosum activity. People with no cortex can sleep and wake. Frontal lobe activity relates to arousal. Locus coeruleus regulates arousal.

awareness

People can know that organisms, themselves, objects, features, times, or locations are present {awareness}|. Awareness is about currently observed time and space, including currently imagined things. It is not about things too far away, too small, or too large, or about completely unknowable things, such as gods or spirits. Awareness is not about past or future, memories, or imagination [Baddeley and Weiskrantz, 1993] [Block, 1995] [Block et al., 1997] [Chalmers, 1993] [Chalmers, 1996] [Chalmers, 2000] [Chalmers, 2002] [Lamme, 2003] [Metzinger, 1995] [Tulving and Craik, 2000] [Tulving, 1993].

levels

Coma and vegetative state have no awareness. Anesthesia, deep sleep, and minimally conscious state have low awareness. REM sleep, dreaming, stupor or drowsiness, and awakeness have high awareness.

types

Awareness of perceptions, moods, emotions, or feelings {direct awareness} is transparent to mode. Awareness of properties, relations, statements, or situations {propositional awareness} is opaque to mode, because it depends on concept information [Dretske, 1995].

biology

Thalamocortical activity depends on meaningful-stimulation intensity and determines awareness level. Cortico-striatopallidal-thalamocortical loops underlie awareness. Perhaps, all mammals can have awareness.

meaning

Awareness is not about meaning and so differs from experience.

consciousness theory

Complete and consistent theories {consciousness, theory} {theory of consciousness} explain when, why, and how consciousness arose in the material world [Baars et al., 2003] [Baum, 2004] [Blakemore, 1997] [Block, 1985] [Calvin, 2004] [Davies and Humphreys, 1993] [Foster and Swanson, 1970] [Fuster, 2002] [Geary, 2004] [Greenfield, 2002] [Griffin, 2000] [Grimm and Merrill, 1988] [Josephson and Ramachandran, 1979] [Kim, 1993] [Kim, 2000] [Kosslyn and Koenig, 1995] [Lakoff and Johnson, 1999] [Libet, 2004] [Llinás and Churchland, 1996] [McDowell, 1994] [Oakley, 1985] [Porter and Schama, 2004] [Quartz and Sejnowski, 2002] [Rosen, 1991] [Rosenthal, 1991] [Scott, 1995] [Searle, 2004] [Stich and Warfield, 2003] [Striedter, 2004] [Swanson, 2002] [Torey, 1999] [van Fraassen, 1980] [Villanueva, 1991] [Zimmer, 2004]. Good theories explain sense qualities, mental states, consciousness properties, consciousness purposes, consciousness prerequisites, brain structures, brain functions, biology, evolution, and development. Good theories describe how to build models to simulate consciousness processes and structures. Good theories use mathematical, computational, physical, and linguistic concepts [Blackmore, 2004] [Horgan, 1996] [Koch, 2004] [Kuhn, 1962] [Spencer-Brown, 1969].

questions

Questions about consciousness include nature, purpose, functions, structures, causes, states, prerequisites, properties, and relations. What are observers, and how do they evolve and develop? What is external three-dimensional space, and how does it evolve and develop? What are sensations, and how do they evolve and develop? What is knowing, and how does it evolve and develop?

process

Universe and life cannot come from nothing or non-material substance. Universe and life arose as natural matter and energy rearrangements, according to physical and mathematical laws. People have been able to locate universe and life beginnings in time and space and to imagine how they arose from previous substances and properties, according to universal laws. Perhaps, consciousness arose as natural matter and energy rearrangements, according to physical and mathematical laws. Perhaps, consciousness needs no new substance types and no events contradicting or superseding physical laws. Perhaps, consciousness did not come from nothing or non-material substance. Creating new substances from existing substances by means other than universal laws involves gods, mysteries, and/or miracles [Alexandrov et al., 1984].

mind

Mental processes are thinking, reasoning, having feelings, being aware, experiencing, having conscience, believing, fantasizing, dreaming, seeing, hearing, tasting, smelling, and feeling temperatures and touches {mind}|. Different from brain, mind is not physical structure, has no physical properties, and does not obey physical laws. Mind has no space extension. Mind has no time interval. Mind is indivisible into units and relations, but it does have mental structures. Minds are purposive, goal-oriented systems. Perhaps, mammals have something like mind.

machine consciousness

Trying to implement mind on machines {machine consciousness, mind} {machine modeling of consciousness} {artificial consciousness} {synthetic consciousness} can help understand consciousness. People do not know what causes consciousness but can say something about machine ability to have experience. Perhaps, machines can develop, perceive, experience, have emotions, and have self [Brooks, 1997] [Brooks, 2002] [Brooks et al., 1998] [Churchland and Sejnowski, 1992] [Dreyfus, 1979] [Haugeland, 1985] [Kurzweil, 1999] [Lloyd, 1989] [Lloyd, 2003] [Lucas, 1961] [Simon, 1981] [von Neumann, 1958] [Weizenbaum, 1960] [Winograd and Flores, 1986] [Winston and Brown, 1979]. Machines can anticipate behavior, select actions, pay attention, make decisions, create art, simulate physical structures and events, learn facts and skills, simulate personality, plan actions, predict future situations, time events, use logic, and remember facts. Machines can do several things simultaneously. However, cognition is not consciousness.

possibility

Perhaps, consciousness cannot be in machines, because consciousness is brain illusions, must be forever mysterious, is too complex, is only concepts, or is incomplete and inconsistent.

analog

Perhaps, only analog machines can be conscious, because sensations appear to be analog.

level

Perhaps, machines can have only low-level subjective experiences.

biological tissue

Living systems have characteristics that differ from non-living physical systems [Schrödinger, 1944]. Perhaps, only biological tissue can give rise to mind and/or consciousness. Perhaps, consciousness requires structures or properties that only emerge in cellular electrochemical processes. Perhaps, consciousness requires structures or properties that only emerge over multiple reproductive cycles, because minds must have templates [Searle, 1992] [Searle, 1997].

complexity

Perhaps, to feel subjective experiences, machines must perform high-level functions, such as being kind, knowing beauty, being friendly, laughing, being moral, having goals, having motivation, being in love, using language, using metaphors, being creative, thinking about themselves, solving new problem types, or having sensations. Perhaps, machines must perform complex tasks or carry out complex rules. Perhaps, machines need only to carry out many simple rules over long times to have intentions and goals [Nehaniv, 1998].

mind or soul

Perhaps, systems must create separate minds or souls to have subjective experiences. Perhaps, only minds can have or feel sense qualities. Such minds can differ from human minds, and sense qualities can qualitatively differ from human sense qualities. Besides creating mind, systems must create mechanisms for mind to affect physical world.

non-local interactions

Physical systems have only local immediate interactions among forces and other intensive quantities. Physical events at space and time points depend only on what is happening at those points. Living systems use information from different times and places to find patterns. Perhaps, consciousness requires non-local interactions.

robot

Perhaps, only robots can be conscious, because they must gather information, and machines without bodies cannot be conscious.

models

Machines that model self-consciousness can have parts that have continuous intelligent activity, such as solving problems, perceiving, and giving action orders. A part can register what is happening in other parts and analyze, remember, learn, and report, noting causes/inputs, results/outputs, efficiency, efficacy, and errors. Machine consciousness researchers include Axel Cleeremans and Douglas Lenat. Machines can have emotions or moods as reactions to system states, output, or input.

Machines can have complex world-and-self representations (Igor Aleksander) (Pentti Haikonen) (Owen Holland and Ron Goodman). Neural nets can recognize images and analyze them.

Machines can have complex world representations and imagine, plan, and predict (Igor Aleksander).

Machines can converse with humans, such as Intelligent Distribution Agent (Stan Franklin), which uses global workspace to integrate information and has lower-task modules.

Machines can have perceiving as reactive processes, thinking as analysis processes, and executive control for supervision or conflict resolution (Aaron Sloman and Ron Chrisley).

Virtual machines {CogAff schema} {H-CogAff architecture} can think about consciousness and combine cognition with affect (emotion). Affect has three levels: reactive, deliberative, and management. Cognition has three levels: perception, analysis, and action. Therefore, affect and cognition can interact in nine ways.
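The nine interaction pairings can be enumerated in a short sketch. Python is used here only for illustration; the level names follow the text, and the function name is an assumption, not part of CogAff.

```python
# Illustrative sketch: enumerating the nine affect-cognition pairings
# described for the CogAff schema. Level names follow the text.
from itertools import product

AFFECT_LEVELS = ["reactive", "deliberative", "management"]
COGNITION_LEVELS = ["perception", "analysis", "action"]

def interaction_pairs():
    """Return every (affect, cognition) pairing: 3 x 3 = 9 ways."""
    return list(product(AFFECT_LEVELS, COGNITION_LEVELS))

pairs = interaction_pairs()
print(len(pairs))  # 9
```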

Machines can have many simple components connected into modules and whole, as neurons connect, such as Corollary Discharge of Attention Movement (John Taylor) and Cyberchild (Rodney Cotterill).

ACT* model (John Anderson) has declarative knowledge in a semantic network with parallel spreading activation and has procedural knowledge in a production system with parallel matching.
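A toy sketch can show what parallel spreading activation over a semantic network means; this is not the actual ACT* implementation, and the node names, link weights, and decay constant are illustrative assumptions.

```python
# Toy semantic network: directed links with association weights.
network = {
    "dog": {"animal": 0.8, "bark": 0.6},
    "animal": {"dog": 0.3, "cat": 0.5},
    "bark": {},
    "cat": {},
}

def spread(activation, decay=0.5):
    """One parallel update: every node passes decayed activation
    along all its outgoing links simultaneously."""
    new = dict(activation)
    for node, links in network.items():
        for target, weight in links.items():
            new[target] = new.get(target, 0.0) + decay * weight * activation.get(node, 0.0)
    return new

state = spread({"dog": 1.0})
# "animal" and "bark" now hold activation received in parallel from "dog"
```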

Production systems (Hunt and Lansman) can have serial matching and semantic networks with parallel spreading activation.

Retrieving memories in parallel is unconscious, while using complex serial algorithms is conscious (Logan, Stanley, and Neal) (Hesketh).

Implicit connectionist relations and categories are like unconsciousness, and explicit associations and spatiotemporal context are like consciousness (Bower).

CLARION model (Sun) uses distributed representation, for not directly accessible information, and symbolic local representation, for directly accessible information.
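A minimal sketch can illustrate the two-level idea: symbolic local contents are directly accessible and reportable, while distributed weights are not. The class, method names, and training rule are hypothetical stand-ins, not CLARION's actual mechanisms.

```python
# Sketch of a dual-representation system: an explicit symbolic level
# whose contents can be reported, and an implicit distributed level
# whose weights are not directly accessible.
class DualLevel:
    def __init__(self):
        self.symbolic = {}      # explicit, directly accessible rules
        self.distributed = []   # implicit feature weights, not reportable

    def add_rule(self, condition, action):
        self.symbolic[condition] = action               # accessible knowledge

    def train_implicit(self, features):
        self.distributed = [f * 0.1 for f in features]  # opaque weights

    def report(self):
        """Only symbolic contents can be verbalized."""
        return dict(self.symbolic)

m = DualLevel()
m.add_rule("sees red light", "stop")
m.train_implicit([3.0, 1.0, 2.0])
# m.report() exposes only the explicit rule, not the distributed weights
```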

Systems can require representation activation above threshold (Bowers).

Systems can use production rules or perception symbols as representations and work with several "chunks" or combine simple chunks into new complex chunks (Servan-Schreiber, Anderson, and Rosenbloom).

Systems can be dynamic and go through transient states or reach stable-state attractors, in which everything is consistent (Mathis and Mozer) (O'Brien and Opie).
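Settling from transient states into a stable-state attractor can be illustrated with a tiny Hopfield-style sketch; the weights and states are toy assumptions, not any cited model.

```python
# Toy Hopfield-style dynamics: the system passes through transient
# states and settles into a stable-state attractor in which every
# unit is consistent with its weighted input.
weights = [[0, 1, -1],
           [1, 0, -1],
           [-1, -1, 0]]          # symmetric weights guarantee a stable point

def settle(state, max_sweeps=10):
    """Asynchronous sweeps: update one unit at a time until none changes."""
    state = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i, row in enumerate(weights):
            new = 1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            break                 # attractor reached: everything consistent
    return state

stable = settle([1, -1, 1])       # transient start settles to [-1, -1, 1]
```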

1-Consciousness-Concepts

anoesis

Mental states can have consciousness but no thought {anoesis}|.

extended cognition

Mental states can be about parts that have conceptual relations {extended cognition}, such as sequences or spatial relations. Examples are machine-assembly parts diagrams.

interpretationism

Perhaps, there is no consciousness, but people interpret {interpretationism} cognitions as phenomena.

money analogy

Just as money has monetary value, brain has mental content {money analogy}. Just as monetary value cannot exist without money, mental things cannot exist without brain. Monetary value does not relate to money physical aspects, and mental content does not relate to brain physical aspects. Rather, concept-relationship complexes give meaning to monetary value and mental things [Seager, 1999].

phenomenal causation problem

Sensations can have no physical effects, because physical laws are enough causes for perceptions {problem of phenomenal causation} {phenomenal causation problem}. Sensations exert no force and carry no energy. Mental things can affect only mental things. However, phenomenal qualities must be causal, because people can know they have had experience. However, if mental life has no causation, consciousness need not be causal [Seager, 1999].

phenomenal judgment

Mental states {cognitive state} can involve subjective experiences {phenomenal judgment}. People can report such mental states.

subdoxastic state

Mental states {subdoxastic state, consciousness} can be inaccessible to consciousness and unavailable for use in propositions. For example, people unconsciously compare pupil size using low-level features, during sense pre-processing. Cerebellum computational processes use low-level features.

1-Consciousness-Kinds

consciousness types

Consciousness has philosophical types {consciousness, types}, such as core consciousness [Damasio, 1999] or primary consciousness [Edelman and Tononi, 2000].

creature consciousness

Perhaps, animals have consciousness {creature consciousness}. Perhaps, only mental states are conscious.

microconsciousness

Feature consciousness {microconsciousness} can be independent of other-feature consciousness [Zeki, 1998] [Zeki and Bartels, 1999].

access consciousness

Mental representations for rational thought and action, such as remembering or understanding, are a consciousness type {access consciousness} (a-consciousness). Access consciousness is under conscious control and includes self-consciousness, creativity, discrimination, generalization, and behavior flexibility [Block, 1995] [Block, 1996] [Block et al., 1997].

phenomenal consciousness

Sense qualities and experiences are a consciousness type {phenomenal consciousness} (p-consciousness). Phenomenal consciousness is not under conscious control [Block, 1995] [Block, 1996] [Block et al., 1997].

autonoetic consciousness

Self {autonoetic consciousness} depends on memory [Tulving and Donaldson, 1972] [Tulving, 1983].

noetic consciousness

Perceptions {noetic consciousness} do not depend on memory [Tulving and Donaldson, 1972] [Tulving, 1983].

fact-consciousness

People can know facts about perceptions {fact-consciousness} [Dretske, 1988] [Dretske, 1995].

thing-consciousness

People can perceive without knowing {thing-consciousness} [Dretske, 1988] [Dretske, 1995].

1-Consciousness-States

mental state

People can be awake, asleep, stuporous, or comatose, or have other whole-brain physiological states {mental state}| {state of mind}. Mental state is mainly about consciousness level. Mental state depends on arousal level and awareness level. Mental states have no oscillation, growth, or decay.

Mental states are processing states. Mental states are not physical states, because same mental state can have different physical forms.

types

Mental states can be conscious, experienced, and subjective and result in experience, pain, sensations, emotions, and moods. Conscious mental states can be similar, such as seeing red and seeing orange, or different, such as seeing blue and seeing red or seeing and hearing.

Mental states can be pre-conscious or below awareness threshold.

Mental states can evaluate or categorize perception states and body states: arousal level, emotion type, success or failure, and pleasure or pain.

Natural mental states include awakeness, drowsiness, light sleep, dreaming, REM sleep, and deep sleep. Awakeness has high arousal and high awareness. Drowsiness has medium arousal and medium awareness. Light sleep has low arousal and low awareness. Dreaming has low arousal and high awareness. REM sleep has low arousal and medium awareness. Deep sleep has very low arousal and very low awareness.

Damage, disease, drugs, or exceptional circumstances cause impaired mental states. Impaired mental states include coma, vegetative state, minimally conscious state, and locked-in syndrome. Coma has no arousal and no awareness. Vegetative state has high arousal and no awareness. Minimally conscious state has high arousal and low awareness. Locked-in syndrome has high arousal and high awareness. Drug mental states include coma and anesthesia. Anesthesia has almost no arousal and almost no awareness. Exceptional mental states include hypnosis, sleepwalking, near-death experiences, and mystical experiences.

awakeness

People in waking states {awakeness} are conscious and experience sense qualities. Awakeness includes automatism, pain, pleasure, sense qualities, thinking, and threat. Just before or after sleeping, consciousness has low alertness and arousal.

cause

Awakeness results when brainstem nuclei excite cortex and thalamus non-specifically. Medial column receives, along pyramidal tract, from cortex, cerebellum, and senses and sends, in ascending reticular activating system, to intralaminar thalamic nuclei, which send to striatum and cortex to activate cortex and control waking and sleeping.

control

Pons reticular activating system controls awakeness. It has norepinephrine, serotonin, dopamine, and acetylcholine secreting neurons with pathways to brainstem neurons. Reticular activating system neurons can inhibit afferent axons from senses and efferent axons to muscles.

sleep

Hypothalamus suprachiasmatic nucleus starts NREM sleep and controls progress through NREM sleep. REM-sleep activation goes from pons to lateral geniculate to occipital lobe.

non-consciousness

People can be awake and have no sensations and no awareness {non-consciousness}|. Most actions, body functions, brain functions, and perceptions have non-consciousness, such as during reflexes, eye saccades, eye blinks, attentional blinks, sense habituation, sense saturation, orientation response, flight-or-fight responses, fast innate responses, instinctual behavior, habits, walking, reaching, and skilled movements. Voluntary motor actions and skills are not conscious. Non-conscious activities make few errors, have high speed, have one type and do or find one thing, and do not vary. Non-conscious activities can happen in parallel. Non-conscious activities cannot inhibit behavior. Conscious and non-conscious contents do not affect or interfere with non-conscious activities.

semi-conscious

Biofeedback can control heartbeat rate, extremity temperature, and extremity sweating, but people have no voluntary muscle or gland control, do not feel anything specific, and do not know how control works. People cannot feel or control blood chemical concentrations, even with biofeedback.

conscious then non-conscious

Learning to swim, bicycle, play sport, or perform skilled actions is under conscious control, but, after learning, these activities do not use conscious control or attention to bodily feelings.

conscious or non-conscious

Performing skilled procedures, such as reading, can be automatic or can use attention. Skilled behaviors, situations involving divided attention, somnambulism, and involuntary regulatory responses can happen with or without consciousness. Stimulus intensity below objective-threshold level is too low for perception. Stimulus intensity above objective-threshold level causes perception, possibly without awareness. At subjective-threshold level, people begin to detect sense qualities [Dehaene et al., 1998] [Morris et al., 1998] [Morris et al., 1999] [Whalen et al., 1998].

always conscious

Behaviors can be always conscious, such as remembering, deciding, willing, choosing, and talking to oneself [Berns et al., 1997] [Cheesman and Merikle, 1986] [Cleeremans et al., 1998] [Curran, 2001] [Destrebecqz and Cleeremans, 2001] [Ellenberger, 1970] [Holender, 1986] [Jacoby, 1991] [Kolb and Braun, 1995] [Merikle, 1992] [Merikle et al., 2001] [Reingold and Merikle, 1990].

cognition

Memory uses non-conscious activities. Perception uses non-conscious activities. People can have perception without consciousness [Marcel, 1983] [Marcel and Bisiach, 1988] [Merikle et al., 2001] [Peirce and Jastrow, 1885] [Sidis, 1898].

zombie

Perhaps, people can be always non-conscious, like rocks, plants, and invertebrates. It is possible to imagine people who do everything we do but, like zombies, have no experiences.

subconsciousness

Stimuli can be near sensation or perception threshold, other stimuli can mask them, or brain can be in low-alertness low-consciousness states, so feelings, sensations, and perceptions are secondary, weak, or below awareness {subconsciousness}|.

1-Consciousness-States-Exceptional States

exceptional human experience

In lucid dreaming, hypnotic regression, mental flow, religious ecstasy, mystical visions, and psychic states {exceptional human experience}, people can feel unity with nature [White, 1990].

automatism

When awake, people can perform skilled or random behaviors without consciousness {automatism}|. People cannot remember automatisms. Deep sleep or disoriented state can follow automatism.

Epileptic automatism lasts for several minutes, originates in temporal lobe, impairs awareness or causes unconsciousness, and involves chewing, smacking lips, moving arms or hands in organized but purposeless patterns, laughing, acting scared, and using isolated words.

Sleepwalking is automatism [Callwood, 1990].

After hypnosis, people can perform skilled behaviors on command.

ecstasy

Obsessive love can involve excited mental states {ecstasy, state}.

higher consciousness

Mental states {higher consciousness} can have higher-than-normal alertness, sensation, perception, and awareness. Inspiration is an awake and conscious but unaware state.

hypnosis

People can go into trances {hypnosis, state} {mesmerism}, in which they are susceptible to dissociation and/or suggestion [Lynn and Rhue, 1991]. Perhaps, hypnosis involves dissociation [Janet, 1929] [Prince, 1906]. Newer theory {neo-dissociationist theory} {neo-dissociation theory} attributes hypnosis to dissociation: conscious perception and/or behavior are inhibited, but unconscious perception and action continue [Hilgard, 1986].

biology

Nerve signals reach cerebral cortex and cause normal-amplitude electrical responses, so only conscious sense responses decrease or dissociate. Electroencephalograms are similar to awake EEGs and do not resemble sleep EEGs. Suggestion can lower anterior-cingulate-cortex activity and lower pain. Sulfones and urethane cause hypnosis.

effects

Hypnosis can influence human performance and provoke sense deception, hallucination, anesthesia, analgesia, and post-hypnotic amnesia. Dissociations, not suggestions, cause hypnotic anesthesia and analgesia. Suggestion can affect waking state, as well as hypnotic state. Hypnotized subjects do not perform actions against their morals.

When hypnotized, people seem to suppress self. Hypnotist can assume executive control. When hypnotized, selves {hidden observer} still know what hypnotized body is doing and feeling.

Hypnotized people know everything but do not let the knowledge into consciousness. When hypnotized, people unconsciously know what hypnotized body is doing and feeling. Hypnotized people are conscious and experience sense qualities but are unaware.

Hypnosis does not increase long-term memory retrieval. Long-term memories retrieved under hypnosis are unreliable. Memories about early infancy are probably not true. People can remember what happened while under hypnosis.

Hypnosis can cause physical effects. Stomachs can swell in phantom pregnancies. Limbs can become stiff. Blisters can appear where people think that heat was. Stigmata can appear on hands. Itching can start.

Hypnosis does not confer special strength or abilities.

Hypnosis cannot cure organic nervous or mental illnesses.

behavior

Hypnotized people explain strange actions by rationalizations {trance logic}. Hypnotized people do not introspect.

Hypnotized people can choose not to follow suggestions and can end hypnotized states. People act as they think they should.

Hypnosis often involves playing roles, and roles can be just acting. Perhaps, hypnosis involves role-playing or faking. If people have hidden observers, they are also role-playing or faking [Colman, 1994] [Spanos, 1991] [Wagstaff, 1994].

properties

Hypnosis has enhanced suggestibility. Hypnosis has sustained mental concentration. Hypnosis restricts attention to a small field. Hypnotized people have diminished time sense. Hypnosis voluntarily suspends initiative and will. Hypnotic trance involves identification with others. Hypnosis has incomplete contact with reality. Hypnosis reduces self-consciousness and critical appraisal.

Hypnosis is a social situation with subject and hypnotizer. Hypnotic trance involves rapport between subject and hypnotist. Hypnotist personality does not make much difference.

requirements

Medical hypnosis requires sympathetic but authoritarian relationship between doctor and patient and cooperative attitude by patient. Hypnosis requires passivity. Hypnotic trance involves ability to pretend and fantasize.

Hypnotic susceptibility correlates with treating works of fiction as real, identifying strongly with parents, blurring fantasy and reality, pretending, and believing people. Alternatively, hypnotic susceptibility increases, equally in males and females, if people have the same temperament as the opposite-sex parent. Leisure-activity similarity is more important than work or professional-value similarity. If people do not identify with either parent, they are not susceptible to hypnosis.

Motivation alone is not enough for hypnosis. Hypnosis does not require relaxation. Imagination has no relation to hypnotizability.

factors

Children between 8 and 12 hypnotize more easily than older or younger children. Children below age 8 cannot concentrate. Children above age 12 are more critical. Women and men are equally hypnotizable and hypnotize to same depth [Lynn and Rhue, 1991]. Personality type does not correlate with hypnosis.

comparisons

Hypnosis is not like sleep or dreaming. Hypnosis involves lethargy, drowsiness, and diminished contact with reality, like sleep. However, muscles do not relax, and reflexes are normal.

Both hysteria and hypnosis involve dissociation.

sensory deprivation state

After prolonged low stimulation, people feel stress, have poor eye focusing, lose visual size constancy, lose visual shape constancy, have hallucinations, and become disoriented {sensory deprivation state} {sensory deprivation reaction}. Sensory deprivation states are conscious, with experienced sense qualities, but unaware. The "I" or self persists through sensory deprivation. Sensory deprivation can cause mental confusion, paranoid delusions, fear, and panic, but some people welcome sensory deprivation. People can see subjective phosphene sparks or light patterns after deprivation.

suspended animation

From coldness or drugs, body functions can slow greatly {suspended animation}|.

trance

Hypnosis-like or automatism-like states {trance} can be conscious, with experienced sense qualities, but unaware. Tribal shamans can go into trances. Hypnosis and sleepwalking are trances.

1-Consciousness-States-Hallucination

hallucination

People can have sense perception in the absence of, or unrelated to, external stimuli {hallucination, state} [Ffytche et al., 1998] [Frith, 1996] [Green and McCreery, 1975] [Manford and Andermann, 1998] [Vogeley, 1999]. In hallucinations, red can seem blue, high voice can sound low, sweet can seem sour, and pain can be pleasurable.

Visual hallucinations are most common and typically show real persons. People see hallucinations as objects in space but know them to be false perceptions. Colors are typically reds, oranges, and yellows [Siegel and Jarvik, 1975] [Siegel and West, 1975] [Siegel, 1977] [Siegel, 1992]. Motions in hallucinations are often rotations or radial motions [Siegel and Jarvik, 1975] [Siegel and West, 1975] [Siegel, 1977] [Siegel, 1992]. Spirals and wiggly lines, circles, concentric figures or tunnels, webs, repeated lines, and intense colors are common in hallucinations [Bressloff et al., 2002] [Cowan, 1982] [Klüver, 1926]. People can have sound hallucinations [Gurney et al., 1886] [Sidgwick et al., 1894] [West, 1948].

behavior

People are passive during hallucination and feel that they have no control over recurring images and obsessions. The "I" or self persists through hallucinations.

perception

Other information cannot correct hallucinations. People cannot distinguish hallucination from perception {argument from illusion, hallucination}, except later by comparison and memory. Perception, dream, and hallucination experiences and sense qualities are similar.

causes

High arousal, low vigilance, perception impairment, reality-testing impairment and reduction, high expectation, long wakefulness or busyness, sickness, starvation, sensory deprivation, prolonged low stimulation, sleep deprivation, and rituals with rhythmic movements or sounds cause hallucinations.

Dreaming has visual hallucinations, such as hypnagogic hallucination and hypnopompic hallucination. Hypnosis can provoke hallucinations. Prolonged isolation causes anxiety and hallucinations. People with autistic thinking have hallucinations. People with paranoia have hallucinations. People with schizophrenia have hallucinations, typically voices talking to or about them.

Perhaps, memory release or imagination inhibition, when normal sensory data flow stops or changes, causes hallucinations [Jackson, 1887].

Launay-Slade Hallucination Scale indicates whether people are susceptible to hallucinations.

causes: biology

Temporal-lobe stimulation can cause hallucinations. Anti-opiate drugs and phenothiazines cause hallucinations.

Epileptics can have autoscopy. People with migraine headaches can have autoscopy. Females have more hallucinations.

comparisons

Illusions are perceptions that differ from actual metric measurements. Illusions and hallucinations have similar sense qualities.

Imagery is distinguishable from hallucination. Imagery is under voluntary control, while hallucination is not. Hallucinations are about unreal or idiosyncratic objects or events, while imagery is about physical and cultural reality [Frith, 1995] [Slade and Bentall, 1988].

Near-death experiences are similar to autoscopic hallucinations.

autoscopy

People can see their own clear, monochromatic, transparent, life-sized, and moving mirror image {doppelganger} {autoscopy}|. People typically see face, head, and/or trunk at one meter, for several seconds. Social factors can determine forms that ghosts take. Images copy postures, facial expressions, and movements. Autoscopy occurs mostly at late night or dawn.

Autoscopy can happen during stress, fatigue, or disturbed consciousness. Delirious patients with parieto-occipital lesions, people with migraine attacks, and epileptics can have autoscopy.

Charles Bonnet syndrome

People who become blind can hallucinate {Charles Bonnet syndrome} [Ffytche, 2000] [Ffytche and Howard, 1999] [Ramachandran and Blakeslee, 1998].

heautoscopic experience

Out-of-body experience can involve seeing one's own body {heautoscopic experience}.

near-death experience

People can have visions when in danger, in hospital, or during attempted suicide {deathbed visions} {near-death experience}| (NDE). Near-death experiences can have tunnels or entry into darkness, out-of-body experiences, bright lights or emergence into light, peaceful and loving feelings, strange worlds, life-history memories, and choices to go back to the living world [Moody, 1975] [Morse, 1990] [Morse, 1993] [Parnia and Fenwick, 2002] [Parnia et al., 2001].

There can be peaceful feelings, out-of-body experiences, entries into darkness, visions of light, and emergences into light {Greyson NDE scale} [Ring, 1980].

Experiences can be regressions to childhood states. Mind feels love, peace, acceptance, and pureness, with focused attention, no criticism, and no available alternatives. Most near-death experiences are pleasant, but some are like hell [Parnia et al., 2001] [van Lommel et al., 2001].

stages

People first hear noises or move fast down tunnels or valleys. Then they feel that they are outside body but in same physical environment. Loneliness and timelessness feelings follow, with low emotions. People are invisible to others and cannot communicate. People feel no weight or other sense qualities. People feel peace, calm, joy, and love. People can know others' thoughts. Then friends or relatives that have already died come as spiritual helpers. Among them is a being of light, with personality. This being asks mental questions about readiness for death. Then people see a fast, accurate summary of their life from childhood to present. Then a barrier or border, a no-return line, approaches. However, people feel that they should go back and live, because it is not yet time, they have not yet done something, or people are calling them back. Then, after brief unconsciousness, people return to the physical body through the head. Afterward, people feel that they must try to learn and love, with no fear of death or judgment and no worries about heaven or hell.

causes

Perhaps, unusual brain states cause near-death experiences {dying brain hypothesis}, as anoxia, stress, and fear activate brain [Blackmore, 1993].

Brain is often clinically dead or damaged {brain dead}, but the experience may have happened before that [van Lommel et al., 2001].

No drugs cause near-death experiences [Parnia et al., 2001] [van Lommel et al., 2001].

comparisons

Near-death experiences are similar to experiences from high brain carbon-dioxide levels, to well-being feelings caused by brain endorphins, to autoscopic hallucinations, to LSD experiences, and to sensory-isolation experiences. Near-death experiences have no typical physiological symptoms [Parnia et al., 2001] [van Lommel et al., 2001].

out-of-body experience

Hallucinating people can see world from locations outside physical body {out-of-the-body experience} {out-of-body experience}| (OBE). One-fifth to one-quarter of people have at least one OBE during their lifetimes, often as children. Out-of-body experiences typically last from seconds to minutes [Blackmore, 1992] [Green, 1968]. Out-of-body experience can involve heautoscopic experience. Imagined-world model or representation replaces normal perceptual model. From above, people see imagined models. Models project what people see from another viewpoint. People feel that they perceive from positions different from head position [Alvarado, 1982] [Alvarado, 1992] [Blanke et al., 2002] [Grüsser and Landis, 1991] [Morris et al., 1978] [Penfield, 1955] [Penfield, 1958] [Tart, 1968].

If original body stays behind, people feel that they are in body or have no body [Green, 1968].

The experience feels like real life, not like dreams, and is often life-changing [Gabbard and Twemlow, 1984].

causes

Muscular relaxation, exhaustion, monotonous sounds, and certain drugs can disrupt both sense input and body image to cause OBE. Out-of-body experiences typically happen when people relax and voluntary muscles are not moving, so internal stimulation is low. Body image lessens, as in drowsiness [Blackmore, 1992] [Green, 1968]. OBE can happen when outside stimulation is low.

People can have out-of-body experiences in depersonalization reactions.

Drugs that relax body and reduce body image can induce out-of-body experiences [Morse, 1990] [Persinger, 1983] [Persinger, 1999] [Shermer, 2000].

Near-death experiences often involve out-of-body experience.

Perhaps, out-of-body experiences involve temporal lobe [Morse, 1990] [Persinger, 1983] [Persinger, 1999] [Shermer, 2000].

comparisons

OBE relates to hypnotizability [Blackmore, 1996] [Gackenbach and LaBerge, 1988] [Irwin, 1985].

OBE relates to imagination, absorption, and belief in psi [Blackmore, 1996] [Gackenbach and LaBerge, 1988] [Irwin, 1985].

More lucid dreaming correlates with more out-of-body experiences [Blackmore, 1996] [Gackenbach and LaBerge, 1988] [Irwin, 1985].

Out-of-body experience is like vivid dreaming. OBEs are like dreams that people know are dreams. Out-of-body experiences are similar to stage one dreaming.

factors

OBE has no relation to age, education, gender, mental health, or religion.

pseudohallucination

Hallucinations {pseudohallucination} can be as vivid and immediate as perceptions, but people do not realize they are false. Pseudohallucinations are subjective responses to isolation or intense emotional need.

1-Consciousness-States-Impaired States

coma

Patients can have few reflexes, no reactions to sense stimuli or body signals, no awareness, no arousal, no consciousness, no experiences, no voluntary movements, and no waking {coma, mental state} {comatose}|. Patients keep eyes closed. Patients typically do not recover.

causes

Both-hemisphere brainstem-nuclei trauma or oxygen deprivation can cause coma. Posterior upper brainstem arousal-system damage can cause coma. Coma always involves anterior and posterior intralaminar thalamus nuclei damage. Rostral pons and dorsal midbrain damage, or mesencephalic reticular formation and thalamus damage, causes coma for one to seven days. Paramedian thalamic damage causes long-term coma [Giacino, 1997] [Plum and Posner, 1983] [Schiff, 2004] [Schiff and Plum, 2000] [Zafonte and Zasler, 2002] [Zeman, 2001].

Metrazol induces coma but is no longer used in psychiatric treatment. Insulin induces coma.

minimally conscious state

Patients can sleep and wake, have some sensory reactions, have some voluntary movements, and have some self or environment awareness {stupor} {minimally conscious state}. Stuporous means semi-conscious and semi-aware, with few sensations.

causes

Damage to cortico-striatopallidal-thalamocortical loop disconnects frontal lobes, basal ganglia, and thalamus. Bilateral anterior medial cortex, basal ganglia, and basal forebrain damage causes stupor.

A schizophrenia type involves excitement and then stupor.

types

Bilateral anterior-medial-cortex, basal-ganglia, and basal-forebrain damage, typically from anterior cerebral-artery aneurysm, can cause no motion, except to look around (akinetic mutism). Medial-caudal-thalamus, medial-dorsal-mesencephalon, caudate-nucleus, globus-pallidus, and medial-forebrain-bundle damage can cause no memory, slow behavior {slow syndrome}, and apathy.

Extensive temporal-lobe, parietal-lobe, and occipital-lobe junction damage can prevent self or environment awareness but allow coordinated behavior (hyperkinetic mutism).

vegetative state

Patients can have no voluntary movements, can have no reactions to sense stimulation or body signals, can have intermittent arousal and eye opening, and can sleep and wake {vegetative state}|. They can have reflexes and eye blinks [Celesia, 1997] [Laureys et al., 2000] [Laureys et al., 2002].

causes

Both-hemisphere brainstem-nuclei trauma or oxygen deprivation can cause vegetative state. Permanent vegetative state patients have bilateral thalamic damage but little cerebrum damage.

time

People can stay in vegetative state more than 30 days {persistent vegetative state}. People can stay in vegetative state for much longer {permanent vegetative state}.

unconsciousness

People can be in mental states in which they have no voluntary movements, no sensations, no perceptions, no awareness, no experiences, no event or object memories, and no functioning mind {unconsciousness, state}|. Unconsciousness is not awakeness, sleep, coma, stupor, or vegetative state. Body functions automatically. Unconscious people cannot use habits or perform voluntary behaviors. Unconscious people have no sensations or perceptions. Unconscious people cannot use declarative memories. All mammals can become unconscious, and ability to become unconscious indicates previous consciousness [MacIntyre, 1958].

causes

Unconsciousness occurs when people are asleep and not dreaming, have received a brain concussion, have finished an epileptic episode, have anesthesia, or have fainted.

Trauma from high physical pressure, such as concussion, causes brainstem damage. Low blood-oxygen concentration, low blood-glucose concentration, low blood flow, and low blood pressure affect brainstem. Blood nitrogen-gas bubbles can affect brainstem neurons [Forster and Whinnery, 1988] [Rossen et al., 1943] [Whinnery and Whinnery, 1990].

1-Consciousness-States-Meditation

meditation

People can learn to suspend physical and mental responses to stimuli {meditation}. Typically, learning to meditate takes practice over a long time. People can suspend judgment, analysis, planning, and emotion. People can ignore anxiety. People can feel nothingness, silence, self-expansion, transcendence, immanence, divine knowledge, enlightenment, cosmic consciousness, oneness, samadhi, or satori [Deikman, 1966] [Deikman, 2000] [Farthing, 1992] [Newberg and D'Aquili, 2001] [Wallace and Fisher, 1991] [Watts, 1957]. People can feel that there is no self, because responses are low [Austin, 1998]. Meditation is conscious but unaware, with experienced sensations. Mental states achievable by meditation can have or appear to have no representations.

Meditation is not daydreaming or drowsiness, because it involves alertness, concentration, and control [Fenwick, 1987]. True meditation does not block outside stimuli from consciousness.

concentration

Meditation concentrates on objects, locations, actions, or thoughts. Meditation suppresses attending and orienting. While concentrating, people ignore thoughts or attend to other thoughts without further thought.

Concentration can be on thoughts, narratives, or descriptions, such as Spiritual Exercises of Ignatius of Loyola [1500 to 1600] or Four Divine Abidings of Theravada Buddhism. Four Divine Abidings are kindness, compassion, happiness, and calm.

Concentration can be on images or their properties, as in Tantric-Buddhism and Tibetan-Buddhism Vajrayana, including guru yoga.

Concentration can be on koans, as in Zen-Buddhism Rinzai School and Soto School. Mumonkan or Gateless Gate and Hekiganroku or Blue Cliff Record have koans.

Concentration can be on mantras, as in Hinduism and Transcendental Meditation. The Jesus Prayer of Eastern Orthodoxy is mantra-like.

Concentration can be on actions, such as breathing.

Concentration can be on locations, such as mandalas or points between eyes.

biology

Meditation does not change left brain/right brain activity [Austin, 1998] [Fenwick, 1987] [Newberg and D'Aquili, 2001] [Ornstein, 1977] [Ornstein, 1992] [Ornstein, 1997].

Meditation EEG differs from sleep or awakeness EEG. EEG theta and delta rhythms increase during meditation. Right and left hemispheres synchronize more [Bagchi and Wenger, 1957] [Kasamatsu and Hirai, 1966].

methods

Meditation requires low light and sound. Meditators can face blank walls in quiet rooms. Meditators can concentrate on one stimulus, such as attending to breathing, saying mantras, saying koans, or looking at low-contrast objects. Meditation can use repeated movements, like thumb touching fingertips in succession or breathing from abdomen, not chest [Austin, 1998].

comparisons

Meditation often leads to daydreaming, but then it is not meditation [Austin, 1998] [Fenwick, 1987].

Meditation often leads to sleeping, but then it is not meditation [Austin, 1998] [Fenwick, 1987].

In religion, prayer can be meditation.

Resting is just as good at reducing arousal and dealing with stress as meditation [Farthing, 1992] [Holmes, 1987].

religion

Meditation is common in various religions [Ornstein, 1986] [Ornstein, 1992] [Ornstein, 1997] [West, 1987].

Zen Buddhism has hua tou, shikantaza, and zazen. Meditation can use prayer wheel. Meditation exercises can develop concentration to achieve pure insight and tranquility {vipassana nana}. Meditation can achieve serenity and mindfulness {sammapatti, meditation}, the highest dhamma.

In Hinduism, magic sound repetitions {mantra} can concentrate mind on gods. Om Mani Padme Hum {jewel in center of lotus} is a well-known mantra (from Tibetan Buddhism). Icon contemplation can concentrate mind on gods. Yoga is meditation. Meditation and concentration try to identify human mind with, or allow possession by, God or truth. Meditation reveals true self, by reaching stages.

The Sufism Islam branch is a mystical philosophy and uses meditation for personal union with God. Sufism is about divine illumination, not behavior. Meditation is to attain higher-reality knowledge.

1-Consciousness-States-Meditation-Zen

lotus position

Meditation uses sitting positions. Standing up is too stimulating, and lying down leads to sleep. Good sitting positions {lotus position}| can have no tension or pain but keep meditator alert {full lotus position} {half lotus position} {Burmese position}. Meditators can sit on low benches with knees on floor and lower legs under bench. Hands can be palm up or palm down, on knees or in lap.

hua tou

Concentrative meditation {hua tou} pays attention to one object or event, such as breathing or chanting.

shikantaza

Meditation methods {shikantaza} can be just sitting, being attentive to everything.

zazen

In Zen Buddhism, open meditation {zazen} is consciousness without response, with open eyes looking at a plain wall.

chi shi

In Zen, the pure-consciousness state can stop breathing {chi shi}. While the person is still conscious, nerve-activity level reduces until breathing stops for about 30 seconds, and then normal breathing resumes.

1-Consciousness-States-Mystical Experience

mystical experience

People can have unwilled ineffable insightful feelings {mystical state} {mystical experience}. People feel spiritual or divine presence, deep meaning, and/or unity with universe. People feel that everything is blissful, joyful, simple, and clear. People can feel nothingness, silence, self-expansion, transcendence, immanence, divine knowledge, enlightenment, cosmic consciousness, oneness, samadhi, or satori. Mystical experience can seem sacred or holy [James, 1902] [Kennett, 1972]. Mystical states are conscious but unaware, with experienced sensations.

levels

Mystical experience can have different stages or levels. People can have insight into non-physical existence or divine and good power {awakening, mystical}. People can choose to become pure, live correctly, and discipline self to reach divine level {purgation}. People can receive enlightenment or feel divine presence or ultimate reality {illumination, mystical}. People can feel that self is preventing them from reaching ultimate level or that effort is never enough {dark night, mystical}. People can feel loss of self and unity with ultimate {union, mystical}. People can feel that they have no more self. People can feel surrounded by colored light. People can feel calm, bliss, and joy. People can experience all physical reality intensely. People can experience consciousness clearly and purely.

properties

During mystical experience, people are passive with no will or identity. People feel outside time and space or experience unlimited space and eternal time. People can sense a happy, ineffably good, complete, and dominant spirit, or an evil, horrible, and repulsive spirit.

Mystical experiences last from a half-hour to several hours.

causes

Depression and despair can trigger mystical state, as can meditation, prayer, nature, art, music, and worship.

LSD and psilocybin cause mystical experiences.

memory

People cannot describe or think about mystical feelings that they had before [Underhill, 1920].

cosmic consciousness

People can feel immortal and/or infinite {cosmic consciousness}, at one with universe [Bucke, 1901] [Stace, 1960].

docta ignorantia

People cannot know God {docta ignorantia}, because he combines opposites. People can know the infinite only mystically [Nicholas of Cusa, 1440].

flash in mysticism

Unseen power or mysterious light {flash, mysticism} {illumination, mysticism}, felt in head, seems to possess tribal chieftains, priests, or medicine men.

lotus ladder

In Hinduism, Kundalini yoga takes practitioner through stages {lotus ladder} from everyday dullness, to sex, to power and achievement, to compassion, to self and sex conquest, to god-like vision, and to pure ecstasy.

oceanic boundary loss

In religious and mystical selfless states, self boundaries can dissolve {Ozeanische Selbstentgrenzung} {oceanic boundary loss}.

prophecy

People can feel that they receive insight {prophecy} {revelation, mysticism} from God or angel. Prophecy is knowledge about mystical experiences [Avicenna, 1020]. However, different revelations reflect personal lives and contradict each other.

religious ecstasy

Ecstasy can involve religion {religious ecstasy, mystical}. Mystical experience is often religious experience. People can feel that they experience something, beyond physical world or throughout physical world, that is divine, powerful, and good. People can feel God's presence [Hardy, 1979] [Persinger, 1999]. People can feel that they have no individual self but are part of something divine. People can feel possession by spirits. Religious ecstasy is conscious but unaware.

Buddhism

In Buddhism, ecstasy is one Eightfold-Path component. Buddha felt nirvana and nothingness, with no individualness and total mystical knowledge. In Shin Buddhism in Japan or Pure Land Buddhism in China, meditators can repeat mantras {nembutsu} {namu amida butsu} about the Cosmic Buddha (Amida) to try to reach nirvana, feel insight about themselves, and go beyond ordinary life and consciousness to the pure land. Emptiness {netti} with no thoughts or sensations is pure consciousness or being. The Cosmic Buddha combines the Buddha of Boundless Light (Amitabha) with the Buddha of Boundless Life (Amitayus). The actual embodied Buddha was Shakyamuni Buddha.

Christianity

Gianlorenzo Bernini depicted religious ecstasy in his Ecstasy of St. Theresa sculpture. In Christianity, people can feel God and have deep knowledge and understanding, as described by Eckhart.

Perceptions and facts mirror the finite, so people can know the finite world by perception. Finite world is contingent and temporal. Concepts mirror the infinite. Infinite world is absolute and without time. People cannot know the infinite, because finite and infinite have no relations. People cannot know God (docta ignorantia), because he combines opposites. People can know the infinite only mystically [Nicholas of Cusa, 1440].

Greek mythology

A cult in Asia Minor and Greece {cult of Dionysus} was about nature, ecstasy, and passion [-600 to -450].

Hinduism

In Hinduism, people can feel bliss {tasting the sweetness} {savikalpa samadhi} in awareness of god. Devotional yoga {bhakti yoga} concentrates on god and its qualities. Atman joins with Brahman {becoming the sweetness} {nirvikalpa samadhi}. People can feel insight about themselves, going beyond ordinary life and consciousness, with no thoughts or sensations, only emptiness. In the Advaita School, this is the highest meditation state. Kundalini yoga takes practitioner through lotus-ladder stages from everyday dullness, to sex, to power and achievement, to compassion, to conquest of self and sex, to vision of God, and to pure ecstasy.

Judeo-Christian

Ecstasy allows miracles and prophecies. In this mystical state, people have feeling of knowing, not only desire to know. People can prepare for this state and be worthy, by love, truth, faith, prayer, and will and sense suppression. However, ecstasy is God's gift [Philo Judaeus, 40].

Sufism

Islam has a mystical philosophy that uses meditation for personal union with God. Sufism is about divine illumination, not behavior. Meditation is to attain higher-reality knowledge. Sufism has seven stages to salvation: repentance, abstinence or fear of God, piety and detachment, poverty, patience or ecstasy, trust in and surrender to God, and contentment.

Taoism

In Taoism, tao (way or path) is transcendent, as ultimate reality, and immanent, as universe itself. Tao is order, serenity, and grace in life. Tao emphasizes simple living, with no desires, much contemplation, and few activities. Taoism values spontaneity, naturalness, and openness. In Esoteric Taoism, tao is psychic power of societal links and so relates to mysticism. In popular Taoism, tao relates to magic.

1-Consciousness-States-Sleep

sleep

Brain chemical cycles cause awakeness and sleep {sleep, state}. Sleep can be unconscious or have dreaming.

causes

Monotony, warmth, and restricted movement make people sleepy. Waiting for something that cannot happen yet can make people sleepy. Regular physical exercise, good-quality firm mattress, warm but ventilated room, malted milk drink, and sexual satisfaction at bedtime promote good sleep. Deep sleep can follow epilepsy.

causes: biology

Melatonin induces sleep at night {sleep inducer} and peaks just before morning. Neurosteroid induces sleep, can be analgesic at high concentration, and comes from cholesterol or progesterone. Sleep peptide is in brain, cerebrospinal fluid, and cerebral blood and can induce sleep.

Brain stops making monoamine neurotransmitters. Monoamine oxidase breaks down monoamines. Monoamines no longer excite motor neurons, and acetylcholine excites motor neurons. However, monoamines still go to eye-muscle nerves. When asleep, amygdala inhibits pons, which activates medial medulla, which inhibits motor neurons.

awake

When awake, forebrain inhibits amygdala, which excites pons, which inhibits locus coeruleus, which excites muscles. Monoamines block sleep by exciting motor neurons. At awakening, acetylcholine is low, and serotonin and norepinephrine are high.

brain

Arousal system, hypothalamus, locus coeruleus, raphé nuclei, and reticular nucleus affect sleep. During NREM sleep, thalamus-cortex pathways have inhibition. During REM sleep, thalamus-cortex pathways have no inhibition but receive only small input.

Pons reticular activating system has norepinephrine-, serotonin-, dopamine-, and acetylcholine-secreting neurons and has pathways to brainstem neurons. Reticular activating system neurons can inhibit afferent axons from senses and efferent axons to muscles.

animals

Higher invertebrates and chordates have rest phases. Sleep is only in vertebrates. Fish and amphibia sleep briefly or just rest. Ancient reptiles have only NREM sleep. Recent reptiles and birds have NREM sleep and some REM sleep. Mammals have NREM sleep and more REM sleep. Mammals that are more immature at birth have more REM sleep. For mammals, REM sleep is at its highest percentage at birth and decreases with age. Larger mammals sleep more. In dolphins, one hemisphere NREM-sleeps for several hours, then the other hemisphere NREM-sleeps, so they can continue to breathe.

Sleep is an instinct. Sleep evolved separately from dreams [Horne, 1988].

amount

In all species, sleep amount is directly proportional to waking metabolic rate. Animals with higher body temperatures, shorter reaction times, and more fat sleep longer. Birds and mammals that are not secure from predators sleep only for minutes at a time. Predators, who can sleep safely, sleep longer.

Newborns sleep 80%, with seven or eight naps per day. 12-to-18-month-old toddlers sleep 50%. Three-year-old children sleep 40%, and REM sleep is 20% of sleep. Teenagers and adults sleep 30%. Older adults have shorter and more broken sleep.

In adults, sleep amount is proportional to body weight.

purposes

Sleep causes more protein synthesis and less cellular work and so aids growth. Perhaps, sleep simplifies brain processes by removing alternative pathways. Perhaps, simple brain-activity patterns repeat and return neurons to sense and motor readiness.

animal hypnosis

A sleep-like state {animal hypnosis} can follow extreme stimulation.

sleep deprivation

People who have little sleep {sleep deprivation} cannot stay awake, have frequent small sleeps, fail to notice things, and lose attention. After little sleep, attention fails first. Little sleep for many days can cause rising temperature and then death. CX717 maintains performance after sleep deprivation. Inadequate sleep causes most fatigue.

somnambulism

People can get up from sleep and walk automatically {somnambulism}| {sleepwalking}. For example, children can walk half-asleep to lavatory and return to bed. Sleepwalking is an automatism and can be without consciousness [Broughton et al., 1994] [Callwood, 1990] [Jacobson et al., 1965] [Kavey et al., 1990] [Masand et al., 1995] [Moldofsky et al., 1995] [Puccetti, 1973] [Revonsuo et al., 2000] [Schenck and Mahowald, 1998] [Vgontzas and Kales, 1999]. Sleepwalking is unconscious and unaware, with no sensations.

properties

Sleepwalking lasts up to 30 minutes. Sleepwalking has purposeful movements. People can avoid obstacles and return to bed. They typically have poor coordination, are clumsy, and are unreliable. Sleepwalkers do not go anywhere unusual. Motions are smaller than normal. Eyes are open. Somnambulism can happen during orthodox sleep early at night, with large slow EEG waves, because muscle output has no inhibition. Sleepwalking occurs more in deep sleepers.

Sleepwalking can have talking. Night terror can accompany sleepwalking.

factors

Sleepwalking is more frequent with daytime anxiety.

Sleepwalking is more common among children.

Sleepwalking is hereditary.

comparisons

Sleepwalking trances are like hysterical dissociation. People look dazed, preoccupied, and unresponsive.

memory

After waking, people do not remember sleepwalking.

1-Consciousness-States-Sleep-Dreaming

dreaming

Dreams {dreaming} are free association narratives about self, with typical movements and surroundings [Aristotle, -350] [Cavallero and Foulkes, 1993] [Krakauer, 1990] [Louie and Wilson, 2001] [Malcolm, 1959]. Dreaming is unconscious and unaware but experiences sensations. In dreams, consciousness does not monitor cognitions.

sleep

Dreams can happen during rapid-eye-movement deep sleep [Hobson et al., 1998].

Orthodox sleep has little dreaming. Non-rapid-eye-movement-sleep dreams are mostly when first falling asleep or before waking. People remember them as well as REM-sleep dreams, but they are less interesting and have different subjects [Braun et al., 1998] [Hobson et al., 1998].

Sleep evolved separately from dreams [Horne, 1988]. Perhaps, dreams just happened when sleep evolved [Flanagan, 2000].

properties

Dreams are typically about play, recreation, and home, not current events, work, or exotic things. Dreamers are in the action, not just watching things happen. Dreams are not just watching a show. Dreams typically have strangers and friends, who are typically same age as dreamer. Family members appear less often. Both sexes appear equally. People typically change into someone else.

Almost all dreams have movements, with movement illusions. Dreams never violate arithmetic or geometry laws. Dreams have conscious episodes, each with consistent features. Episodes have no connections. However, people can distinguish one night's dreams from other-night dreams.

Dreams do not have reading, writing, or conversations between people, but may have implied conversations. People never dream rational analysis, only associations. Dreams tend to project meaning onto stimuli.

Dreams seem like movements in and through real scenes during stories, but typically have false perceptions and false beliefs, with poor memory.

One-third of dreams have color. People can always have or never have color dreams.

Complex dreams commonly have incongruity, unspecified objects, and some discontinuity. Adults and children have same proportions of discontinuity, unspecified objects, and incongruity. Adults have more complex and bizarre dreams than children do. Children's dreams are more about family and friends.

emotion

Dreaming has mostly anxiety, less frequently joy, and even less frequently anger. One-third of dreams have happy feelings. Dreams are mostly pleasant but can have anger and apprehension. Sadness, shame, and remorse are infrequent. Least common emotions are affection and eros. Erotic dreams are less than 10% of adult dreams.

Dream misperceptions can increase anxiety, and anxiety can increase misperceptions. One-third of dreams have strong anxiety and fear. Two-thirds of dreams have anxiety, fear, guilt, or sadness. As dreams continue, they get sadder. Dreams with anxiety do not have penile erections.

Dream emotion levels correlate with heart rate and skin potential. If heart beats faster and breathing rate increases, dream has anxiety. Dreams have more aggression than waking life. Emotional reactions to dream events are appropriate. Men and women have same dream emotions.

movements

Jerky eye movements, limb twitches, face twitches, middle-ear muscle twitches, and sudden respiratory changes are phasic REM-sleep components. Muscle relaxation and penile erections are tonic features. As night progresses, REM periods contain more phasic components, and dreams are more active and less passive. Limb movements relate to dreams with movement. Small face, finger, head, and limb twitches, with most other muscle activity suppressed, show dream is about running, flying, or swimming. Dreams have rapid eye movements that can follow dream movements. Large eye movements relate to dream content [LaBerge, 1985] [LaBerge, 2000]. Dreams have dilated pupils.

perception

Perception during dreaming uses same brain regions as perception during awakeness. The strongest dream perception is visual. Dream visual images are typically in color. Audition perceptions are weak. Touch, temperature, taste, and smell perceptions are very weak.

brain damage

People blind since birth have only auditory dreams. If blindness is from secondary (association) visual-cortex damage, dreams have no seeing. Primary-cortex blindness still allows seeing in dreams.

People are faceless in dreams of people who cannot identify faces [Kaplan-Solms and Solms, 2000] [Solms, 1997].

Patients with hemi-neglect cannot see dream right or left half.

development

20-week-old fetuses have REM sleep, indicating dreaming [Empson, 2001].

causes

Dreams are about recent events or ongoing problems. Events around sleeper during dreams often are in dreams. Human brain can respond to word meanings during sleep and have related dreams. Depressed people have dreams that contain failure and loss.

comparisons

Dreams have more characters and settings than fantasies. Unlike fantasies, dreams are not menacing and do not cause paranoia. In dreams, people often change into someone else, which never happens in fantasies.

Dreaming is like delirium, not dementia. Like delirium, dreams have time and place disorientation, visual hallucinations, distractibility, attention deficit, recent memory loss, and insight loss. Dreaming is like organic mental syndrome, such as caused by drugs or Alzheimer's disease.

Out-of-body experiences are similar to stage one dreaming.

behavior

Dreams do not change awake behavior [Hobson, 2002].

will

People cannot will dreams, though they can will in dreams if not in deep sleep. People cannot be responsible for dreams, so dreams cannot be sins.

interpretation

Dream-interpretation theories are invalid [Hobson, 2002] [Webster, 1995].

purposes

Perhaps, dreams help consolidate memories [Hobson, 2002] [Vertes and Eastman, 2002]. Perhaps, dreams help clear brain memory circuits and help to selectively forget [Crick and Mitchison, 1983].

Perhaps, dreams are activity rehearsals and are like playing or practice [Humphrey, 1983] [Humphrey, 1986] [Humphrey, 1992] [Humphrey, 2002].

Perhaps, dreams are rehearsals or practice against threats {threat simulation theory, dream} [Rossetti and Revonsuo, 2000] [Revonsuo, 2000].

brain

Dreams start in ponto-geniculo-occipital (PGO) system, which locus-coeruleus catecholamines activate. Pons controls reticular activating system [Braun et al., 1998] [Hobson et al., 1998]. Perhaps, dreams are forebrain interpretations of midbrain signals. During dreams, brain blocks sense input.

If people are conscious or dreaming, high-amplitude electroencephalography waves arise in pons, radiate to geniculate body, and then go to occipital cortex.

Brainstem is active in REM sleep, and REM sleep has different transmitters from NREM sleep. Brainstem multiple motor-pattern generator excitations cause increased sense qualities [Empson, 2001].

Dreams have low cortex output and input, so brainstem inhibition from cortex is low. During dreams, cortex has no motor-neuron output. Area V1 and areas nearby deactivate during dreaming, while fusiform gyrus and medial temporal lobe activate. If area V1 has damage, people can still have visual dreams, but removing secondary visual cortex causes visual dreams to cease.

Frontal cortex has low activity during dreaming.

daydreaming

Idle thinking {daydreaming}| can be conscious but unaware and experience sensations. While awake and in unchanging environments, people talk and daydream more, and then talk and daydream less, in 90-minute to 100-minute cycles. Frontal-lobe damage, such as from drugs, can stop daydreaming.

false awakening

People can dream that they are waking {false awakening}. During false awakening, people can hallucinate {metachoric experience}.

hypnagogic hallucination

As people fall asleep, they can have brief dreamlets {hypnagogic hallucination} {hypnagogic image}. Images can be vivid. Human will can control hypnagogic states [Maury, 1848].

hypnopompic hallucination

As people wake up, they can have brief dreamlets {hypnopompic hallucination} {hypnopompic image} [Mavromatis, 1987].

latent dream level

Dreams have two levels, actual dream {manifest dream level} and unconscious symbolizations {latent dream level}. Perhaps, symbols are repressed wishes.

lucid dreaming

In some dreams {lucid dreaming}|, dreamers know that they are dreaming [Blackmore, 1992] [Gackenbach and LaBerge, 1988] [Green, 1968] [Hearne, 1978] [Hobson, 2002] [van Eeden, 1913]. More lucid dreaming correlates with more out-of-body experiences.

night terror

Children age 10 to 14 can have terror, shrieking, and sleepwalking {night terror}| {pavor nocturnus} in orthodox sleep early at night. Night terrors are more frequent with greater daytime anxiety. People never remember night terrors in the morning.

nightmare

Scary dreams {nightmare}| are about anxieties and can happen during REM sleep, later at night. Having nightmares is hereditary.

1-Consciousness-States-Sleep-Problems

sleep problems

People can have trouble sleeping {sleep, problems}. Depression has shortened sleep, with no deep non-REM sleep and earlier, longer, and more intense first REM sleep. Fever-causing peptides from bacteria increase non-REM sleep but not REM sleep.

bed-wetting

Nighttime bed urination {bed-wetting} can happen during orthodox sleep early at night. It is more frequent with daytime anxiety.

REM-sleep behavior disorder

During sleep, brain may not inhibit motor neurons {REM-sleep behavior disorder} (RBD). Pons lesions can allow movements during REM sleep.

sleep paralysis

Paralysis {sleep paralysis}| {night nurses' paralysis} can begin before REM sleep or stay after REM sleep, as well as when just falling asleep or in narcolepsy [Parker and Blackmore, 2002] [Spanos et al., 1995]. In sleep paralysis, people can be afraid, hear noises, float, or feel presences, weight on chest, touches, or vibrations [Cheyne et al., 1999] [Persinger, 1999].

1-Consciousness-States-Sleep-Problems-Narcolepsy

narcolepsy

Daytime sleepiness, muscle-tone loss, and/or consciousness loss {narcolepsy}| can follow laughing or stress.

The brain pathway that causes muscle-movement loss during sleep is altered. Forebrain inhibits amygdala, which excites pons, which inhibits locus coeruleus, which excites muscles. Amygdala inhibits pons, which activates medial medulla, which normally inhibits motor neurons.

Perhaps, narcolepsy is an autoimmune disorder [Guilleminault et al., 1976] [Guilleminault, 1976] [Siegel, 2000].

Narcolepsy relates to an antigen {human leukocyte antigen} (HLA).

Hypocretin peptide neurotransmitter mutations can cause mammalian narcolepsy.

cataplexy

In people with narcolepsy, anger, fear, laughter, anticipation, or joy can cause sudden voluntary-muscle relaxation {cataplexy}| [Wu et al., 1999]. Cataplexy preserves consciousness.

1-Consciousness-States-Sleep-Sleep Cycle

sleep cycle

When sleeping, people go through four non-REM-sleep stages {sleep cycle}, separated by short REM-sleep periods. Sleep cycles last 90 minutes and have short dreaming stage 1, then stage 2, then stage 3, then stage 4, then stage 3, then stage 2, then dreaming stage 1, and then waking. In stage 1, heart rate and respiration rate increase, and brain is active [Dement, 1972]. Sleep gets lighter through the night; deep sleep is greatest early in the night, around 2 AM.
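
The stage sequence described above can be sketched as a minimal Python model. The stage labels and 90-minute cycle length come from the text; the function name is illustrative:

```python
# Minimal sketch of one sleep cycle as described above: descent through
# NREM stages 1-4, then ascent back to dreaming stage 1.
CYCLE_MINUTES = 90  # cycle duration stated in the text

def sleep_cycle_stages():
    """Return the ordered stage sequence for one 90-minute cycle."""
    descent = [1, 2, 3, 4]
    ascent = [3, 2, 1]
    return descent + ascent

stages = sleep_cycle_stages()
```

With four or five such cycles per night, the full sequence repeats, with later cycles spending less time in the deep stages.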

AIM model

Awake/NREM-sleep/REM-sleep cycle has different properties at each stage {AIM model}. Sleep-cycle stages have different Activation levels, Input and output, and neurotransmitter Modulation.

activation

Activation is from pons reticular activating system and has pathways to nearby brainstem areas, thalamus, and spinal cord. Awakeness and REM sleep have high-frequency low-amplitude EEG waves. NREM sleep has low-frequency high-amplitude EEG waves. Stage-2 NREM sleep has a distinctive sleep-spindle EEG.

Cortical regions differ in activation cycles, input, output, and modulation. Hypothalamus suprachiasmatic nucleus starts NREM sleep and controls progress through NREM sleep. REM sleep activation goes from pons to lateral geniculate to occipital (PGO). Reticular formation blocks spinal-cord sense and motor activity during REM sleep [Hobson, 1989] [Hobson, 1994] [Hobson, 1999] [Hobson, 2002] [Hobson et al., 1998].

input and output

Reticular activating system neurons can inhibit afferent axons from senses and efferent axons to muscles. For awakeness, input comes from outside, and output goes to muscles. For NREM and REM sleep, inputs only come from inside, with no muscle output.

modulation

Modulation is by norepinephrine, serotonin, dopamine, and acetylcholine secretions from pons reticular-activating-system neuron axons. Awakeness has high norepinephrine, serotonin, and dopamine and low acetylcholine. REM sleep has low norepinephrine and low serotonin but moderate dopamine and high acetylcholine. NREM sleep has neither high nor low neurotransmitter levels.

Cholinergic axons go to amygdala and multisensory posterolateral cortex and fire when eyes move.
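
The modulation levels stated above can be written as a small data table. This is a hypothetical encoding of the text's qualitative levels (the "moderate" entries for NREM render its "neither high nor low" description); key names are illustrative:

```python
# AIM-model modulation levels per state, as stated in the text.
# "moderate" stands in for NREM's "neither high nor low" levels.
MODULATION = {
    "awake":      {"norepinephrine": "high", "serotonin": "high",
                   "dopamine": "high", "acetylcholine": "low"},
    "nrem_sleep": {"norepinephrine": "moderate", "serotonin": "moderate",
                   "dopamine": "moderate", "acetylcholine": "moderate"},
    "rem_sleep":  {"norepinephrine": "low", "serotonin": "low",
                   "dopamine": "moderate", "acetylcholine": "high"},
}
```

Reading across a row gives one state's neurochemical profile; reading down one transmitter shows how its level cycles through the night.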

cycles

Sleep has four or five cycles. First cycle has long deep NREM sleep and short REM sleep. Last cycle has long REM sleep and short shallow NREM sleep.

non-rapid eye movement sleep

Regular sleep {orthodox sleep} {non-rapid eye movement sleep}| {NREM sleep} {light sleep} has only small eye movements.

properties

Consciousness is not present in slow-wave sleep. NREM sleep has little dreaming but seems to have "thinking". Both REM and non-REM sleep can have talking. Words relate to thoughts or dreams.

amount

NREM sleep is 80% of human sleep.

animals

Only vertebrates have NREM sleep. Ancient reptiles have some NREM sleep. Recent reptiles and birds have NREM sleep and little REM sleep. Mammals have NREM sleep and REM sleep.

In dolphins, one hemisphere NREM-sleeps for several hours, then other hemisphere NREM-sleeps, so they can always breathe.

causes

Melatonin, which brain makes more at night, promotes NREM sleep. During NREM sleep, acetylcholine changes from low to high. During NREM sleep, serotonin and norepinephrine change from high to low.

NREM sleep releases growth hormone, decreases adrenaline and corticosteroid levels, and increases cortisol and testosterone.

Raphe-system serotonin acts on thalamus layer-five and layer-six neurons to start light sleep.

Serotonin constricts pupils.

biology

NREM sleep has low frontal cortex activity, low cortical activity, high limbic activity, and high forebrain sleep-on-cell activity.

In NREM sleep, nerve cells synchronize at low frequency.

Hypothalamus suprachiasmatic nucleus starts NREM sleep and controls progress through NREM sleep.

purposes

Perhaps, non-REM sleep reduces free-radical damage.

rapid eye movement sleep

Sleep {paradoxical sleep} {rapid eye movement sleep}| {REM sleep} {deep sleep} can have dreaming.

properties

REM sleep has limited consciousness. REM sleep has detailed dreams. Both REM and non-REM sleep can have talking. Words relate to thoughts or dreams. REM sleep completely relaxes most body muscles and stops many reflexes but has rapid eye movements. In men, REM sleep has penile erections. During REM sleep, mammals have no temperature control.

amount

Paradoxical sleep is 20% of sleep.

20-week-old fetuses have REM sleep, indicating dreaming. For mammals, REM sleep is at highest percentage at birth and decreases with age. Three-year-old children and adults sleep 20% in REM sleep.

REM sleep diminishes with anxiety.

Recent reptiles and birds have NREM sleep and little REM sleep. Mammals have NREM sleep and REM sleep. Mammals who are more immature at birth have more REM sleep.

causes

REM sleep has high acetylcholine, from brainstem, but low serotonin and norepinephrine, with no sense input.

REM sleep diminishes with adenosine, barbiturate, benzodiazepines, depressants, interleukin, and sedatives.

biology

REM sleep has high limbic activity, low cortex input and output, no sense input, and no motor neuron output. REM sleep-on cells are highly active. REM sleep has faster brain blood flow than wakeful rest.

Awakening sense thresholds are highest in REM sleep, except for stage-4 sleep.

REM sleep activation goes from pons to lateral geniculate to occipital lobe (PGO).

factors

Men and women have same REM-sleep activation system and REM sleep amounts. In people with intellectual disability, REM-sleep percentage is proportional to intelligence level.

purposes

Perhaps, REM sleep is for monoamine decrease. REM sleep is probably not for readiness or memory consolidation.

1-Consciousness-Sense

sensation

Light, sound, liquid chemicals, air chemicals, temperature, pressure, and motion stimulate sense receptors, which change neurons, which affect brain states {sensation}|. Sensation is local and does not establish current environment or organism state.

types

Senses {sense} are carotid body, defecation, hearing, hunger, kinesthesia, magnetism, nausea, pleasure, pain, smell, taste, thirst, touch, urination, vestibular system, and vision.

properties

Sensations have intensities, qualities, times, and locations. Vision spectrum has one octave with no higher harmonics, colors mix, and area fills. Hearing spectrum has ten octaves, pitches do not affect each other, and area does not fill. Touch uses cell translation to indicate pressure and stress. Smell and taste use vibrations to indicate bonding.

Sense-property matrix shows properties that senses share and how property values vary among senses. Matrix columns are senses: vision, hearing, touch, temperature, kinesthesia, vestibular system, smell, and taste. Matrix rows are sense space, time, intensity, and frequency categories. For space, categories are inside-body/outside-body and continuous/discrete. For time, categories are fade/not-fade and continuous/discrete. For intensity, categories are low-magnitude/middle-magnitude/high-magnitude. For frequency or quality spectrum, categories are blending/not-blending and one-octave/more-octaves. Sensations relate two or more separated points within one psychologically simultaneous time interval and so are non-local.
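
The matrix structure described above can be sketched as data. This is a hypothetical encoding: the senses (columns) and category rows come from the text, but cell values are left unfilled because the text does not assign a value to each sense:

```python
# Skeleton of the sense-property matrix described above.
# Columns are senses; rows are the space/time/intensity/frequency categories.
SENSES = ["vision", "hearing", "touch", "temperature",
          "kinesthesia", "vestibular", "smell", "taste"]

CATEGORIES = {
    "space": ["inside-body/outside-body", "continuous/discrete"],
    "time": ["fade/not-fade", "continuous/discrete"],
    "intensity": ["low/middle/high magnitude"],
    "frequency": ["blending/not-blending", "one-octave/more-octaves"],
}

# Cell values are None placeholders awaiting per-sense values.
matrix = {sense: {row: None for row in CATEGORIES} for sense in SENSES}
```

Filling the cells (for example, vision's one-octave frequency spectrum versus hearing's ten octaves, both stated earlier) would complete the comparison the text describes.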

similarities

Different senses have similar sense qualities. Sound and vibration are similar, because sound is fast vibration. Hearing, temperature, and touch involve mechanical energy.

Whites, grays, and blacks relate to temperature, as do warm and cool colors. White relates to vibration as noise. Sight affects balance.

Smell and taste mix. Sight, taste, and smell use chemical reactions. Smell and fluid-like touch mix. Taste and fluid-like touch mix.

causes

Sensations depend on physical light-frequency ranges; sound-frequency ranges; taste-molecule acidities and polarities; smell-molecule shapes, sizes, and vibrations; temperature increases and decreases; or tension, torsion, and compression changes.

biology

Sensation involves cerebellum, inferior occipital lobe, inferotemporal cortex, lateral cerebellum, and ventral system. Sensation can vary neuron number, diameter, length, type, molecules, membranes, axons, recurrent axons, dendrites, cell bodies, receptors, channels, and synapses. Sensation can vary neuron spatial arrangement, topographic maps, neuron layers, and neuron networks. Neurons can vary firing rates, sums, thresholds, neurotransmitter packets per spike, packet sizes, synapse shapes, and synapse sizes. Dendrites and axons can have different numbers, lengths, connections, and patterns to detect sequences, shapes, functions, and relations.

biology: sensors

Sensor properties match stimuli, and sense-surface events mirror physical-object events. Light sensors form pigment surface, and physical surfaces have pigments. Sound sensors vibrate at same frequencies as source vibrations. Touch receptors have strains, and skin surfaces have strains. Taste and smell receptors are molecules that are complementary to sensed molecules.

biology: network

Human nervous systems have integrated central and peripheral nerves that form a three-dimensional network, a space lattice. Variable lattice spacing can make space continuous. Perhaps, lattices have write and read connectors, like touch screens or magnetic-core memories.

biology: carrier wave

A carrier wave with constant amplitude and frequency can carry information by frequency modulation (varying the frequency) or amplitude modulation (varying the amplitude). Perceptual cortex appears to have physical carrier waves with frequencies of 20 to 80 Hz, on which amplitude-modulation patterns occur to represent perception. Sensory inputs form the carrier wave.
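
The amplitude-modulation idea above can be illustrated numerically. The 40 Hz carrier is within the 20 to 80 Hz range the text states; the envelope frequency, modulation depth, and sampling rate are arbitrary example values, and the slow envelope merely stands in for a perceptual pattern:

```python
import math

# Amplitude modulation: a constant-frequency carrier whose amplitude is
# varied by a slow envelope, as in the cortical carrier-wave idea above.
CARRIER_HZ = 40.0  # within the 20-80 Hz range stated in the text
MOD_HZ = 2.0       # illustrative slow envelope frequency
DEPTH = 0.5        # illustrative modulation depth

def am_signal(t):
    """Carrier at time t (seconds) with amplitude-modulated envelope."""
    envelope = 1.0 + DEPTH * math.sin(2 * math.pi * MOD_HZ * t)
    return envelope * math.sin(2 * math.pi * CARRIER_HZ * t)

# One second sampled at 1 kHz.
samples = [am_signal(i / 1000.0) for i in range(1000)]
```

The carrier's frequency never changes; only the envelope, and hence the instantaneous amplitude, carries the pattern.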

space

Skin-surface touch receptors can detect space contours. Muscle and tendon proprioception receptors can detect space distances and angles. Smell and taste systems work with touch skin-surface receptors. Touch, proprioception, smell, and taste systems make body-periphery space. Hearing can locate sounds in space outside body. Vision can locate objects in space outside body and measure distances and angles. Human brains connect outside space to body-periphery space, to make egocentric space.

evolution

Senses evolved to detect energy types. Sense receptors evolved to capture the most-useful stimuli. Brain evolved to represent the most-useful information. Body structures and processes evolve from previous designs, which constrain evolution. Evolution has no plan or pattern. First sense responded to high-intensity physical energy, was undifferentiated, and caused avoidance, withdrawal, or approach behavior.

activity principle

Perception attributes have active neuron groups {activity principle, perception}.

essential node

Specific sense qualities need specific brain regions {essential node} [Adolphs et al., 1999] [Zeki, 2001].

intermodal perception

Senses can work alone {unimodal perception}. At most neuraxis levels, sense inputs converge {intermodal perception}. Object relationships depend on intermodal connections, not just vision. Taste and smell, and touch and kinesthesia, have strong connections.

animals

All animals use intermodal and unimodal perception. Humans and apes recognize objects through fast intermodal processes and slower unimodal processes.

effects

Intermodal is better than unimodal for response reliability, impulse number, peak impulse frequency, and discharge-train duration. Intermodal sense associations can anticipate sequential stimuli from different sense modes.

learning

Learning in one sense does not transfer to another sense.

development

At human, ape, and monkey birth, object perception does not separate input into separate senses, uses one process involving all senses, and does not analyze features. Later, humans separate stimuli into different senses by cerebral-cortex inhibitory mechanisms, analyze sense features using symbols, and then combine features intermodally. For example, vision-cortex lip-movement analysis and auditory-cortex tone-and-sound-location analysis coordinate.

intelligence

Children with intellectual disability and children with dyslexia have more difficulty with multisensory stimuli than with unimodal stimuli.

space and senses

Visual and phenomenal spaces {space and senses} are bounded three-dimensional manifolds, with objects and events.

length units

Distances between retinal ganglion cells make fundamental visual-length units. See Figure 1.

angle units

Fundamental length units establish angle units.

triangulation and distances

Using length and angle units, triangulation can find planar distances. See Figure 2.
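
The triangulation step above can be made concrete with the law of sines: given a known baseline between two viewpoints and the angle each viewpoint makes to a target, the target's distance follows. The function name and numbers are illustrative, not from the text:

```python
import math

# Planar triangulation by the law of sines, as sketched above.
def triangulate(baseline, angle_a, angle_b):
    """Distance from viewpoint A to the target.

    angle_a, angle_b: interior angles (radians) at the two ends of the
    baseline, between the baseline and each line of sight to the target.
    """
    angle_target = math.pi - angle_a - angle_b  # triangle angles sum to pi
    # Side opposite angle_b is the A-to-target distance.
    return baseline * math.sin(angle_b) / math.sin(angle_target)
```

For example, with a unit baseline and angles of 45 and 90 degrees, the target lies at distance sqrt(2) from the first viewpoint.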

intensities and distances

Perhaps, stimulus intensity versus distance follows a sigmoid curve. See Figure 3.
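
A sigmoid of the kind proposed above is the logistic function; midpoint and steepness parameters are illustrative (for intensity falling with distance, the steepness would be negative):

```python
import math

# Logistic sigmoid, the standard S-shaped curve referenced above.
def sigmoid(x, midpoint=0.0, steepness=1.0):
    """Smoothly saturating curve from 0 to 1, centered at midpoint."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))
```

The curve is flat at both extremes and steepest at the midpoint, so it compresses very near and very far stimuli while discriminating best at intermediate distances.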

convergence and distance

For all senses, stimuli are in larger space, and signals converge on smaller neuron arrays. See Figure 4.

translation matrix and distance

From distance information, topographic-map local neuron assemblies calculate translation matrices that place oriented surfaces away from brain at space points.

timing mechanism

Brain timing alternates excitation and inhibition. See Figure 5.

mass center

For flexible structures with only internal forces, mass center does not move. Outside forces move mass center. See Figure 6.

topographic maps

Retinal ganglion cells, thalamic neurons, and cortical neurons form arrays with equal spacing between neurons. See Figure 7.

surface orientation

Surfaces perpendicular to sightline have highest intensity. Surfaces at smaller angles have lower intensities. See Figure 8.

Surfaces perpendicular to light-source direction have highest intensity. Surfaces at smaller angles have lower intensities. See Figure 9.
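
Both statements above follow a cosine law: intensity is highest for a surface perpendicular to the sightline or light-source direction and falls with the cosine of the tilt. A minimal sketch, with normalization chosen for illustration:

```python
import math

# Cosine falloff of surface intensity with tilt, as stated above.
def relative_intensity(tilt_radians):
    """Relative intensity of a surface tilted away from perpendicular.

    0 radians (perpendicular) gives maximum intensity 1.0; tilts at or
    beyond 90 degrees give 0 (surface edge-on or facing away).
    """
    return max(0.0, math.cos(tilt_radians))
```

Because the falloff is gradual near zero tilt and steep near edge-on, intensity gradients across a surface carry usable information about its orientation.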

processes: spatial and temporal relations

Modified ON-center and OFF-center neurons can detect spatial and temporal relations. For example, neuron can have horizontal band at center to detect space between two objects, band above to detect object above, or band below to detect object below.
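
The band-detector idea above can be sketched in one dimension: a unit with an OFF (inhibitory) center and ON (excitatory) flanks responds most strongly where a dark gap separates two bright objects. The weights, scene, and function name are illustrative:

```python
# One-dimensional sketch of a modified OFF-center detector, as above:
# excitation from bright flanks minus inhibition from the center band.
def gap_detector(signal, center):
    """Respond when the center is dark and immediate neighbors are bright."""
    excitation = signal[center - 1] + signal[center + 1]  # ON flanks
    inhibition = 2 * signal[center]                       # OFF center
    return excitation - inhibition

# Two bright objects (1s) separated by a dark gap (0) at index 3.
scene = [0, 1, 1, 0, 1, 1, 0]
responses = [gap_detector(scene, i) for i in range(1, len(scene) - 1)]
```

The response peaks exactly at the gap between the two objects; shifting the band above or below center would instead detect an object above or below, as the text notes.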

processes: spatial layout

For positions, features, objects, scenes, and events, observing systems use object and object-property placeholder configurations to represent spatial layouts. Object and object property placeholders include smooth texture, rough texture, enclosed space, and open space. Observing systems replace object and object property placeholders with values.

Mathematical functions can represent spatial layouts. Functions with parameters or roots can describe surface and region boundaries. Waves with parameters and/or samples can describe functions and repeating or cyclic perceptions. Distributions with samples can describe surfaces and regions. Space distances and angles can describe shapes and patterns.

processes: space and time development

Body movements cause correlated sensations. As babies move body and limbs, they encounter air, fluid, and solid surfaces, including own body. For example, walking and running establish airflow gradient from front to back. Correlating sensations and movements, brain builds position and relation memories and places surfaces in body-centered space. From surfaces, brain builds horizontal ground, front and back, up and down, right and left, vertical, straight-ahead, and across. From directions and coordinates, brain learns what happens when body moves from place to place and so locates body parts and surfaces in space.

From timing information, brain builds before and after memories and makes event sequences. From sequences, brain builds overall sequence and absolute beginning and end and past and future. From time coordinates, it learns what happens when it moves from time to time and locates body parts and surfaces in time.

properties: three dimensions

Midbrain tectum and cuneiform nucleus map three-dimensional space using multimodal neurons, whose axons envelop reticular thalamic nucleus and other thalamic nuclei. Spatial processing involves frontal lobe.

properties: continuity

Perceptual space never breaks into discrete parts during movement or blinking. Space has no twinkling, vibration, or oscillation. Perceptual space has no discontinuities. Visual processing happens at neuron scale, but perceptions are at much larger scale. Neuron assemblies overlap and represent different sizes. Visual processes add and decay over time. Visual processing averages over time and space.

specific nerve energies

Sensation type depends on special neurons {doctrine of specific nerve energies} {specific nerve energies doctrine} {specific nerve energies law} {law of specific nerve energies}, not on what stimulates them. The applied physical energy does not matter. Stimulating retina with light or pressure makes only sights. Sending sense receptor signals to, or electrically stimulating, nerve fibers makes only one sensation.

perception evolution

Perception evolved from Protista to humans {perception, evolution} {evolution, perception}.

protozoa

Stimulus Detection: Cell-membrane receptor molecules respond to pressure, light, or chemicals.

Potential Difference: Cell-membrane ion channels actively transport ions across membrane, to build concentration gradients and set up electric-voltage differences, and open and close to vary membrane potential locally.

marine metazoa

Neurons and Glands: Ectoderm develops into sense receptors, nerves, and outer skin. Mesoderm develops into muscles and glands, which release hormones to regulate cell metabolism. Endoderm develops into digestive tract.

Neuron Coordination: Sense receptors and neurons have membrane electrical and chemical connections, allowing information transfer and cell coordination.

Nerve Excitation: Excitation raises membrane potential to make reaching impulse threshold easier or to amplify stimuli.

Nerve Inhibition: Inhibition damps competing weaker stimuli to leave stronger stimuli, or more quickly damps neuron potential back to resting state to allow timing.

Bilateria

Bilateral Symmetry: Flatworms have symmetrical right and left sides and have front and back.

Ganglia: Neuron assemblies are functionally organized.

deuterostomes

Supporting Systems: Deuterostome embryos have enterocoelom; separate mouth, muscular gut, and anus; and circulatory system. Embryo inner tube opens to outside at anus, not head.

Chordata

Body Structure: Larval and adult stages have notochord and elongated bodies, with distinct heads, trunks, and tails and repeated body structures.

Nervous System: Chordates have head ganglion, dorsal hollow nerve, and peripheral nerves.

Reflexes: Sense receptors send electrochemical signals to neurons that send electrochemical signals to neurons that send electrochemical signals to muscle or gland cells, to make reflex arcs.

Interneurons: Interneurons connect reflex arcs and other neuron pathways, allowing simultaneous mutual interactions, alternate pathways, and networks.

Association: Interneurons associate pathway neuron states with other-pathway neuron states. Simultaneous stimulation of associated neurons modifies membrane potentials and impulse thresholds.

Attention: Association allows input acknowledgement and so simple attention.

Circuits and Sequences: Association series build neuron circuits. Outside stimulation causes electrochemical signal flows and enzyme releases. Circuit flows calculate algorithms and spread stimulus effects over time and space. Circuits have signal sequences. Circuit sets have signal patterns.

Receptor and Neuron Arrays and Feature Detection: Sense-receptor and neuron two-dimensional arrays detect spatial and temporal stimulus-intensity patterns, and so constancies, covariances, and contravariances over time and/or space, to find curvatures, edges, gradients, flows, and sense features.

Topographic Maps and Spatial and Temporal Locations: Neuron arrays are topographic, with spatial layouts similar to body surfaces and space. Electrochemical signals stay organized spatially and temporally and so carry information about spatial and temporal location. Topographic maps receive electrochemical-signal vector-field wave fronts, transform them using tensors, and output electrochemical-signal vector-field wave fronts that represent objects and events.

Memory: Secondary neuron arrays, maps, and circuits store associative-learning memories.

Recall: Secondary neuron arrays, maps, and circuits recall associative-learning memories, to inhibit or excite neuron arrays that control muscles and glands.

vertebrates/fish

Brain: Hindbrain has motor cerebellum and sleep, wakefulness, and sense ganglia. Midbrain has sense ganglia. Forebrain has vision occipital lobe, hearing-equilibrium temporal lobe, touch-temperature-motor parietal lobe, and smell frontal lobe.

Balance: Vestibular system maintains balance.

fresh-water lobe-finned fish

Hearing: Eardrum helps amplify sound.

amphibians

Early amphibians had no new sense or nervous-system features.

reptiles

Cortex: Paleocortex has two cell layers.

Vision: Parietal eye detects infrared light.

anapsids, diapsids, synapsids, pelycosaurs, pristerognathids

Early anapsids, diapsids, synapsids, pelycosaurs, and pristerognathids had no new nerve or sense features.

therapsids

Hearing: Outer ear has pinna.

Thermoregulation: Therapsids have thermoregulation.

cynodonts, Eutheria, Tribosphenida, monotremes, Theria

Early cynodonts, Eutheria, Tribosphenida, monotremes, and Theria had no new nerve or sense features.

mammals

Neocortex: Neocortex has four cell layers.

Vision: Vision sees color.

Stationary Three-Dimensional Space: Vision has fixed reference frame and stationary three-dimensional space.

insectivores

Vision: Forward vision has eyes at face front, and eye visual fields overlap.

primates, prosimians, monkeys

Early primates, prosimians, and monkeys had no new nerve or sense features.

Old World monkeys

Vision: Vision is trichromatic.

apes

Vision: Chimpanzees and humans over two years old can recognize themselves using mirror reflections and can use mirrors to look at themselves and reach body-surface locations.

anthropoid apes

Frontal Lobes: Neocortex frontal lobes are about memory and space, planning and prediction.

hominins

Multisensory Cortex: Neocortex has multisensory regions and two more cell layers, making six layers.

humans

Brain: Frontal lobes have better spatial organization. Parietal lobes have better communication. New associational cortex is for perception and motion coordination.

Language: Neocortex has language areas.

1-Consciousness-Sense-Field

inside sense

Posture, movement, and pain perception {inside sense, field} detect stimuli from inside body.

outside sense

Sight, hearing, touch, taste, and smell perception {outside sense, field} detect stimuli from outside body.

1-Consciousness-Sense-Physiology

sense physiology

Brain processes make sensations {sense, physiology}. Intensity is about amplitude, flux, and energy. Spatial location and extension are about size, shape, motion, number, and solidity. Time interval is about sequences, frequency, and before and after. Quality is about timbre.

physiology

Senses measure intensive quantities (pressure, temperature, concentration, sound, and light) using receptors that accumulate energy, an extensive quantity, on small surfaces over time intervals. Absorbed energy displaces mass and electric charge and becomes potential energy. Sense-cell altered-molecule potential energies can transfer energy to other molecules. Light-energy absorption changes retinal-receptor-molecule atom arrangements. Sound-energy absorption moves inner-ear hair-cell hairs and basilar membrane. Mechanical energy absorption stretches skin touch receptors. Heat energy absorption or loss moves cell receptor membrane in skin hot-or-cold receptor cells. Chemical-energy absorption by smell and taste receptors bonds molecules to receptors and alters molecule atom arrangements.

Senses analyze signal-wave amplitude, phase, and frequency differences and ratios to make spatial, temporal, intensity, and frequency patterns. Information flows represent intensive quantities.

To detect, neurons can sum inputs to add and pass thresholds. To sum, neurons can take continued sums and so perform integration. To model physical interactions, neurons can add logarithms to multiply. To find solutions, factors, probabilities, combinations, and permutations, neurons can sum logarithms to find continued products. To perform algebra and calculus operations, neuron assemblies calculate sums, differences, products, divisions, mu operations, differentials, integrals, exponentials, and logarithms. To perform geometric operations, neuron assemblies calculate rays, splines, lines, lengths, distances, angles, boundaries, areas, regions, region splits, region joins, volumes, triangulations, and trilaterations. To use spaces, neuron assemblies detect coordinates, directions, coordinate origins, spatial positions, vectors, matrices, tensors, symmetries, and groups. To use objects, neuron assemblies detect self, not-self, patterns, features, objects, and object relations.
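
The multiplication-by-logarithms step above can be sketched in a few lines (a toy illustration, not a neuron model; the function name multiply_via_log_sum is invented):

```python
import math

def multiply_via_log_sum(factors):
    # Summing logarithms and exponentiating recovers the continued product,
    # the trick the entry ascribes to neuron assemblies.
    return math.exp(sum(math.log(f) for f in factors))

# Continued product 2 * 3 * 4 = 24, recovered from a sum of logs:
print(round(multiply_via_log_sum([2.0, 3.0, 4.0]), 6))  # 24.0
```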

signals

Electrical signals can vary in amplitude, speed, frequency carried, rate, noise, sensitivity, threshold, attack and decay slope, phase, integration, dissemination, feedback, feedforward, control, querying, alternation, regulation, filtering, and tuning. Chemical signals can vary in type, concentration, diffusion, active transport, release, packet size, reactivity, and energy release.

signals: continuous/discrete

Brain has discrete neurons, neurotransmitter packets, nerve impulses, and molecules. Discrete processes can transfer and store information without degradation, perform logic operations, and represent categories.

Sense stimuli are discrete. Light is a photon stream. Sound is a phonon stream. Smells and tastes have individual molecule binding. Temperature and pressure are individual molecule movements. Receptors convert stimulus energy into ion and molecule motions. However, particles are small and many, and act on millisecond time scales. Over macroscopic space and time, stimuli appear continuous in intensity, spatial location and extension, time location and duration, and quality.

signals: vibrations

Touch receptors can detect mechanical vibrations up to 20 to 30 hertz, which are also the lowest frequency vibrations detected by hearing receptors. Below 20 Hz, people feel pressure changes as vibration, rather than hearing them as sound. Images flashed at 20-Hz rates begin to blend. 20 Hz is also maximum breathing, muscle-flexing, and harmonic-body-movement rate. Muscle contractions up to 20 times per second make "butterflies" in tummy, trembling with anger or fear, damping of depression, or excitations of joy. Animals can have spring-like devices that allow higher muscle-vibration rates.

effects

Sensations tend to cause reflex motor actions, which brain typically suppresses. Sensations excite and inhibit brain processes.

Sensations from voluntary muscles provide feedback after actions, for reward and punishment [Aristotle, -350].

measurement

Brain can measure relative and absolute distances, times, masses, and intensities. Measurements have accuracy, precision, reproducibility, selectivity, and sensitivity.

measurement: units

Mass, length, and time are fundamental measurements. During development, brain measures intensity ratios to build measurement units. Brains calculate distances using triangulation, linear perspective, and geometry [Staudt, 1847] [Veblen and Young, 1918]. Brain can detect distance difference of one degree arc. Brains can measure mass by linear or angular acceleration or by moment around axis, using combined sight and touch. Brain can detect mass difference of 100 grams. Perhaps, some neurons signal at millisecond and longer intervals to provide brain clocks for time measurement. Brain can detect time difference of 0.03 milliseconds.
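
The triangulation calculation can be sketched as the perpendicular distance to a target sighted from both ends of a baseline (function name and scenario are illustrative; angles in radians):

```python
import math

def perpendicular_distance(baseline, alpha, beta):
    # Height of the triangle above the baseline, from the two base angles:
    # d = b * sin(alpha) * sin(beta) / sin(alpha + beta)
    return baseline * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# Two viewpoints 2 units apart, each sighting the target at 45 degrees:
d = perpendicular_distance(2.0, math.radians(45), math.radians(45))
print(round(d, 6))  # 1.0
```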

measurement: accumulator

To measure extensive quantities, chemical or electrical accumulators can sum an intensive quantity sampled over time or space.

measurement: contrast

Neurons perceive relative intensity differences and intensity ratios. For example, eye receptors respond mainly to illumination changes, not to steady light. Receptors detect change over time. Receptor pairs detect differences over space.
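
The change-over-time and difference-over-space detectors can be sketched as successive differences (illustrative names):

```python
def temporal_contrast(signal):
    # A receptor adapting over time responds to successive changes.
    return [b - a for a, b in zip(signal, signal[1:])]

def spatial_contrast(field):
    # A receptor pair responds to the difference between neighbors.
    return [b - a for a, b in zip(field, field[1:])]

# Steady light gives zero temporal response; a step gives one transient:
print(temporal_contrast([5, 5, 5, 9, 9]))  # [0, 0, 4, 0]
```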

processes

Perception factors stimuli into irreducible features, objects, and events.

processes: paths

Complex systems have enough parts, connections, and subsystems to have and regulate internal flows. Brain has a central flow and many other pathways and circuits. Central processing stream uses synchronized sequential signals, with feedback, feedforward, and other regulatory signals. Reticular activating system and brainstem start depolarization streams and so are basic to consciousness. Cerebrum constructs streams of consciousness.

processes: test signals

Like radar or sonar, brain scanning sends parallel signals through brain regions to obtain return-signal patterns.

processes: space

Neurons detect constants, variables, first derivatives, and second derivatives to determine distances and times and so create space and time, using extrapolation, interpolation, differentiation, integration, and optimization.

processes: motion minimization

Brain spatial and time coordinates minimize and simplify object motions and number of objects to track, using fixed reference frames. Fixed reference frames make most object motions two-dimensional straight-line motions, which aid throwing and catching. In moving reference frames, more objects appear to move, and motions are three-dimensional curves.

processes: nulling

In size-weight illusions, mass discrimination seems to use nulling. Nulling can explain Weber-Fechner stimulus-sensation law.

processes: operations

Local sensory operations involve finding boundary, determining boundary orientation, increasing contrast, decreasing similarities, and detecting motion [Clarke, 1995]. Global sensory operations involve head and body movements, object trajectories, feature comparisons, and event sequences.

processes: resonation

To resonate, neuron pairs excite interneuron, which excites both neurons equally, while each paired neuron inhibits other paired neuron. If paired neurons fire asynchronously, interneuron signal has low amplitude and no frequency. If paired neurons fire synchronously, interneuron signal has high amplitude at input-signal frequency. Changing number of neurons and synapses traversed, or changing axon lengths, changes frequency.

Resonance detects synchronicity and so association. Interneurons can send resonating signals forward to other neurons.
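
The synchrony detection described above can be sketched with binary spike trains summed bin by bin (a toy scheme; names are invented):

```python
def interneuron_peak(train_a, train_b):
    # Sum the two trains bin by bin; the peak is high only when
    # both inputs fire in the same bins, i.e., synchronously.
    return max(a + b for a, b in zip(train_a, train_b))

sync_peak = interneuron_peak([1, 0, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0])
async_peak = interneuron_peak([1, 0, 0, 1, 0, 0], [0, 0, 1, 0, 0, 1])
print(sync_peak, async_peak)  # 2 1
```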

processes: sampling

Body moves sense organs to sample different space regions over time. Directed movements gain information about critical features in critical locations at critical times. Birds and other animals move and then pause, every few seconds, to gather information [Matthews, 1973].

Perhaps, sampling uses attention mechanisms to decide to which location to move. Perhaps, sampling uses production systems to decide what to sample next. Perhaps, sampling uses template matching to recognize or categorize samples.

processes: statistics

Sense processing uses many neurons and so uses statistics.

processes: synchronization

Resting neurons send signals that adjust synapse properties and axon lengths, to coordinate timing among neuron sets. Synchronized signals lengthen or shorten pathways and quicken or slow synapses, to align time and space metrics.

processes: tensor

Sense-organ-receptor-, neuron-, and motor-neuron-array inputs are scalar or vector fields. Array uses a tensor function to transform field to output new vector field. Output vector field goes to cortical analysis or muscle and gland cells. Muscle cells contract in one direction with varying strength. Muscle-contraction vector fields have net contraction.
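
The field transform can be sketched as one 2x2 tensor applied pointwise to an input vector field (real arrays would be far larger; names are illustrative):

```python
def transform_field(field, tensor):
    # Apply one 2x2 tensor to every vector of the input field,
    # producing the output vector field.
    (a, b), (c, d) = tensor
    return [(a * x + b * y, c * x + d * y) for x, y in field]

# Rotate every field vector by 90 degrees:
rotated = transform_field([(1, 0), (0, 2)], ((0, -1), (1, 0)))
print(rotated)  # [(0, 1), (-2, 0)]
```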

processes: timing

Brain neurons can send time signals at regular millisecond and/or longer intervals to act as clocks. Brain-timing-mechanism oscillation phases or periods can time perceptual events and body movements. At different times and positions, brain clocks run at different speeds for different purposes [Bair and Koch, 1996] [Bair, 1999] [Marsálek et al., 1997] [Nowak and Bullier, 1997] [Schmolesky et al., 1998].

Accumulation processes, such as adding energy units, can record time passage. Decay processes, subtracting energy units from total, can record time passage. Cycles can measure intervals between peaks. Tracking times requires processes that persist over time and whose later states causally depend on earlier states.
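
A cycle-based interval clock can be sketched by measuring spacing between peaks (an invented helper, counting in samples):

```python
def intervals_between_peaks(signal):
    # Find local maxima, then measure the spacing between successive
    # peaks, as a cyclic process could clock elapsed intervals.
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] > signal[i + 1]]
    return [b - a for a, b in zip(peaks, peaks[1:])]

print(intervals_between_peaks([0, 1, 0, 1, 0, 1, 0]))  # [2, 2]
```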

processes: wave modulation

Nerve signals can use wave-frequency modulation and wave-amplitude modulation to represent frequency and intensity.
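
A sketch of the two modulation schemes, with toy parameters and invented function names:

```python
import math

def am_sample(t, carrier_hz, intensity):
    # Amplitude modulation: peak signal strength carries intensity.
    return intensity * math.sin(2 * math.pi * carrier_hz * t)

def fm_sample(t, stimulus_hz):
    # Frequency modulation: oscillation rate carries the stimulus frequency.
    return math.sin(2 * math.pi * stimulus_hz * t)

# At the quarter-period of a 1-Hz carrier, the sample reads off intensity 3:
print(round(am_sample(0.25, 1.0, 3.0), 6))  # 3.0
```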

processes: whole body

Brain, peripheral nervous system, and motor system interconnect, and sense qualities involve brain and body. For example, stroking skin can make people feel sense qualities in other body locations. Music and visual patterns can evoke whole body changes. Moods integrate senses, motor system, and body into overall feelings. Surprised people draw in breath and pull back, because drawing in breath helps one pull back, and body pulls away from what is in front.

speed

Brain processes sounds faster than sights. Brain processes colors faster than shapes. Action pathway is faster than object-recognition pathway. Brain calculates eye movements faster than voluntary movements [Revonsuo, 1999].

speed: information processing rate

Neuron information-processing rate is 40 bits per second. Ear information capacity is 10,000 bits per second. Eye can see 50 images per second, so eye information capacity is 500,000 to 600,000 bits per second.

adaptation to sensation

Previous cell stimulation {adapting stimulus} reduces cell response {adaptation, sensation} {sensory adaptation} {sense adaptation}. Receptors have fewer biochemical reactions {receptor adaptation}, because cell has fewer energy storage molecules and cells make energy molecules slower than they use them. Receptors have lower cell-membrane potential gradients, because ions have flowed through membrane channels and active transport is slower than ion flow through open ion channels. After adapting stimulus ceases, cells increase sensitivity and responses.

biofeedback

Monitoring heart rate electronically {biofeedback}| allows learning voluntary heart-rate control.

characteristic delay

Neurons can receive from two eye, ear, or other-sense neurons and detect time, space, or intensity differences. For two spatial positions, cells detect ear time difference {characteristic delay}, eye spatial difference, or smell, taste, or touch concentration or pressure difference.
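
The characteristic-delay idea can be sketched as a coincidence detector that tests candidate internal delays (all names and numbers are illustrative):

```python
def best_delay(left_times, right_times, candidate_delays):
    # Coincidence detection: the internal delay that best aligns left-ear
    # and right-ear spike times estimates the interaural time difference.
    def coincidences(delay):
        shifted = {round(t + delay, 6) for t in left_times}
        return len(shifted & {round(t, 6) for t in right_times})
    return max(candidate_delays, key=coincidences)

# Sound reaches the right ear 0.0003 s after the left ear:
left = [0.010, 0.020, 0.030]
right = [0.0103, 0.0203, 0.0303]
best = best_delay(left, right, [0.0, 0.0001, 0.0002, 0.0003, 0.0004])
print(best)  # 0.0003
```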

quality

Sense qualities {quality, sense} depend on opponent and categorization processes.

sum

ON-center neuron can add inputs from two neurons. Brightness depends on adding.

opponent processes

ON-center neuron can receive input from two neurons. Input from one neuron subtracts from input from other neuron. Human color vision uses such opponent processes. (Opponent-process opposites have same information as opponent process.)

continuous

Sum and opponent processes make continuous scales. For example, values can range from +1 to -1.

categorization

To divide ranges into intervals and make discrete categories, neurons use thresholds [Damper and Harnad, 2000]. Comparing different opponent processes can filter to make categories.
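
The sum, opponent, and threshold steps above can be combined in a small sketch (the category labels and the 0.2 threshold are arbitrary assumptions):

```python
def opponent(a, b):
    # One input subtracts from the other; normalized to the -1..+1 scale.
    return (a - b) / (a + b) if (a + b) else 0.0

def categorize(value, threshold=0.2):
    # Thresholds cut the continuous opponent scale into discrete categories.
    if value > threshold:
        return "first pole"
    if value < -threshold:
        return "second pole"
    return "neutral"

print(categorize(opponent(8, 2)), categorize(opponent(2, 8)))  # first pole second pole
```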

response internalization

Stimuli tend to cause muscular or glandular responses. By attending to stimuli or responses, animals can learn to inhibit muscular or glandular responses, so signals only affect brain {response internalization}.

sensory onset asynchrony

Brain can sense simultaneous stimuli at different times {sensory onset asynchrony} (SOA).

1-Consciousness-Sense-Physiology-Intensity

intensity

Perceptions have relative intensities {intensity, sense physiology} at locations.

coding

Axon-hillock membrane potential, axon current, average nerve-impulse rate, or neurotransmitter release can represent intensity.

receptors

Mechanical strains, temperature changes, chemical bonding, cell-hair vibration, and photon absorption change receptor membrane-molecule configurations. Configuration rearrangement changes molecule potential energy. Molecule steady-state configurations have lowest potential energy. Receptors transduce molecule potential-energy change into neurotransmitter-packet release at synapses onto neuron dendrites and cell bodies. Neurotransmitters open or close membrane ion channels to change synaptic neuron-membrane electric potential.

neurons

Synaptic membrane potentials spread to neuron axon hillock, where they add. Every millisecond, if hillock-membrane depolarization exceeds threshold, hillock membrane sends nerve impulse down axon.
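
The millisecond summation-and-threshold cycle resembles a leaky integrate-and-fire unit; a minimal sketch, with the leak factor an illustrative assumption:

```python
def axon_hillock(inputs_per_ms, threshold=1.0, leak=0.9):
    # Each millisecond, add the synaptic input to the (leaky) hillock
    # potential; if depolarization exceeds threshold, spike and reset.
    v, spikes = 0.0, []
    for t, i in enumerate(inputs_per_ms):
        v = v * leak + i
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

print(axon_hillock([0.4, 0.4, 0.4, 0.0, 0.6, 0.6]))  # [2, 5]
```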

threshold

Previous activity and neurohormones change neuron thresholds, so neurons detect current relative intensity, not absolute intensity. Perceptual intensities can be transient or sustained.

irritability of sense

Small stimuli, such as gentle touch, can trigger sense response {irritability, sense}.

sensory transducer

Sense receptors {sensory transducer} convert stimulus energy into cell-membrane depolarizations, whose electrical effects pass to neurons. Mechanical forces drive touch, temperature, and hearing translations and vibrations. Electromagnetic energy drives light reception. Chemical bonding drives liquid and gas reception.

sustained response

Machine computation is for stepwise analysis. Brain computation is for synthesis over time. Unlike computer programs, sensations can cause ongoing excitation {sustained response} at same location. Sustained responses are like steady states, not equilibrium states or transient states. Sustained responses use invariants and transformations to reach steady state. Neural assemblies have evolved to develop sustained responses. Sustained responses can serve as symbol grounds.

1-Consciousness-Sense-Physiology-Binding

binding

Objects have shape, texture, color, spatial location, distance, surface orientation, and motion. Brain processes object information in separate brain regions at different times and different processing speeds. Perception neural activities associate {binding} all feature and object information at all times [Domany et al., 1994] [Lisman and Idiart, 1995] [Malsburg, 1981] [Malsburg, 1995] [Malsburg, 1999] [Milner, 1974] [Robertson, 2003] [Treisman, 1996] [Treisman and Schmidt, 1982] [Treisman, 1998] [Tsal, 1989] [Wojciulik and Kanwisher, 1998] [Wolfe and Cave, 1999]. Color, shape, depth, motion, and orientation unify into objects and events [Treisman, 2003]. Same-spatial-location features associate. Simultaneous features associate.

attention

Binding typically requires attention. Perhaps, attention enhances attended-object brain processing. Simultaneous attention to features associates them. With minimum attention, adjacent-object property can bind to half-attended object. With no attention, non-conscious information processing can have perceptual binding [Treisman and Gelade, 1980].

short-term memory

Binding requires short-term memory, which holds all object features simultaneously. Short-term memory processing has EEG gamma waves. Perhaps, reverberating brain activity causes gamma waves. However, short-term memory involves more than synchronous or phasic firing [Tallon-Baudry and Bertrand, 1999].

brain processes

Perhaps, binding uses neuron labels, gene patterns, development patterns, frequently repeated experiences, space location, or time synchronization [Malsburg, 1999]. Learned associations link similar features.

Mammal superior colliculus can integrate same-spatial-location multisensory information, but reptiles use only separate sense processes [O'Regan and Noë, 2001]. Strongly firing cortical and thalamic neurons link temporarily. Medial-temporal-lobe system, especially hippocampus, is for binding. Visual-cortex neuron-assembly synchronous firing can represent object images [Engel and Singer, 2001] [Engel et al., 1991] [Engel et al., 1999] [Gray, 1999] [Gray et al., 1989] [Kreiter and Singer, 1996] [Laurent, 1999] [Laurent et al., 2001] [MacLeod et al., 1998] [Malsburg, 1981] [Malsburg, 1999] [Shadlen and Movshon, 1999] [Singer, 1999] [Singer, 2000] [Stopfer et al., 1997] [Thiele and Stoner, 2003]. Perhaps, master maps or central information exchanges synchronize topographic maps.

binding problem

From one stimulus source, brain processes different feature types in separate brain regions, at different times and processing speeds. How does brain associate object features {binding problem}|? Perhaps, brains use common signals for all processes.

Moving spot triggers different motion detectors. How does brain associate two stimulus sources with one moving object {correspondence problem, binding}? Perhaps, brain follows spot from one location to next unambiguously.

Turning one spot on and off can trigger same motion detector. How does brain associate detector activation at different times with one spot? Perhaps, brain assumes same location is same object.

parsing problem

From many stimulus sources, brain processes different objects' feature types in separate brain regions, at different times and processing speeds. How do brains associate object features to objects {parsing problem}|? Perhaps, brains use common signals for processes.

perceptual field

Perhaps, background field {perceptual field} links perceptual locations, synchronizes times, and associates features to objects and events. During development, space and time correlations among sense features and motor movements build perceptual field. First, neurons note other-neuron states and store feature correlations. Next, neuron assemblies note other-neuron-assembly states and store object and movement correlations. Then, larger neuron assemblies work together to store scenes and stories [Desimone and Duncan, 1995] [Flohr, 2000] [Freeman, 1975] [Harris et al., 2003] [Hebb, 1949] [Palm, 1982] [Palm, 1990] [Rowland and Blumenthal, 1974] [Szentagothai and Arbib, 1975] [Varela et al., 2001].

1-Consciousness-Sense-Blood Pressure

baroreceptor

Reduced blood volume decreases blood pressure and stimulates left-atrium, aorta, and carotid low-pressure stretch receptors {baroreceptor}. Baroreceptors stimulate glossopharyngeal and vagus cranial nerves to hypothalamus, which causes pituitary-nerve terminals to secrete arginine vasopressin to constrict blood vessels to increase blood pressure.

1-Consciousness-Sense-Blood Osmolality

osmoreceptor

Increased plasma concentration and higher osmolality stimulate hypothalamus receptors {osmoreceptor}. Hypothalamus causes pituitary-nerve terminals to secrete arginine vasopressin to constrict blood vessels to decrease kidney water loss.

1-Consciousness-Sense-Blood Gases

carotid body

Internal carotid-artery receptors {carotid body}| {carotid sinus} measure blood oxygen and carbon dioxide concentrations, and send signals to control breathing rate and breath-holding response.

1-Consciousness-Sense-Defecation

defecation sense

Rectum sensors {distension receptor, rectum} measure distension and send signals to control discomfort feeling {defecation, sense}.

1-Consciousness-Sense-Electroreception

electroreception

Using ampullae of Lorenzini or tuberous receptors, electric fish can detect electric-field-change information and electric waves {electroreception}|, and send along lateral-line nerve to brain. Rays, skates, sawfish, electric rays, sturgeons, lungfish, sharks, and ratfish or chimaera combine electroreceptor system with other sense modes.

ampullae of Lorenzini

Sharks, skates, electric rays, rays, lungfish, sawfish, sturgeon, and ratfish {chimaera} have skin pores that open into electrically charged gel tubes, which go to ampullae {ampullae of Lorenzini} (Stefano Lorenzini) [1678]. Ampullae have one sensor layer, with calcium ion inflow and potassium ion outflow, that sends to neurons that send along lateral-line nerve to brain.

tuberous receptor

Elephant-nose fish and other mud dwellers emit electric fields and have electric-field receptors {tuberous receptor} that detect electric-field disturbances caused by other-organism movements.

1-Consciousness-Sense-Hearing

hearing

People have inner-ear cochleas {hearing, sense} {audition, sense}, with sense receptors for mechanical compression-and-rarefaction longitudinal vibrations {sound, hearing}. Sounds have loudness intensity and tone frequency. Hearing also analyzes sound-wave phases to locate sound space directions and distances. Hearing qualities include whisper, speech, music, noise, and scream. Hearing can perceive who is speaking, what their emotional state is, and whether they are lying.

physical properties

Hearable events are mechanical compression-and-rarefaction longitudinal vibrations in air and body tissues, with frequencies 20 Hz to 20,000 Hz. Sound-wave frequencies have intensities, amplitude, and phase.

Two frequencies can have harmonic ratios, with small integers in numerator and denominator.

Sound waves ultimately vibrate cochlea hair cells.

neurons

At low frequencies, sound and neuron activity have same frequency. At high frequencies, nerve-fiber activity distribution represents pitch. Neuron firing rate and number represent sound intensity.

properties: aging

Aging can shift perceived tone sequence.

properties: analytic sense

Tones are independent and do not mix. People can simultaneously hear different frequencies at different intensities.

properties: beats

Sound waves can superpose to create lower-frequency beats.
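
The beat arises from a trigonometric identity; a minimal sketch (function names invented):

```python
import math

def beat_frequency(f1, f2):
    # Superposed tones at f1 and f2 produce amplitude beats at |f1 - f2|.
    return abs(f1 - f2)

def superposed(t, f1, f2):
    # Identity behind the beat: sin(2*pi*f1*t) + sin(2*pi*f2*t)
    #   = 2 * cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t)
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

print(beat_frequency(440.0, 442.0))  # 2.0 beats per second
```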

properties: habituation

Hearing does not habituate quickly.

properties: hearing yourself speak

Bone attenuates higher frequencies, so people hear their own speech as more mellow than others do.

properties: individual differences

Sound has same physical properties for everyone, and hearing processes are similar, so hearing perceptions are similar. All people hear similar tone spectrum, with same tones and tone sequence.

properties: memory

Melodies ending in harmonic cadence are easier to remember than those that end otherwise.

properties: opposites

Tones have no opposites.

properties: precision

People easily distinguish tones and half tones and can distinguish quarter tones after learning. Adjacent-quartertone frequencies differ by several percent.

properties: tempo

People can perceive sound presentation speed: slow, medium, or fast.

properties: time

Hearing seems to be in real time but has half-second processing delay.

properties: tone relations

Tones have unique tone relations. A, B, C, D, E, F, and G tone-frequency ratios must be the same for all octaves. Tones, such as middle A, must be two times the frequency of same tone, such as lower A, in next-lower octave. Without constant in-octave and across-octave frequency ratios, tone A becomes tone B or G in other octaves. For normal hearing, tones relate in only one consistent and complete way. Tones cannot substitute and can never be other tones.
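
The constant in-octave and across-octave ratios correspond to the equal-tempered semitone factor 2^(1/12); a small sketch:

```python
def equal_tempered(base_hz, semitones):
    # A constant semitone ratio of 2**(1/12) keeps in-octave and
    # across-octave ratios the same, so tone names stay fixed: twelve
    # semitones up is exactly double the frequency.
    return base_hz * 2 ** (semitones / 12)

a4 = 440.0
print(equal_tempered(a4, 12))  # 880.0: same tone name, one octave up
```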

properties: tone similarities

Similar tones have similar frequencies or are octaves apart.

properties: waves

Tones directly relate to physical sound-wave frequencies and intensities. Sound waves have emissions, absorptions, vibrations, reflections, and transmissions.

properties: warm and cool

Warm tones have longer and lower attack and decay, longer tones, and more harmonics. Cool tones have shorter and higher attack and decay, shorter tones, and fewer harmonics.

evolution

Hearing evolved from fish lateral line, which has hair cells. Hearing uses one basic receptor type. Reptile hair cells have oscillating potentials from interacting voltage-gated-calcium and calcium-gated-potassium channels, so hair vibrations match sound frequencies. Mammal hair cells vibrate at sound frequencies and have sound-frequency oscillating potentials, but they add force to increase vibration amplitude. Perhaps, the first hearing was for major water vibrations.

development

By 126 days (four months), fetus has first high-level hearing.

Newborns react to loud sounds. If newborns are alert, high sound frequencies cause freezing, but low ones soothe crying and increase motor activity. Rhythmic sounds quiet newborns.

animals

Animals can detect three pitch-change patterns: up, down, and up then down. Bats can emit and hear ultrasound. Some moths can hear ultrasound, to sense bats [Wilson, 1971] [Wilson, 1975] [Wilson, 1998]. Insects can use hearing to locate mates [Wilson, 1971] [Wilson, 1975] [Wilson, 1998].

relations to other senses

Hearing, temperature, and touch involve mechanical energy. Touch can feel vibrations below 20 Hz. Hearing can feel vibrations above 20 Hz. Sound vibrates eardrum and other body surfaces but is not felt as touch.

Vision seems unrelated to hearing, but both detect wave frequency and intensity. Hearing detects longitudinal mechanical waves, and vision detects transverse electric waves. Hearing has ten-octave frequency range, and vision has one-octave frequency range. Hearing has higher energy level than vision. Hearing is analytic, but vision is synthetic. Hearing can have interference from more than one source, and vision can have interference from only one source. Hearing uses phase differences, but vision does not. Hearing is silent from most spatial locations, but vision displays information from all scene locations. Hearing has sound attack and decay, but vision is so fast that it has no temporal properties.

Smell and taste seem unrelated to hearing.

absolute pitch

Some people can name heard tones {absolute pitch}|, and this correlates with learning note names when young.

cocktail party effect

People can listen to one speaker when several speakers are talking {cocktail party effect}. Hearing attends to one message stream by localizing sounds using binaural hearing and sound quality and by inhibiting other message streams.

McGurk effect

Seeing lip movements alters heard speech sounds {McGurk effect}. In humans, sight dominates sound.

otic

Things can be about ears {otic}.

sound dodeconion

Equal-temperament tones can form mathematical groups {sound dodeconion}. The twelve octave tones and half-tones have equally spaced frequency logarithms. A regular 12-vertex dodecagon has vertices separated by 30 degrees and can represent the twelve tones, and rotations by 30-degree multiples result in same geometric figure.
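
The rotation group can be sketched as addition modulo 12 (a minimal check of closure and identity):

```python
def rotate(tone, steps):
    # A 30-degree rotation of the 12-tone circle is addition modulo 12.
    return (tone + steps) % 12

# Composing two rotations gives another rotation, and 12 steps
# (360 degrees) is the identity:
closed = all(rotate(rotate(t, 5), 7) == rotate(t, 12) for t in range(12))
identity = all(rotate(t, 12) == t for t in range(12))
print(closed, identity)  # True True
```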

frequency ratios

Tone pairs have frequency ratios. Octave from middle-C to high-C has tone frequency ratio 2/1. Middle tone, such as middle-G, makes reciprocal tone ratios, such as middle-G/middle-C, 3/2, and high-C/middle-G, 4/3.
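
The reciprocal ratios multiply back to the octave; a small exact-arithmetic check:

```python
from fractions import Fraction

fifth = Fraction(3, 2)    # middle-G / middle-C
fourth = Fraction(4, 3)   # high-C / middle-G
octave = fifth * fourth   # high-C / middle-C
print(octave)  # 2
```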

1-Consciousness-Sense-Hearing-Anatomy

Eustachian tube

A tube {Eustachian tube}| goes from middle ear to pharynx, to equalize pressure inside and outside eardrums. Pharynx valves close tube when talking but open tube when swallowing or yawning or when outside air pressure changes.

1-Consciousness-Sense-Hearing-Anatomy-Brain

belt area

Area {belt area} adjacent to area-A1 primary auditory cortex can receive from area A1 and respond to complex sound features.

parabelt area

Area {parabelt area} laterally adjacent to belt area can receive from belt area and respond to complex sounds and multisensory features.

tonotopic organization

Cortical frequency-sensitive auditory neurons align from low to high frequency {tonotopic organization}.

1-Consciousness-Sense-Hearing-Anatomy-Brain-Neurons

auditory neuron

Hearing neurons {auditory neuron} receive input from 10 to 30 hair-cell receptors.

frequency

Auditory neurons respond to one frequency, within several percent. Frequencies are between 20 Hz and 20,000 Hz.

intensity

Auditory neurons respond to low, medium, or high intensity. Low-spontaneous-firing-rate neurons {low-spontaneous fiber} are for high-intensity sound and have narrow-band frequency tuning. With no stimulation, their firing rate is less than 10/s. Firing rate rises with intensity {rate-intensity function, neuron}.

High-spontaneous-firing-rate neurons {high-spontaneous fiber} are for low-intensity sound and have broad-band frequency tuning. With no stimulation, their firing rate is greater than 30/s. Firing rate rises with intensity to maximum at low intensity.

Mid-spontaneous fibers are for intermediate-intensity sound. With no stimulation, firing rate is greater than 10/s and less than 30/s.

omega interneuron

Free intracellular calcium ions modulate cricket hearing interneurons {omega interneuron} [Huber and Thorson, 1985] [Sobel and Tank, 1994].

1-Consciousness-Sense-Hearing-Anatomy-Ear

ear

Human hearing organs {ear} have outer ear to catch sounds, middle ear to concentrate sounds, and inner ear to analyze sound frequency and intensity.

1-Consciousness-Sense-Hearing-Anatomy-Ear-Outer Ear

outer ear

Pinna and ear canal {outer ear}| gather and focus sound on eardrum.

pinna

Only mammal ears have a cartilage flap {pinna}| {pinnae}, to catch sounds.

auditory canal

A 2.5-centimeter tube {auditory canal}| {ear canal}, from outside pinna to inside tympanic membrane, protects tympanic membrane from objects and loud sounds.

earwax

Auditory canal has wax {earwax}|. Perhaps, earwax keeps ear canal moist and/or sticks to insects.

eardrum

Thin connective-tissue membrane {tympanic membrane} {eardrum}| is across ear-canal inner end. Tympanic membrane is 18 times larger than oval window.

1-Consciousness-Sense-Hearing-Anatomy-Ear-Middle Ear

middle ear

Eardrum connects to air cavity {middle ear}|.

1-Consciousness-Sense-Hearing-Anatomy-Ear-Middle Ear-Bones

ossicles

Middle ear has three small bones {ossicles}|: hammer, anvil, and stirrup. Two middle ear bones evolved from reptile lower jawbones [Ramachandran, 2004].

hammer bone

Eardrum connects to middle-ear bone {hammer bone}| {malleus}, which connects to anvil.

anvil bone

Hammer bone connects to middle-ear bone {anvil bone}| {incus}, which connects to stirrup. Anvil bone is smaller than hammer bone to concentrate sound pressure.

stirrup bone

Anvil bone connects to middle-ear bone {stirrup bone}| {stapes}, which connects to oval window. Stirrup bone is smaller than anvil bone to concentrate sound pressure.

1-Consciousness-Sense-Hearing-Anatomy-Ear-Middle Ear-Muscles

tensor tympani muscle

Muscles {tensor tympani muscle} attached to malleus can tense to dampen loud vibration.

stapedius muscle

Muscles {stapedius muscle} attached to stapes can tense to dampen loud vibration.

1-Consciousness-Sense-Hearing-Anatomy-Cochlea

cochlea

A coiled trumpet-shaped fluid-filled organ {inner ear} {cochlea}|, 4 mm diameter and 35 mm long, is in temporal bone.

hair cell in cochlea

Inner ear, nearer auditory nerve, has one straight row of 3500 inner hair cells {hair cell, cochlea} and has three S-curved rows with 3500 outer hair cells each (10,500 total). Outer-hair-cell cilia poke through tectorial membrane. Hairs have long, medium, and short parts, linked from short-hair tip to medium-hair middle and from medium-hair tip to long-hair middle. Cochlear hair-cell receptors' microscopic fibers and cross-fibers cause resonance between frequencies.

Oval-window movement makes pressure waves, down vestibular canal, which cause middle-canal vertical movement, which slides tectorial-membrane gel horizontally over upright cilia. If pushed one way, hair-cell-membrane potential increases from resting potential. If pushed other way, potential decreases. Inner hair cells send to 10 to 30 auditory neurons.

Outer hair cells can receive brain signals to extend cilia, to stiffen cochlear partition and dampen sound. This can improve signal-to-noise ratio, lower required input intensity to sharpen tuning, or send secondary signals to inner hair cells.

1-Consciousness-Sense-Hearing-Anatomy-Cochlea-Window

oval window in ear

Stapes connects to membrane across opening {oval window, hearing}| at cochlea beginning. Oval window is 18 times smaller than tympanic membrane, to concentrate sound pressure.

round window

At base, tympanic canal has soft tissue {round window} that absorbs high pressure.

1-Consciousness-Sense-Hearing-Anatomy-Cochlea-Canals

tympanic canal

Cochlea outside has a canal {tympanic canal} {scala tympani}. Tympanic membrane is over tympanic-canal end. Round window is over tympanic-canal base.

vestibular canal

Cochlea outside has a canal {vestibular canal} {scala vestibuli}.

helicotrema

Tympanic and vestibular canals join at cochlea point {helicotrema}.

middle canal

Cochlea middle has a canal {middle canal} {scala media}.

cochlear canal

Cochlea inside has a canal {cochlear canal}.

1-Consciousness-Sense-Hearing-Anatomy-Cochlea-Membranes

Reissner membrane

Membrane {Reissner's membrane} separates middle canal and vestibular canal.

basilar membrane

In cochlear canal, a coil {basilar membrane} also separates middle canal and tympanic canal. Close to oval window {base, basilar membrane}, basilar membrane is stiff and narrow. At other end {apex, basilar membrane}, basilar membrane is wider and less stiff.

organ of Corti

Basilar-membrane structures {organ of Corti} have 30,000 hair-cell receptors, with stereocilia and fibers. Organ-of-Corti base detects high frequencies, and organ-of-Corti apex detects low frequencies (place code).

tectorial membrane

Gel membrane {tectorial membrane} attaches to end of, and floats in, middle canal and touches outer hair cells.

cochlear partition

Basilar membrane, tectorial membrane, and organ of Corti together {cochlear partition} detect sounds. Cochlear partition is in middle canal.

1-Consciousness-Sense-Hearing-Physiology

hearing physiology

Sounds affect many hair-cell receptors {hearing, physiology}. Hearing finds intensities at frequencies and frequency bands (sound spectrum).

properties: fundamental missing

If people hear harmonics without the fundamental frequency, they hear the fundamental frequency, probably by temporal coding. Amplifying a chord tone causes hearing both tone and its fundamental tone, though fundamental frequency has zero intensity.
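
A minimal numeric sketch of why the fundamental is recoverable: the harmonics share a common periodicity equal to the fundamental frequency. The specific frequencies below are illustrative:

```python
from functools import reduce
from math import gcd

# Sketch: harmonics of a missing fundamental share a common divisor
# equal to that fundamental, which temporal coding could recover.
def implied_fundamental(harmonics_hz):
    return reduce(gcd, harmonics_hz)

# Harmonics at 600, 800, and 1000 Hz imply a 200 Hz fundamental,
# even though no energy is present at 200 Hz.
print(implied_fundamental([600, 800, 1000]))  # 200
```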

properties: octave

Animals conditioned to respond to a pitch respond almost equally to the octaves above and below it.

properties: phase differences

People cannot hear phase differences, but hearing can use phase differences to locate sounds.

properties: rhythm

Hearing can recognize rhythms and rhythmic groups.

properties: timing

People perceive two sounds less than three milliseconds apart as one sound.

processes: contrast

Hearing uses lateral inhibition to enhance contrast to distinguish sounds.

processes: damping

Later tones constrain basilar membrane. Lower-frequency later tones constrain basilar membrane more. If later tone is more than 1000 Hz lower than earlier tone, to hear first tone requires high loudness. If later tone is more than 300 Hz higher than earlier tone, to hear first tone requires moderate loudness.

processes: filtering

Hearing integrates over many neurons to filter frequencies to find their individual intensities. Hearing performs limited-resolution Fourier analysis on sound frequencies [Friedmann, 1979].
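
A pure-Python sketch of limited-resolution frequency analysis (a discrete Fourier transform, not a cochlear model; the sample rate and tone frequencies are illustrative):

```python
import cmath
import math

# Sketch: a discrete Fourier transform finds the intensity at each
# frequency band, at limited resolution set by the sample count
# (resolution = sample_rate / n).
def dft_magnitudes(signal):
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

sample_rate = 1000  # Hz (illustrative)
n = 200             # 200 samples -> 5 Hz frequency resolution
# A mixture of a 50 Hz tone and a quieter 120 Hz tone.
signal = [math.sin(2 * math.pi * 50 * t / sample_rate)
          + 0.5 * math.sin(2 * math.pi * 120 * t / sample_rate)
          for t in range(n)]

mags = dft_magnitudes(signal)
peaks = [k * sample_rate / n for k, m in enumerate(mags) if m > 0.1]
print(peaks)  # [50.0, 120.0]
```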

processes: important sounds

Important sounds use more neurons and synapses.

processes: memory

Previous sound experiences help distinguish current sound patterns.

brain

Because brain is viscous, sound cannot affect brain tissue.

continuity effect

For short sounds in noisy backgrounds, hearing can complete missing sounds or sharpen noisy sounds {continuity effect} {perceptual restoration effect}. Hearing does not fill in short silences with sounds, but sharpens temporal boundaries. Hearing does not know when it fills in.

echo perception

Sound radiates in all directions from sources and reflects from various surfaces back to ears {echo perception}. Hearing can distinguish echoes from their source sounds. Hearing uses binaural signals to suppress echoes.

head-related transfer function

Body and head, including pinnae and ear canals, transmit and absorb different-frequency, different-elevation, and different-azimuth sounds differently {head-related transfer function}.

1-Consciousness-Sense-Hearing-Physiology-Frequency

pitch in hearing

People can perceive sound frequency {pitch, sound}|.

frequency

People can hear ten frequency octaves, from 20 Hz to 20,000 Hz. Lowest frequencies, 20 Hz to 30 Hz, are also highest vibrations detectable by touch.
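
The ten-octave figure follows directly from the stated limits, since each octave doubles frequency:

```python
import math

# Sketch: the audible range 20 Hz to 20,000 Hz spans a factor of 1000,
# and each octave doubles frequency, so the octave count is log2(1000).
octaves = math.log2(20_000 / 20)
print(round(octaves, 2))  # 9.97, i.e. about ten octaves
```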

Shortest hair-cell hair lengths detect highest frequencies. High-frequency tones vibrate the stiff, narrow basilar-membrane end (base), near oval window. Above 3000 Hz, higher hearing neurons respond to frequency, tone pattern, or intensity range.

Low-frequency tones activate all hair cells, with greater activity near apex, far from oval window, where hair cells have the longest hairs.

sensitivity

People are most sensitive at frequency 1800 Hz.

neuron firing

Maximum neuron firing rate is 800 Hz. When sound frequency exceeds 800 Hz, single-neuron firing rate drops abruptly and can no longer match frequency, so more than one neuron carries sound-frequency information. Firing rate drops abruptly again when sound frequency reaches 1600 Hz.

characteristic frequency

Auditory neurons have frequency {characteristic frequency} (CF) at which they are most sensitive. The characteristic frequency is at the minimum of the threshold tuning curve (the frequency-intensity spectrum), where the least intensity evokes a response. For CF = 500 Hz at 0 dB, 1000 Hz is at 80 dB, and 200 Hz is at 50 dB. For CF = 1100 Hz at 5 dB, 1500 Hz is at 80 dB, and 500 Hz is at 50 dB. For CF = 2000 Hz at 5 dB, 3500 Hz is at 80 dB, and 500 Hz is at 80 dB. For CF = 3000 Hz at 5 dB, 3500 Hz is at 80 dB, 700 Hz to 2000 Hz is at 50 dB, and 500 Hz is at 80 dB. For CF = 8000 Hz at 5 dB, 9000 Hz is at 80 dB, 1000 Hz to 3000 Hz is at 60 dB, and 500 Hz is at 80 dB. For CF = 10000 Hz at 5 dB, 10500 Hz is at 80 dB, 5000 Hz is at 80 dB, 1000 Hz to 2000 Hz is at 60 dB, and 500 Hz is at 80 dB.
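
A minimal sketch of reading a characteristic frequency off a threshold tuning curve, using the first example's data points:

```python
# Sketch: a threshold tuning curve maps frequency to the minimum sound
# level (dB) that makes the neuron respond. The characteristic frequency
# is the frequency with the lowest threshold, i.e. greatest sensitivity.
# Data points follow the text's first example (CF = 500 Hz).
tuning_curve_db = {200: 50, 500: 0, 1000: 80}  # frequency (Hz) -> threshold (dB)

characteristic_frequency = min(tuning_curve_db, key=tuning_curve_db.get)
print(characteristic_frequency)  # 500
```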

critical band

Auditory-nerve channels carry frequency-range {critical band} information.

microphonic electric pulse

For 100-Hz to 6000-Hz sound stimuli, basilar membrane has electric pulses, with same frequency and intensity, caused by potentials from all hair cells, that do not fatigue.

For 20-Hz to 900-Hz sound stimuli, auditory-neuron axons have electric pulses {microphonic electric pulse}, measured in cochlear nerve, with same frequency and intensity [Saul and Davis, 1932]. For 900-Hz to 1800-Hz sound stimuli, auditory-neuron axons have electric pulses with same frequency and one-half intensity. For 1800-Hz to 2700-Hz sound stimuli, auditory-neuron axons have electric pulses with same frequency and one-third intensity. For above-2700-Hz sound stimuli, auditory-neuron axons have electric pulses that do not correlate with frequency and intensity. Perhaps, auditory nerve uses summed potentials of microphonic-electric-pulse envelopes.

phase locking

For below-500-Hz sound stimuli, auditory-neuron-axon signals have same frequency and phase {phase locking, hearing}.

recruitment

Similar frequencies group together to make increasing loudness {recruitment, hearing}.

tone chroma

Tones that share one octave have perceivable sound features {tone chroma}.

tone height

Tone frequency determines low or high pitch {tone height}.

1-Consciousness-Sense-Hearing-Physiology-Frequency-Masking

critical band masking

Noise or tones within two octaves of stimulus frequency can interfere with stimulus perception {critical band masking}. Pure tones mask high frequencies more than low frequencies, because higher frequencies activate smaller basilar-membrane regions. Complex tones mask low frequencies more than high frequencies, because lower frequencies have more energy than higher frequencies [Sobel and Tank, 1994].

preceding tone

Previous-tone {preceding tone} intensity-frequency spectrum affects neuron current-tone response.

two-tone suppression

Different later tone can decrease auditory-neuron firing rate {two-tone suppression}.

1-Consciousness-Sense-Hearing-Physiology-Frequency-Spectrum

audibility curve

At each audible frequency, people have an intensity threshold {audibility curve}.

equal loudness curve

At each audible frequency, specific sound-pressure levels (SPL) cause people to hear equal loudness {equal loudness curve}.

isointensity curve

At constant amplitude, auditory-neuron firing rate depends on frequency {isointensity curve}. For amplitude 20 dB at characteristic frequency, firing rate is 180 per second. For amplitude 20 dB at 500 Hz below or 500 Hz above characteristic frequency, firing rate is 50 per second. For amplitude 20 dB at 1300 Hz to 1400 Hz above characteristic frequency, auditory neurons have spontaneous firing rate.

threshold tuning curve

At each frequency, people have a sound-intensity threshold {threshold tuning curve}.

1-Consciousness-Sense-Hearing-Physiology-Frequency-Timbre

timbre of sound

Same-intensity-and-pitch sounds can have different harmonics {timbre, sound}|. Rapid timbre changes are difficult to perceive.

clarity of tone

Clear tones {clarity, tone} have narrow frequency band. Unclear tones have wide frequency band.

fullness of tone

Full tones {fullness, tone} have many frequency resonances. Shallow tones have few frequency resonances.

shrillness of tone

Shrill tones {shrillness} have higher frequencies. Dull tones have lower frequencies.

stridency and mellowness

Sounds with many high-frequency components seem sharp or strident {stridency}. Tones with mostly low-frequency components seem dull or mellow {mellowness}.

1-Consciousness-Sense-Hearing-Physiology-Intensity

hearing intensity

People can hear sound energies as small as random air-molecule motions {hearing, intensity} {sound intensity}. Because oval window is smaller than eardrum, sound pressure increases in middle ear. Middle-ear bones increase sound intensity by acting as levers that convert distance into force.

distortion

High sound intensities can strain materials past their elastic limit, so intensity and/or frequency change.

frequency

For same stimulus-input energy, low-frequency tones sound louder, and high-frequency tones sound quieter. Smaller hair-cell hairs have faster vibrations and smaller amplitudes.

maximum sound

Maximum sound is when physical ear structures have inelastic strain, which stretches surface tissues past point to which they can completely return.

pain

Maximum sound causes pain.

rate

For amplitude 40 dB to 80 dB at frequency between 2000 Hz below and 50 Hz above characteristic frequency, maximum firing rate is 280 per second {rate saturation, hearing}.

temporal integration

If sound has constant intensity for less than 100 ms, perceived loudness decreases {temporal integration, hearing}. If sound has constant intensity for 100 ms to 300 ms, perceived loudness increases. If sound has constant intensity for longer than 300 ms, perceived loudness is constant.

acoustic reflex

At loud-sound onset, stapedius and tensor tympani muscles contract {acoustic reflex}, to dampen stapes and eardrum vibration.

1-Consciousness-Sense-Hearing-Physiology-Intensity-Rate

attack in sound

Tones can rise quickly or slowly from background noise level to maximum intensity {attack, hearing}| {onset, hearing}. Fast onset sounds aggressive. Slow onset sounds peaceful.

decay in sound

Tones can fall slowly or rapidly from maximum to background noise level {decay, hearing} {offset, hearing}.

1-Consciousness-Sense-Hearing-Physiology-Source Location

source location

Hearing perceives sound-source locations {source location} {sound location}, in space. Most space locations are silent. One space location can have several sound sources. Hearing determines sound location separately and independently of perceiving tones.

azimuth

Hearing can calculate angle to right or left, from straight-ahead to straight-behind, in horizontal plane.

elevation

Hearing can calculate height and angle above horizontal plane. People perceive lower frequencies as slightly lower than actual elevation. People perceive higher frequencies as slightly higher than actual elevation.

frequency and distance

Sound sources farther than 1000 meters have fewer high frequencies, because of air damping.

sound reflection and distance

Sound energy comes directly from sources and reflects from other surfaces. Close sounds have more direct energy than reflected energy. Far sounds have more reflected energy than direct energy. Reflected sounds have fewer high frequencies than direct sounds, because longer distances cause more air damping.

auditory stream segregation

Hearing can separate complex sounds from one source into independent continuous sound streams {auditory stream segregation}.

Sound grouping has same Gestalt laws as visual grouping.

If one ear hears a melody with large ascending and descending tone jumps, and the other ear hears another such melody, people do not hear the left-ear melody and the right-ear melody but hear two new melodies, different from either original, that depend on alternating-tone proximities.

source segregation

People separate sounds from multiple sources into independent continuous sound streams {auditory scene analysis} {source segregation}. Hearing separates sounds from different locations into independent continuous sound streams {spatial separation, hearing}.

binauralism

Having two ears {binauralism} allows calculating time and amplitude differences between left-ear and right-ear sound streams from same space location.

focusing

Hearing can reject unwanted messages {focusing, hearing}, using binauralism to localize sounds.

interaural level difference

The same sound reaches right and left ear at different intensity levels {interaural level difference} (ILD). Level difference can be as small as 1 dB. Intensity difference reflects stimulus distance, approaching or receding sounds, and body sound damping. Slight head movements are enough to eliminate direction ambiguity. Intensity differences due only to sound distance, or to approaching or receding sounds, are useful up to one or two meters. Beyond two meters, differences are too small to detect.

damping

Pinnae and head bones absorb sounds with frequencies higher than 1500 Hz, according to their frequency-related dampening function. Pinnae and head-bone damping differs on right and left, depending on source location, and hearing uses the intensity differences to determine space directions and distances beyond one or two meters.

brain

Lateral superior olive detects intensity-level differences between left-right ears and right-left ears, to make opponent systems. To find distance, two receptor outputs go to two different neurons, which both send to difference-finding neuron. Opposite-ear output goes to trapezoid-body medial nucleus, which lies beside pons lateral superior olive and inhibits same-ear lateral-superior-olive output. Interaural time difference and interaural level difference work together.

interaural time difference

The same sound reaches right and left ear at different times {interaural time difference, hearing} (ITD), because distances from source location to ear differ, and ears have distance between them. Hearing can detect several microseconds of time difference. Slight head movements are enough to eliminate direction ambiguity. Interaural time difference uses frequencies lower than 1500 Hz, because they have no body damping.
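
A sketch of the geometry: under the common spherical-head approximation (Woodworth's formula, with illustrative head radius and sound speed), interaural time difference grows from zero straight ahead to several hundred microseconds at the side:

```python
import math

# Sketch: Woodworth's spherical-head approximation of interaural time
# difference for a distant source. With head radius r and sound speed c,
# a source at azimuth theta reaches the far ear later by roughly
# (r / c) * (theta + sin(theta)). Values of r and c are illustrative.
def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

print(round(itd_seconds(0) * 1e6))   # 0 microseconds, straight ahead
print(round(itd_seconds(90) * 1e6))  # 656 microseconds, directly to one side
```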

Medial superior olive detects time differences between left-right ears and right-left ears, to make opponent systems. To find distances, two receptor outputs go to two different neurons, which both send to difference-finding neuron. Interaural time difference and interaural level difference work together.

cone of confusion

In a cone {cone of confusion} {confusion cone} from head center into space, sounds have same intensity and timing, because ear timing differences (interaural time difference) and intensity differences (interaural level difference) are zero.

1-Consciousness-Sense-Hearing-Physiology-Techniques

audiometer

Electronic instruments {audiometer}| can test hearing.

microphone effect

Amplified auditory-nerve signals played through speakers sound same as stimulus sounds {microphone effect}.

psychoacoustics

People can study subjective sense qualities or psychological changes evoked by sound stimuli {psychoacoustics}.

1-Consciousness-Sense-Hearing-Music

music and hearing

Physical sound attributes directly relate to music attributes {music, hearing} {hearing, music}. Physical-sound frequency relates to music pitch. Music is mostly about frequency combinations. However, above 5000 Hz, musical pitch is lost. Physical-sound intensity relates to music loudness. Physical-sound duration relates to music rhythm. Physical-sound spectral complexity relates to music timbre.

However, frequency affects loudness. Intensity affects pitch. Tone frequency separation affects time-interval perception. Harmonic fluctuations, pitch changes, vibrato, and non-pitched-instrument starting noises {transient, sound} affect timbre. Timbre affects pitch.

emotion: chords

Chords typically convey similar feelings to people. Minor seventh sounds mournful. Major seventh expresses desire. Minor second conveys anguish. Humans experience tension in dissonance and repose in consonance.

emotion: pitch change

Music emotions mostly depend on relative pitch changes (not rhythm, timbre, or harmony).

emotion: key

Music keys have characteristic emotions. Composers typically repeat the same keys and timbres, and composers have typical moods.

song: melody

Note sequences can rise, fall, or stay same. People can recognize melodies from several notes.

song: musical phrase

People perceive music phrase by phrase, because phrases repeat often and because a phrase takes one breath. Children complete half-finished musical phrases using tones, rhythm, and harmony.

brain

No brain region is only for music. Music uses cognitive and language regions.

tone in music

Musical pitch makes musical notes {tone, hearing}.

octave

Tones can be double or half other-tone frequencies. Octaves go from a note to similar higher note, such as middle-C at 256 Hz to high-C at 512 Hz. Hearing covers ten octaves: 20 Hz, 40 Hz, 80 Hz, 160 Hz, 320 Hz, 640 Hz, 1280 Hz, 2560 Hz, 5120 Hz, 10240 Hz, and 20480 Hz.
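
The ten-octave boundary list follows from repeated doubling starting at 20 Hz:

```python
# Sketch: the octave boundaries listed above are successive doublings
# of 20 Hz; eleven boundary values delimit ten octaves.
boundaries = [20 * 2 ** k for k in range(11)]
print(boundaries)
# [20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 10240, 20480]
```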

octave tones

Within one octave are 7 whole tones, 7 + 5 = 12 halftones, and 24 quartertones.

overtones

Tones two, four, eight, and so on, times fundamental frequency are fundamental-frequency overtones.

sharpness or flatness

Fully sharp tone has frequency one halftone higher than tone. Slightly sharp tone has frequency slightly higher than tone. Fully flat tone has frequency one halftone lower than tone. Slightly flat tone has frequency slightly lower than tone.

musical scales

Musical scales have tone-frequency ratios. Using ratios cancels units to make relative values that do not change when units change.

equal temperament scale

Pianos have musical tones separated by equal ratios. Octave has twelve equal-temperament halftones, with ratios from 2^(0/12) to 2^(12/12) of fundamental frequency. Frequency ratio of halftone to next-lower halftone, such as C# to C, is 2^(1/12) = 1.06. Starting at middle-C, ratios of tones to middle-C are 2^(0/12) = 1 for middle-C, 2^(1/12) = 1.06 for C#, 2^(2/12) = 1.12 for D, 2^(3/12) = 1.19 for D#, 2^(4/12) = 1.26 for E, 2^(5/12) = 1.33 for F, 2^(6/12) = 1.41 for F#, 2^(7/12) = 1.50 for G, 2^(8/12) = 1.59 for G#, 2^(9/12) = 1.68 for A, 2^(10/12) = 1.78 for A#, 2^(11/12) = 1.89 for B, and 2^(12/12) = 2 for high-C. See Figure 1. F# is middle tone.
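
The table can be generated directly, computing each ratio as 2^(k/12). Values are rounded to two decimals, so they may differ slightly from tables built from two-decimal exponents:

```python
# Sketch: equal-temperament ratios are successive powers of 2**(1/12),
# from middle-C (ratio 1) up to high-C (ratio 2).
names = ["C", "C#", "D", "D#", "E", "F", "F#",
         "G", "G#", "A", "A#", "B", "high-C"]
ratios = [round(2 ** (k / 12), 2) for k in range(13)]
print(dict(zip(names, ratios)))
# F# sits at 2**(6/12) = sqrt(2), the geometric middle of the octave.
```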

equal-temperament scale: frequencies

Using equal-temperament tuning and taking middle-C as 256 Hz, D has frequency 289 Hz. E has frequency 323 Hz. F has frequency 343 Hz. G has frequency 384 Hz. A has frequency 430 Hz. B has frequency 484 Hz. High-C has frequency 512 Hz. Low-C has frequency 128 Hz. Low-low-C has frequency 64 Hz. Lowest-C has frequency 32 Hz. High-high-C has frequency 1024 Hz. Higher Cs have frequencies 2048 Hz, 4096 Hz, 8192 Hz, and 16,384 Hz. From 32 Hz to 16,384 Hz covers nine octaves.

tone-ratio scale

Early instruments used scales with tones separated by small-integer ratios. Tones had different frequency ratios than other tones.

tone-ratio scale: all possible small-integer ratios

In one octave, the 45 possible frequency ratios with denominator less than 13 are: 3/2; 4/3, 5/3; 5/4, 7/4; 6/5, 7/5, 8/5, 9/5; 7/6, 11/6; 8/7, 9/7, 10/7, 11/7, 12/7, 13/7; 9/8, 11/8, 13/8, 15/8; 10/9, 11/9, 13/9, 14/9, 16/9, 17/9; 11/10, 13/10, 17/10, 19/10; 12/11, 13/11, 14/11, 15/11, 16/11, 17/11, 18/11, 19/11, 20/11, 21/11; 13/12, 17/12, 19/12, and 23/12.

tone-ratio scale: whole tones

In octaves, the seven whole tones are do, re, mi, fa, so, la, and ti, for C, D, E, F, G, A, and B. The seven tones are not evenly spaced by frequency ratio. Frequency ratios are D/C = 9/8, E/C = 5/4, F/C = 4/3, and G/C = 3/2. For example, C = 240 Hz, D = 270 Hz, E = 300 Hz, F = 320 Hz, and G = 360 Hz. C, D, E, F, and G, and G, A, B, C, and D, have same tone progression. Frequency ratios are A/G = 9/8, B/G = 5/4, C/G = 4/3, and D/G = 3/2. For example, G = 400 Hz, A = 450 Hz, B = 500 Hz, C = 533 Hz, and D = 600 Hz.

tone-ratio scale: halftones

Using C as fundamental, the twelve halftones have the following ratios, in increasing order. 1:1 = C. 17:16 = C#. 9:8 = D. 6:5 = D#. 5:4 = E. 4:3 = F. 7:5 = F#. 3:2 = G. 8:5 = G#. 5:3 = A. 7:4 or 16:9 or 9:5 = A#. 11:6 or 15:8 = B. 2:1 = C.

tone-ratio scale: quartertones

The 24 quartertones have the following ratios, in increasing order. 1:1 = 1.000. 33:32 = 1.031. 17:16 = 1.063, or 16:15 = 1.067. 13:12 = 1.083, 11:10 = 1.100, or 10:9 = 1.111. 9:8 = 1.125. 8:7 = 1.143, or 7:6 = 1.167. 6:5 = 1.200. 17:14 = 1.214, or 11:9 = 1.222. 5:4 = 1.250. 9:7 = 1.286. 4:3 = 1.333. 11:8 = 1.375. 7:5 = 1.400. 17:12 = 1.417, 10:7 = 1.429, or 13:9 = 1.444. 3:2 = 1.500. 14:9 = 1.556, 11:7 = 1.571, or 19:12 = 1.583. 8:5 = 1.600. 13:8 = 1.625. 5:3 = 1.667. 12:7 = 1.714, or 7:4 = 1.750. 16:9 = 1.778, or 9:5 = 1.800. 11:6 = 1.833, or 13:7 = 1.857. 15:8 = 1.875. 23:12 = 1.917. 2:1 = 2.000. Ratios within a small percentage of each other are not distinguishable.

tone intervals

Two tones have a number of tones between them. First interval has one tone, such as C. Minor second interval has two tones, such as C and D-flat, and covers one halftone. Major second interval has two tones, such as C and D, and covers two halftones. Minor third interval has three tones, such as C, D, and E-flat, and covers three halftones. Major third interval has three tones, such as C, D, and E, and covers four halftones. Minor fourth interval has four tones, such as C, D, E, and F, and covers five halftones. Major fourth interval has four tones, such as C, D, E, and F#, and covers six halftones. Minor fifth interval has five tones, such as C, D, E, F, and G-flat, and covers six halftones. Major fifth interval has five tones, such as C, D, E, F, and G, and covers seven halftones. Minor sixth interval has six tones, such as C, D, E, F, G, and A-flat, and covers eight halftones. Major sixth interval has six tones, such as C, D, E, F, G, and A, and covers nine halftones. Minor seventh interval has seven tones, such as C, D, E, F, G, A, and B-flat, and covers ten halftones. Major seventh interval has seven tones, such as C, D, E, F, G, A, and B, and covers eleven halftones. Eighth interval is octave, has eight tones, such as C, D, E, F, G, A, B, and high-C, and covers twelve halftones.

tone intervals: pairs

Tones have two related ratios. For example, D and middle-C, major second, have ratio 289/256 = 9/8, and D and high-C, minor seventh, have ratio 9/16, so high-C/D = 16/9. The ratios multiply to two: 9/8 * 16/9 = 2. E and middle-C, major third, have ratio 323/256 = 5/4, and E and high-C, minor sixth, have ratio 5/8, so high-C/E = 8/5. F and middle-C, minor fourth, have ratio 343/256 = 4/3, and F and high-C, major fifth, have ratio 2/3, so high-C/F = 3/2. G and middle-C, major fifth, have ratio 384/256 = 3/2, and G and high-C, minor fourth, have ratio 3/4, so high-C/G = 4/3. A and middle-C, major sixth, have ratio 430/256 = 5/3, and A and high-C, minor third, have ratio 5/6, so high-C/A = 6/5. B and middle-C, major seventh, have ratio 484/256 = 15/8, and B and high-C, minor second, have ratio 15/16, so high-C/B = 16/15.

The ratios always multiply to two. Tone-interval pairs together span one octave, twelve halftones. For example, first interval, with no halftones, and octave, with twelve halftones, fill one octave. Major fifth interval, with seven halftones, such as C to G, and minor fourth interval, with five halftones, such as G to high-C, fill one octave. Major sixth interval, with nine halftones, such as C to A, and minor third interval, with three halftones, such as A to high-C, fill one octave. Major seventh interval, with eleven halftones, such as C to B, and minor second interval, with one halftone, such as B to high-C, fill one octave. Minor fifth interval and major fourth interval fill one octave. Minor sixth interval and major third interval fill one octave. Minor seventh interval and major second interval fill one octave.
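
A quick check, using exact fractions, that the just-intonation interval pairs named above multiply to two:

```python
from fractions import Fraction

# Sketch: each interval ratio and its octave complement multiply to 2,
# using the just-intonation ratios from the text.
pairs = [
    (Fraction(9, 8), Fraction(16, 9)),    # major second / minor seventh
    (Fraction(5, 4), Fraction(8, 5)),     # major third / minor sixth
    (Fraction(4, 3), Fraction(3, 2)),     # minor fourth / major fifth
    (Fraction(5, 3), Fraction(6, 5)),     # major sixth / minor third
    (Fraction(15, 8), Fraction(16, 15)),  # major seventh / minor second
]
assert all(a * b == 2 for a, b in pairs)
print("all interval pairs span one octave")
```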

tone intervals: golden ratio

In music, ratio 2^(8/12) = 1.59 approximates the golden ratio 1.618. It is similar to major sixth to octave, 1.67, octave to major fourth, 1.6, and minor seventh to major second, 1.59. Golden ratio and its inverse can make all music harmonics.

tone harmonics

Tones have harmonics {tone harmonics} that relate to tone-frequency ratios.

tone harmonics: consonance

Tone intervals can sound pleasingly consonant or less pleasingly dissonant. Octave tone intervals 2/1 have strongest harmonics. Octaves are most pleasing, because tones are similar. Tones separated by octaves sound similar.

Major fifth and minor fourth intervals are next most pleasing. Major-fifth 3/2 and minor-fourth 4/3 tone intervals have second strongest harmonics.

Major third 5/4 and minor sixth 8/5 intervals are halfway between consonant and dissonant. Minor third 6/5 and major sixth 5/3 intervals are halfway between consonant and dissonant.

Major fourth 7/6 and minor fifth 12/7 intervals are dissonant. Major second 8/7 and minor seventh 7/4 intervals are dissonant, or major second 9/8 and minor seventh 16/9 intervals are dissonant. Minor second 16/15 and major seventh 15/8 intervals are most dissonant.

Ratios with smallest integers in both numerator and denominator sound most pleasing to people and have consonance. Ratios with larger integers in both numerator and denominator sound less pleasing and have dissonance.

Three tones can also have consonance or dissonance, because three tones make three ratios. For example, C, E, and G have consonance, with ratios E/C = 5/4, G/E = 6/5, and G/C = 3/2.

Tone ratios in octaves higher or lower than middle octave have same consonance or dissonance as corresponding tone ratio in middle octave. For example, high-G and high-C have ratio 6/4 = 3/2, same as middle-G/middle-C.

Tone ratios between octave higher than middle octave and middle octave have similar consonance as corresponding tone ratio in middle octave. For example, high-G and middle-C have ratio 3/1. Dividing by two makes high-G one octave lower, and middle-G/middle-C has ratio 3/2.

tone harmonics: beat frequencies

Frequencies played together cause wave superposition. Wave superposition makes new beat frequencies, as second wave regularly emphasizes first-wave maxima. Therefore, beat frequency is lower than highest-frequency original wave.

If wave has frequency 1 Hz, and second wave has frequency 3 Hz, they add to make 1-Hz wave, 3-Hz wave, and 2-Hz wave, because every other 3-Hz wave receives boost from 1-Hz wave. Rising 1-Hz wave maximum coincides with first rising 3-Hz wave maximum and falling 1-Hz wave maximum coincides with third falling 3-Hz maximum, while first falling 3-Hz wave maximum, middle rising and falling 3-Hz maximum, and third rising 3-Hz maximum cancel.

If one wave has frequency 2 Hz, and second wave has frequency 3 Hz, they add to make 2-Hz wave, 3-Hz wave, and 1-Hz wave, because every third 3-Hz wave receives boost from 2-Hz wave. First rising 2-Hz wave maximum coincides with first rising 3-Hz wave maximum, while first falling 3-Hz wave maximum, middle rising and falling 3-Hz maximum, and third rising and falling 3-Hz maximum cancel.

Beat frequency is difference between wave frequencies: 3 Hz - 2 Hz = 1 Hz in previous example. Beat frequencies are real physical waves.
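
A quick numerical check of the difference rule: the superposition of 2-Hz and 3-Hz sine waves repeats with the 1-Hz beat period (a minimal sketch; the function name is illustrative):

```python
import math

def superpose(t, f1=2.0, f2=3.0):
    """Sum of two sine waves at frequencies f1 and f2 (Hz) at time t (s)."""
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

beat_hz = 3.0 - 2.0    # beat frequency = difference of the two frequencies
period_s = 1.0 / beat_hz
# the combined wave repeats once per beat period
for t in [0.0, 0.1, 0.25, 0.4]:
    assert abs(superpose(t) - superpose(t + period_s)) < 1e-9
```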

Small-integer frequency ratios have lower beat frequencies and reduce beat frequency interference with original frequencies. Two waves with small-integer frequency ratios superpose to have beat frequency that has small-integer ratios with original frequencies. Two waves with large-integer frequency ratios superpose to have beat frequency that has large-integer ratios with original frequencies.

Middle-C has frequency 256 Hz, and middle-G has frequency 384 Hz, with ratio G/C = 3/2. The waves add to make 384 Hz - 256 Hz = 128 Hz beat wave, with ratio C/beat = 2/1 and G/beat = 3/1.

Middle-C has frequency 256 Hz, and middle-E has frequency 320 Hz, with ratio E/C = 5/4. The waves add to make 320 Hz - 256 Hz = 64 Hz beat wave, with ratio C/beat = 4/1 and E/beat = 5/1.

Middle-C has frequency 256 Hz, and middle-D has frequency 288 Hz, with ratio D/C = 9/8. The waves add to make 288 Hz - 256 Hz = 32 Hz beat wave, with ratio C/beat = 8/1 and D/beat = 9/1.

Middle-C has frequency 256 Hz, and middle-A has frequency 426.7 Hz, with ratio A/C = 5/3. The waves add to make 426.7 Hz - 256 Hz = 170.7 Hz beat wave, with ratio C/beat = 3/2 and A/beat = 5/2.

Middle-C has frequency 256 Hz, and middle-B has frequency 480 Hz, with ratio B/C = 15/8. The waves add to make 480 Hz - 256 Hz = 224 Hz beat wave, with ratio C/beat = 8/7 and B/beat = 15/7.
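
These worked examples can be tabulated exactly with Python fractions, taking middle-C as 256 Hz as in the text, with exact just-intonation ratios (so E = 320 Hz, A = 1280/3 ≈ 426.7 Hz, and so on):

```python
from fractions import Fraction

C = Fraction(256)  # middle-C frequency in Hz, as in the text

# just-intonation ratios above middle-C
intervals = {"D": Fraction(9, 8), "E": Fraction(5, 4), "G": Fraction(3, 2),
             "A": Fraction(5, 3), "B": Fraction(15, 8)}

for name, ratio in intervals.items():
    tone = C * ratio
    beat = tone - C  # beat frequency is the difference of the two frequencies
    print(f"{name}: tone={float(tone):.1f} Hz, beat={float(beat):.1f} Hz, "
          f"C/beat={C / beat}, {name}/beat={tone / beat}")
```

Consonant intervals give small-integer beat ratios (G: 2/1 and 3/1; E: 4/1 and 5/1), while the dissonant major seventh B gives the large-integer ratios 8/7 and 15/7.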

Shepard tone

Roger Shepard [1964] gradually raised or lowered all tones of a chord together, keeping the tones separated by octaves. Because pitch class repeats at the octave, the tones seem to rise or fall endlessly yet never actually get higher or lower {Shepard tone} {Shepard scale}, an auditory illusion.

1-Consciousness-Sense-Hearing-Music-Processing

music processing

Brain recognizes music by rhythm or by intonation differences near main note {music, processing}. Brain analyzes auditory signals into tone sequences with pitches, durations, amplitudes, and timbres. First representation {grouping structure} segments sound sequence into motifs, phrases, and sections. Second representation {metrical structure} marks sequence with hierarchical arrangement of time points {beat}.

time-span reduction

Brain can find phrasing symmetries {time-span reduction}, using grouping and metrics.

prolongational reduction

Brain can hierarchically arrange tension and relaxation waves {prolongational reduction}. In Western music, prolongational reduction has slowly increasing tension followed by rapid relaxation.

1-Consciousness-Sense-Hearing-Problems

hearing problems

Brain-injured people can be unable to distinguish voices but can recognize other sound types {hearing, problems}. If they listen to speech recorded using different voices for different syllables, they cannot understand words.

conductive hearing loss

Middle-ear bone or tendon damage decreases sound amplitude {conductive hearing loss}.

otitis media

Infection causes middle-ear inflammation {otitis media}|, typically in children.

otosclerosis

Middle-ear bones can grow abnormally {otosclerosis}|, affecting hearing.

ototoxic

Adverse conditions {ototoxic} can affect balance or hearing more than other systems.

sensorineural hearing loss

Auditory-nerve or cochlea damage decreases loudness {sensorineural hearing loss}.

1-Consciousness-Sense-Hearing-Theories

critical band theory

Perhaps, cochlea has band-pass filters {critical band theory}.

harmonic weighting

Perhaps, brain detects sounds by adding subharmonic frequencies, the fundamental divided by successive integers down to the roughly 20-Hz hearing floor, weighted by ratios {harmonic weighting}. 360 Hz uses 180/2, 120/3, 90/4, 72/5, 60/6, 51.4/7, 45/8, 40/9, 36/10, 32.7/11, 30/12, and so on. 720 Hz uses 360/2, 240/3, 180/4, 144/5, 120/6, 102.8/7, 90/8, 80/9, 72/10, 65.4/11, 60/12, 51.4/14, 45/16, 40/18, 36/20, 32.7/22, 30/24, and so on.
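
Under this theory, the subharmonic series for a tone can be listed by dividing the frequency by successive integers until the result drops below the assumed ~20-Hz floor. A minimal sketch (`subharmonics` is a hypothetical helper name):

```python
def subharmonics(freq_hz, floor_hz=20.0):
    """Frequency divided by 2, 3, 4, ... down to the assumed ~20-Hz floor."""
    values = []
    n = 2
    while freq_hz / n >= floor_hz:
        values.append(round(freq_hz / n, 1))
        n += 1
    return values

print(subharmonics(360))  # starts 180.0, 120.0, 90.0, 72.0, 60.0, 51.4, ...
print(subharmonics(720))  # starts 360.0, 240.0, 180.0, 144.0, 120.0, ...
```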

place coding

At frequencies above 900 Hz, brain detects stimulus frequency by cochlea-hair maximum-amplitude location {place coding} {place theory}, so pitch depends on activity distribution across nerve fibers.

temporal theory

At frequencies below 900 Hz, brain detects stimulus frequency by impulse timing {temporal theory} {temporal code}, because timing tracks frequency. Adjacent auditory neurons fire at same phase {phase locking, code} and frequency, because adjacent hair cells link and so push and pull at same time.

threshold of hearing

Perhaps, sound intensity depends on number of activated basilar-membrane sense cells and special high-threshold cells {threshold, hearing} [Wilson, 1971] [Wilson, 1975] [Wilson, 1998].

volley theory

For frequencies less than 2400 Hz, frequency detection depends on cooperation between neuron groups firing in phase {volley theory} {volley code}. For frequencies less than 800 Hz, auditory-neuron subsets fire every cycle. For frequencies above 800 Hz and less than 1600 Hz, auditory-neuron subsets fire every other cycle. For frequencies above 1600 Hz and less than 2400 Hz, auditory-neuron subsets fire every third cycle {volley principle}. For example, three neurons firing at 600 Hz every third cycle can represent frequency of 1800 Hz.
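
The volley principle can be sketched by staggering neuron groups: cycle k is answered by group k mod n, so each group fires at a fraction of the tone frequency while the pooled spike train covers every cycle. Function and parameter names here are illustrative:

```python
def group_spike_times(tone_hz, n_groups, n_cycles):
    """Assign tone cycle k to neuron group k % n_groups (volley principle)."""
    cycle = 1.0 / tone_hz
    groups = {g: [] for g in range(n_groups)}
    for k in range(n_cycles):
        groups[k % n_groups].append(k * cycle)
    return groups

# three neuron groups, each firing every third cycle of an 1800-Hz tone
groups = group_spike_times(1800, 3, n_cycles=18)
rates = {g: 1 / (ts[1] - ts[0]) for g, ts in groups.items()}  # each ~600 Hz
pooled = sorted(t for ts in groups.values() for t in ts)
# pooled spikes arrive once per tone cycle, tracking the full 1800 Hz
```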

1-Consciousness-Sense-Hunger

hunger sense

Stomach receptors {hunger sense} measure blood-glucose concentration and send to neurons that cause slow hunger contractions.

specific hungers theory

People can feel that they have nutrient deficiency and be hungry for that nutrient {specific hungers theory} (Curt Richter). This theory is true for salt and sugar but not for vitamins.

1-Consciousness-Sense-Kinesthesia

kinesthesia

Sense systems {kinesthesia}| {kinesthesis} {kinesthetic sense} {proprioception} use mechanoreceptors to detect relative body-part positions, angles, forces, torques, and motions, including position changes during and after movements. Kinesthetic system measures body-point displacements from equilibrium and then calculates relative point-pair distances and point-triple angles. Body movements and outside forces move body points sequentially and change body-point relations in regular and repeated ways, so brain builds and remembers motor patterns that allow muscle coordination and balance. Kinesthesia is not conscious, because it is internal.

relations: touch

Touch detects body-surface pressures and temperatures and coordinates with kinesthesia to determine true distances and times.

relations: proprioception

Kinesthesia includes proprioception.

relations: vestibular system

Kinesthetic system includes vestibular system.

relations: cerebellum

Cerebellum coordinates body movements and communicates with kinesthetic system.

problems

Proprioceptive receptor and nerve inflammation impairs body-position sensation. Nerve damage can impair movement consciousness.

somatosensation

Kinesthesia, touch, and vestibular system {somatosensation} provide body information.

1-Consciousness-Sense-Kinesthesia-Receptors

mechanoreceptor

Kinesthesia-and-touch pressure and vibration receptors {mechanoreceptor} detect relative body-part positions, including position changes caused by movements.

annulospiral ending

Muscle mechanoreceptors {annulospiral ending} code muscle length and muscle-length-change rate and send positive feedback to muscle.

flower spray ending

Muscle mechanoreceptors {flower spray ending} code muscle length, slowly excite flexor muscles, and slowly inhibit extensor muscles.

Golgi tendon organ

To react to fast tendon-length change, tendon mechanoreceptors {Golgi tendon organ} measure tension above (high) threshold, detect inverse-stretch-reflex active contraction, and send negative feedback to muscles attached to tendons.

muscle spindle

Muscle mechanoreceptors {muscle spindle} measure tension.

stretch receptor

Muscles, tendons, joints, alimentary canal, and bladder have mechanoreceptors {stretch receptor}, such as flower-spray endings and annulospiral endings, that detect pulling and stretching. Neck muscle-and-joint stretch receptors indicate head direction with respect to body.

1-Consciousness-Sense-Magnetism

magnetism sense

People can typically detect small magnetic gradients {magnetism, sense}, using receptors related to kinesthesia. For example, muscles can react to weak terrestrial-magnetism changes caused by underground water.

1-Consciousness-Sense-Nausea

nausea sense

Stomach receptors {nausea, sense} measure toxins and send to neurons that cause slow stomach contractions.

1-Consciousness-Sense-Pain

pain

People have acute or dull personal discomfort and avoidance feelings {pain, sense}. Some people cannot feel pain.

physical properties

Painful events include tissue strains and releases of molecules that cause chemical reactions. Molecules vary in size, shape, chemical sites, and vibration states. Chemicals vary in concentration. Painful chemicals chemically bind to tissue chemical receptors.

properties

Pains can be throbbing, burning, dull, or acute/sharp. People perceive pain at body locations and also have overall bad feelings. Lower back pains are the most common. Deviating from chemical and function equilibrium is typically not painful. People in pain can still have humor and laughter.

nature

Perhaps, pain includes dislike or avoidance. Pains are not concepts, observations, or judgments. Pain is not intentional but is only about itself.

brain

Pain uses cerebral cortex and is always conscious. Pain perception uses thalamus and is not conscious. Pain differs in species, because neocortices differ. Squid seem to feel pain.

factors

Prior experience influences pain. Pain anticipation increases pain. Body movement can lessen sharp pain and increase chronic pain. Sensitivity to pain is greatest at 9 PM. Pain sensitivity decreases with age.

senses

Temperature and nociceptive receptor systems interact. Tactile and nociceptive receptor systems interact.

evolution

Humans seem to have higher sensitivity to pain than other mammals. Lower animals have even less pain.

development

By 156 days (five months), fetus can have pain. Newborns can have pain. By 4 months, infants have undifferentiated fear reactions to people and animals associated with pain, and so coordinate vision and pain perceptions.

1-Consciousness-Sense-Pain-Anatomy

pain anatomy

The pain system has skin receptors with ion channels, neurons, fibers, fiber tracts, and brain regions. Pain chemical receptors send to dorsal-horn neurons, which send to cortical regions. Cortex and thalamus control pain {pain, anatomy}.

Skin and body receptors (nociceptor) chemically bind endomorphins, prostaglandins, bradykinin peptides, and protein hormones (such as nerve growth factor), molecules released by inflammation and tissue damage [Woolf and Salter, 2000].

fibers

Body organs and mesentery have pain fibers. Internuncial neurons have pain fibers. Pain fibers are A, C, III, IV, and nociceptive fibers. Large myelinated fibers detect moderate stimulation. Small myelinated fibers detect all stimulations. Myelinated fibers detect sharp localized skin pain. Unmyelinated fibers detect dull deep unlocalized body pain. Itching nerves are separate from pain nerves.

brain

Anterior cingulate gyrus, frontal lobe, Lissauer's tract, locus coeruleus, nociceptive system, protopathic pathway, raphé nuclei, reticular formation, sensory reticular formation, sensory thalamus, spinal cord, spinoreticular tract, and spinothalamic tract affect pain. Throbbing pain, burning pain, and sharp pain use different brain regions. Cingulate cortex receives pain information [Chapman and Nakamura, 1999]. Cortex has pain center connected to sense areas. Reticular formation regulates pain.

brain pathways

Feeling pain and reacting to it involve separate pathways. Spinothalamic tract and central gray-matter path carry pain fibers. Internuncial neurons have pain fibers. Body organs and mesentery have pain fibers. Lemniscal tract has no pain fibers but affects pain. Abdominal pain signals travel in subdiaphragmatic vagus nerve to nucleus tractus solitarius, nucleus raphe magnus, and spinal-cord dorsolateral funiculus [Ritter et al., 1992].

1-Consciousness-Sense-Pain-Anatomy-Cells

nerve-associated lymphoid cells

Connective-tissue dendritic cells {nerve-associated lymphoid cells} (NALC) have interleukin-1 binding sites, send to sensory vagus-nerve paraganglia, and are near macrophages, mast cells, and other dendritic cells [Goehler et al., 1999].

paraganglia

Connective-tissue nerve-associated lymphoid cells send to neuron groups {paraganglia}, which send along sensory vagus nerve [Goehler et al., 1999].

1-Consciousness-Sense-Pain-Anatomy-Ion Channels

acid-sensing ion channel

Nociceptors can have proton ion channels {acid-sensing ion channel} (ASIC).

N-type calcium channel

Nociceptors and other neurons have special calcium-ion channels {N-type calcium channel} {calcium channel, N-type}. Ziconotide (Prialt), modified cone-snail venom, inhibits N-type calcium channels to lessen pain. Gabapentin (Neurontin) anticonvulsant binds to N-type calcium channels.

TTX-resistant voltage-gated sodium channel

Outside CNS, nociceptors and other neurons have special sodium-ion channels {TTX-resistant voltage-gated sodium channel}.

voltage-gated sodium channel

Nociceptors and all neurons have sodium-ion channels {voltage-gated sodium channel} {sodium channel, voltage-gated} that open by voltage changes.

1-Consciousness-Sense-Pain-Anatomy-Receptors

bradykinin receptor

Nociceptors can have receptors {bradykinin receptor} for small bradykinin peptides, produced by peripheral inflammation.

calcitonin receptor

Dorsal-horn neurons receive input from nociceptors and have calcitonin peptide receptors {calcitonin receptor} {calcitonin gene-related peptide receptor} (CGRP receptor).

capsaicin receptor

Mouth nociceptors can have pepper-molecule receptors {capsaicin receptor} {VR1 receptor}, which also react to high temperature and protons.

hormone receptor

Peripheral pain nerves can add chemical receptors {hormone receptor}. For example, stress hormones can attach to stress-hormone receptors and cause pain [Woolf and Salter, 2000].

nerve growth factor receptor

Nociceptors can have protein-hormone receptors {nerve growth factor receptor} (NGF receptor).

NMDA receptor for pain

All neurons that receive input from nociceptors have glutamate receptors {NMDA receptor, pain}. Dorsal-horn neurons have glutamate receptors with a specific subunit {NR2B subunit}.

neurotrophin tyrosine kinase receptor

NTRK1 gene makes receptors {neurotrophin tyrosine kinase receptor type 1} (NTRK1 receptor). NTRK1-gene mutations can cause a rare autosomal recessive disease (CIPA), with pain insensitivity, no sweating, self-mutilation, fever, and mental retardation.

nociceptor

Skin receptors {nociceptor} can detect pain, to warn about skin damage.

opioid receptor

Many neurons, including nociceptors, have opium-compound receptors {opioid receptor}.

prostaglandin receptor

Nociceptors can have prostaglandin receptors {prostaglandin receptor}, for prostaglandins released by inflammation and tissue damage.

substance P receptor

Dorsal-horn neurons receive input from nociceptors and have substance-P receptors {neurokinin-1 receptor} (NK-1 receptor) {substance P receptor}. Substance P can carry saporin toxin into dorsal-horn neurons and kill them.

1-Consciousness-Sense-Pain-Physiology

pain physiology

Pain control is at first synapse, near spinal cord {pain, physiology}. Prostaglandins block glycine receptors and so excite dorsal-horn neurons. More and wider brain activation indicates more pain [Chapman and Nakamura, 1999]. Drugs can make pain feel pleasurable. The fundamental pain characteristic is repulsion or withdrawal, and the fundamental pleasure characteristic is attraction or advance [Duncker, 1941].

pain causes

Tissue damage, inflammation, and high-intensity stimuli release chemicals that excite nociceptors. Pain detects and measures relative concentrations of pain-causing chemicals released by body inelastic strains or tissue damage. People can distinguish strength and type of pain.

High pressure, high temperature, harsh sound, intense light, and sharp smells and tastes cause neuron changes {pain, causes}. Inflammation or acute-pain aftereffects can cause pain.

Pain involves too much small-nerve-fiber activity, uninhibited by large neurons. Blows to body release histamines, bradykinin, and prostaglandins, which excite neurons. Gut distension causes pain, but gut squeezing, cutting, and burning do not. Infection can amplify pain. Tissue damage can amplify pain. Damaged tissue activates immune cells, which release molecules that excite nerves and glia. Arginine vasopressin, encephalin, endorphin, and substance P can affect pain.

Randomly placed brainstem electrodes produce pain 5% of time. Direct cerebral-cortex stimulation can cause other sense qualities but never causes pain. Cortex stimulation does not decrease pain.

pain effects

Pain causes people to push painful object farther away or to move farther from pain source {pain, effects}. Sharp pain causes withdrawal reflexes, writhing, jumping away, and wincing as people try to alleviate pain. Writhing escapes stimulus or pushes away stimulus. Painful skin stimuli cause flexion reflexes. Muscle contractions inhibit blood flow and squeeze out poisons. To avoid reinjury and allow body to rebuild rather than be used, dull and chronic pain reduces overall activity. People can have no reaction to pain.

Pain causes attention to object. People cannot ignore pain caused by high-intensity stimulus. Pain makes other goals seem unimportant. To allow recovery from tissue damage, pain causes attention to damage, such as wounds. To avoid future pain causes, pain triggers learning about possibly painful situations. People also learn pain responses.

Pain can cause anxiety, increase breathing rate, increase blood pressure, dilate pupils, increase sweat, and make time appear to flow more slowly.

gate control theory

Spinal-cord dorsal-horn substantia-gelatinosa neural circuits receive signals from brain and inhibit nerve-impulse flow from spinal cord to brain {gate control theory of pain}|. Large-fiber inputs, such as from gentle rubbing {counterstimulation}, stimulate substantia-gelatinosa neurons to inhibit signal flow, closing the gate. Small-fiber inputs, such as from pinching {diffuse noxious inhibitory control} {counterirritation}, inhibit substantia-gelatinosa neurons to release signal flow, opening the gate. Direct brain signals also inhibit flow and close the gate [Melzack, 1973] [Melzack, 1996].
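
As a toy sketch of the gate (function name, weights, and thresholding are illustrative assumptions, not a model from the literature): small-fiber input opens the gate, while large-fiber input and descending brain signals close it.

```python
def gate_output(small_fiber, large_fiber, brain_inhibition=0.0):
    """Toy gate-control sketch: pain signal passed to brain is small-fiber
    drive minus large-fiber and descending inhibition, floored at zero."""
    return max(0.0, small_fiber - large_fiber - brain_inhibition)

# gentle rubbing (large-fiber input) closes the gate and reduces the signal
assert gate_output(1.0, 0.6) < gate_output(1.0, 0.0)
# direct brain signals also close the gate
assert gate_output(1.0, 0.0, 0.8) < gate_output(1.0, 0.0)
```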

glial activation

Pain-activated microglia (immune cells) release pro-inflammatory cytokines, which activate glia {glial activation} and cause pain, but other glia types do not release cytokines in response to pain. Spinal glial activation affects nociceptive neurons at NMDA receptors.

Blocking glial activation with drugs blocks pathological pain. Blocking neuron pro-inflammatory-cytokine receptors with drugs does not affect normal pain responses but does decrease exaggerated pain responses. Intrathecal drugs {fluorocitrate} can inhibit glial metabolism. Acids {kynurenic acid} {2-amino-5-phosphonovaleric acid} (AP-5) can prevent such inhibition. Amines {6,7-dinitroquinoxaline-2,3-dione} (DNQX) {picrotoxin} and strychnine do not prevent such inhibition [Ma and Zhao, 2002] [Watkins et al., 2001].

1-Consciousness-Sense-Pain-Physiology-Pain Relief

pain relief

Chemicals, biofeedback, distraction, and imagery can lessen pain {pain relief}. Hypnosis can relieve pain.

Endorphin and dynorphin inhibit pain pathways. Flight-or-fight responses use endorphin neurotransmitters to suppress pain. Aspirin and nitrous oxide alleviate pain. Opiate drugs, such as morphine, are similar to endorphin and suppress pain. Ziconotide (Prialt), modified cone-snail venom, inhibits N-type calcium channels to lessen pain.

analgesia

Adaptation, distraction, or drugs can decrease pain {analgesia, pain}|.

hyoscine sleep

Drugs can make pain be felt but not remembered {hyoscine sleep}|. Twilight-sleep drug, from thorn apples, binds to acetylcholine receptors and affects long-term memory recall.

acupuncture

Inserting thin needles at skin locations {acupuncture}| can reduce pain. Acupuncture-needle stimulation activates brain area that makes endorphin and dynorphin to inhibit pain pathways. Traditional acupuncture-needle insertion sites correspond to myofascial-nerve locations. Traditionally, acupuncture makes energy {qi} travel along body meridians.

ice massage

Massaging with ice {ice massage} reduces pain.

transcutaneous electrical nerve stimulation

Stimulating brain area that makes endorphin and dynorphin {transcutaneous electrical nerve stimulation} (TENS) inhibits pain pathways.

1-Consciousness-Sense-Pain-Kinds

allodynia

In undamaged areas, receptors and neurons can have sensitization, so people feel pain from stimuli that are not typically painful {allodynia}.

dysmenorrhoea

Menstruation can cause uterine pain {dysmenorrhoea}, which intra-uterine devices can worsen.

extra-territorial pain

People can perceive pain {extra-territorial pain} in undamaged tissue near damaged tissue.

false pain

Without tissue damage or infection, peripheral pain nerves can increase spontaneous activity and cause pain {false pain}.

hyperaesthesia pain

People can be sensitive to touch and have low pain threshold {hyperaesthesia, pain}.

hyperalgesia

Receptor or nerve sensitization can cause greater than normal reaction to pain stimuli {hyperalgesia}.

lightning pain

Tabes dorsalis has shooting pains {lightning pain} [Charcot, 1890].

mirror pain

People can perceive pain {mirror pain}| in undamaged tissue on body side opposite damaged tissue.

neuropathic pain

Chronic pain {neuropathic pain} can persist after nervous-system injury. Injury can change skin receptors {peripheral neuropathic pain}. Injury can change spinal-cord dorsal horn {central neuropathic pain}.

phantom limb

People that lose limbs often feel like they still have limb or feel sense qualities from former region {phantom limb}| [Melzack, 1992] [Ramachandran and Blakeslee, 1998] [Weir-Mitchell, 1872].

1-Consciousness-Sense-Pleasure

pleasure sense

Pleasure feels different in different senses {pleasure, sense}.

causes

Pleasure can result from satisfying desire, overcoming body deficiency or excess, realizing potential, euthumia, eudaimonia, or having pain-free and tranquil state. Pleasure results from intermediate intensity, energy, or concentration on intermediate-size area. Pleasurable stimuli have simple pattern, low contrast, slow variation, slow movement, and relaxed time flow. Light touch, slight warmth or cooling, soothing sound, soft light, and mild smells and tastes can cause pleasure. Absolute intensity, simple or complex pattern, high or low contrast, fast or slow variation, fast or slow movement, physical location, and time of day do not associate with pleasure.

behavior

The fundamental pleasure characteristic is attraction or advance toward stimulus [Duncker, 1941]. Pleasure causes attention to object. Pleasure causes motivation to draw object nearer to increase pleasure. People can ignore pleasure.

effects

Pleasure increases blood flow. Pleasure causes time to appear to flow more rapidly. Pleasure causes liking, preferring, or desiring. Pleasure is rewarding.

To give same pleasure amount later, stimulus intensity must increase.

brain

Medial forebrain bundle runs from forebrain to brainstem and sends to ventral-tegmentum dopamine neurons, which affect forebrain. Pleasure differs in different species, because neocortex differs.

Randomly placed brainstem electrodes produce pleasure 35% of time. Randomly placed brainstem electrodes produce neither pleasure nor pain 60% of time.

nature

Perhaps, pleasure is a cognition that comes after another sensation. Pleasure is not intentional but is only about itself. Desire for pleasure is hard to understand, because desire is for objects, but pleasure is inside oneself.

eudaimonia

Pleasure can come from being virtuous {eudaimonia}.

euthumia

Pleasure can come from being cheerful {euthumia}.

1-Consciousness-Sense-Smell

smell

Chemicals dissolved in air chemically bind to upper-nose odor receptors {smell, sense}| {olfaction}. Smell qualities depend on molecule electrical and spatial-configuration properties, such as shape, acidity, and polarity. Smell is a synthetic sense, with some analysis. People can distinguish 20 to 30 primary odors and more than 10,000 different odors.

physical properties

Smellable molecules include many types of typically hydrophobic volatile substances with molecular weights between 30 and 350. Air-borne molecules vary in size, shape, chemical sites, and vibration states. Air-borne chemicals vary in concentration. Smellable chemicals chemically bind to upper-nasal-passage chemical receptors.

primary-odor receptors

Some people cannot smell camphorous, fishy, malty, minty, musky, spermous, sweaty, or urinous odors (primary odor). Camphorous molecules have multiple benzene rings. Fishy molecules are three-single-bond monoamines. Malty molecules are aldehydes. Minty molecules have a benzene ring and an oxygen-containing side group. Musky molecules have multiple rings. Spermous molecules are aromatic amines. Sweaty molecules are carboxylic acids. Urinous molecules are steroid ketones. Fruity molecules are organic alcohols.

types

Odors can be acidic, acrid or vinegary, alliaceous or garlicy, ambrosial or musky, aromatic, burnt or smoky, camphorous or resinous, ether-like, ethereal or peary, floral or flowery, foul or sulfurous, fragrant, fruity, goaty or hircine or caprylic, minty, nauseating, peppermint-like, pungent or spicy, putrid, spearmint-like, sweaty, and sweet.

qualities

Smells can be sweet, acidic, or sweaty. For example, musk, ether, ester, flowery, fruity, and musky are dull, sweet, and smooth. Vinegar and acid are sharp, sour, and harsh.

Smells can be cool, like menthol, or hot, like heavy perfume. For example, menthol is cool, and perfume is hot.

Aromatic, camphorous, ether, minty, musky, and sweet are similar. Acidic and vinegary are similar. Acidic and fruity are similar. Goaty, nauseating, putrid, and sulfurous are similar. Smoky/burnt and spicy/pungent are similar. Camphor, resin, aromatic, musk, mint, pear, flower, fragrant, pungent, fruit, and sweets are similar. Putrid or nauseating, foul or sulfur, vinegar or acrid, smoke, garlic, and goat are similar. Vegetable smells are similar. Ethers are vegetable. Animal smells are similar. For example, caprylic acid and carboxylic acids are animal. Halogens are mineral.

Acidic and sweet smells are opposites. Sweaty and sweet smells are opposites.

Smell always refers to object that makes smell, not to accidental or abstract property nor to concept about smell. In contrast, color always refers to object property.

Odors have same physical properties, and smell physiological processes are similar, so odor perceptions are similar, with same odors and odor relations, for people with undamaged smell systems. Smells relate in only one consistent and complete way. Smells do not have symmetric smell relations, so smells have unique relations. Smells cannot substitute or switch.

People can smell specific odors and not others. Some people smell sweet odors as putrid or have other smell exchanges. People can always smell something.

mixing

Smells blend in concordances and discordances, like music harmonics. Pungent and sweet can mix. Pungent and sweaty can mix. Perhaps, smells can cancel other smells, not just mask them.

timing

Brain detects aldehyde smells first {top note, smell}. Brain detects floral smells second {middle note, smell}. Brain detects lingering smells, such as musk, civet, ambergris, vanilla, cedar, sandalwood, and vetiver, later {base note, smell}.

properties

Smell habituates quickly. Smell is in real time, with a half-second delay. Smell short-term memory is poor. Smell strength decreases with age. Fats absorb pungent food odors.

Butyrate and squalene odor patterns identify species members. In mammals, small pheromone amounts establish territories [Pantages and Dulac, 2000]. Humans have strong odors from hair-follicle apocrine glands. Perhaps, human odor warns predators away. Babies have small glands. Stress seems to cause odor. Menses smells like onions.

source location

Olfactory bulb preserves odor-receptor spatial relations. Smell cortex can detect smell location in space. Smell can detect several sources from one location. Smells from different sources can interfere.

diseases

Diabetes smells like sugar or acetone. Measles smells like feathers. Nephritis smells like ammonia. Plague smells like apples. Typhus smells like mice. Yellow fever smells like meat.

emotions

Smells can make people feel disgusted, intoxicated, sickened, delighted, revolted, excited, hypnotized, and pleasured. Smells can be surprising, because smells have many combinations.

evolution

Perhaps, the first smells were mating, food, or poison signs.

development

In first few days, newborns can distinguish people by odor.

relations to other senses

Taste and retronasal-area smell can combine to make flavor. Taste requires higher chemical concentrations than smell. Smell uses air as solvent, and taste uses water. Smell does not use molecule polarization, but taste does. Smell does not use molecule acidity, but taste does. Smells interfere with each other, but tastes are separate and independent. Taste does not use molecule vibrations, but perhaps smell uses vibrations. Taste and smell are both often silent. Taste and smell have early, middle, and late sensations. Smells and tastes have spatial source.

Smell is at body surface and so has touch. Touch can feel air near smell-receptor cells and react to noxious smells. Touch locates smell-receptor cells in upper nose. Trigeminal nerve carries signals from nose warmth-coolness, touch, and pain receptors.

Smell uses tactile three-dimensional space to locate smells in space.

Odor is painful at high concentrations.

1-Consciousness-Sense-Smell-Anatomy

smell anatomy

People have upper-nostril skin areas, with molecule shape, size, and vibration receptors {smell, anatomy}. Smell uses more than 30 odor-receptor types, each with variations, making a thousand combinations. Smell-neuron axons go to older mammal-forebrain rhinencephalon, near frontal lobe, not to thalamus as other sense axons do. Invertebrates have skin odor receptors.

Odor receptors send to olfactory-bulb glomeruli, which send to cortical regions.

cribriform plate

Behind eyebrow, where nose meets skull, is bone {cribriform plate} with many nerve-sized holes, through which olfactory-neuron axons go to olfactory bulb.

1-Consciousness-Sense-Smell-Anatomy-Cells

basal cell

Olfactory epithelium has cells {basal cell} that can become olfactory neurons.

mitral cell

Olfactory-receptor cells send to neurons {mitral cell}, whose top dendrites go to horizontal cells to receive lateral inhibition and whose bottom branches are recurrent collateral axons to spread lateral inhibition. Mitral-cell axons go to anterior-olfactory-nucleus and prepyriform-cortex superficial and deep pyramidal neurons.

olfactory receptor

Olfactory-receptor cilia have molecules that bind odorants. Smell system has a thousand different protein receptors {olfactory receptor}, with seven to eleven major odor-receptor types, which each have a dozen minor types. People have ten million odor-receptor cells in each nostril. Dogs have 200 million. Odor-receptor cells die every month, and then new ones grow.

Of 1000 olfactory-receptor genes, 65% are not functional in humans. In Old World monkeys, 30% are not functional. In New World monkeys, 18% are not functional. In dogs, 20% are not functional. Odor-receptor chemical sites are for alcohols, aldehydes, amines, aryls, carboxylic acids, esters, ethers, halogens, ketones, cysteines, thiols, sulfides, or terpenes. Sites can be for small, medium, or large molecules [Firestein, 2001] [Laurent et al., 2001]:

Alcohols that are small, such as methanol and ethanol, smell alcoholy, biting, and hanging.

Alcohols that are medium-chain, such as butanol and octanol, smell sweet and fruity.

Alcohols that are cyclic, such as menthol, smell cool and minty.

Alcohols that are monoterpenoids, such as geraniol and linalool, smell flowery and fresh.

Alcohols that are monophenols, such as phenol and guaiacol, smell burnt and smoky.

Alcohols that are polyphenols, such as cresol, smell tarry and oily.

Aldehydes that are small, such as diacetyl aldehyde, smell buttery.

Aldehydes that are short-chain, such as isovaleraldehyde, smell malty.

Aldehydes that are alkene aldehydes, such as hexenal, smell grassy and herby.

Amines that are alkyl and aryl monoamines, such as trimethylamine and phenethylamine, smell fishy.

Amines that are alkyl multi-amines, such as putrescine, smell spermous.

Amines that are heterocyclic amines, such as pyrroline, smell spermous.

Amines that are heterocyclic aromatic, such as alkyl pyrazines, smell nutty, earthy, and green peppery.

Amines that are heterocyclic aromatic, such as 2-acetyl-tetrahydro-pyridine, smell roasted, fermented, and popcorny.

Aryls that are benzene alkyls, such as benzene, toluene, and xylenes, smell aromatic.

Aryls that are monophenols, such as phenol and guaiacol, smell burnt and smoky.

Aryls that are polyphenols, such as cresol, smell tarry and oily.

Aryls that are polycyclic aromatic hydrocarbons, such as anthracene and pyrene, smell burnt and smoky.

Aryls that are polycyclic in small concave sites, such as camphor, smell camphorous and resinous.

Aryls that are aryl monoamines, such as phenethylamine, smell fishy.

Carboxylic acids that are small, such as acetic acid, smell acrid, vinegary and pungent.

Carboxylic acids that are medium-short polar chains, such as butyric acid (butanoic acid), smell putrid, sweaty and rancid.

Carboxylic acids that are medium-length polar chains, such as caprylic acid (octanoic acid), smell goaty and hircine.

Carboxylic acids that are carboxylic-acid thiols, such as dithiolane-4-carboxylic acid, smell asparagusy and bitter.

Esters that are non-polar chains, such as methyl butyrate, smell sweet and fruity.

Ethers that are linear in concave and trough-shaped sites, such as ethyl methyl ether, smell fragrant, ethereal, floral and flowery.

Ethers that are cyclic, such as dioxacyclopentane, smell earthy, moldy and potatoey.

Halogens, such as fluorine, chlorine, and bromine, smell pharmaceutical, medicinal, pungent, and unpleasant.

Ketones that are heterocyclic, such as furanone and lactones, smell savory and spicy.

Ketones that are alkane ring ketones, such as steroid ketones, smell urinous.

Ketones that are macrocyclic in large concave sites, such as muscone (methylcyclopentadecanone), smell musky and ambrosial.

Ketones that are alkenes with one ring, such as ionones, damascones, and damascenones, smell tobaccoy.

Ketones that are cyclic alkene ketones in V-shaped sites, such as terpenoids and R-(-)-carvone (2-methyl-5-(1-methylethenyl)-2-cyclohexenone), smell minty, spearminty, and pepperminty.

Sulfur compounds that are cysteines, such as gamma-glutamylcysteines and cysteine sulfoxides, smell alliaceous and garlicy.

Sulfur compounds that are carboxylic-acid thiols, such as dithiolane-4-carboxylic acid, smell asparagusy and bitter.

Sulfur compounds that are small thiols, such as methyl mercaptan (methanethiol), smell foul, sulfurous, and rotten.

Sulfur compounds that are sulfides, such as methyl sulfides, smell cabbage-like and rotten at high concentrations.

Terpenes that are cyclic alkene ketones in V-shaped sites, such as terpenoids and R-(-)-carvone (2-methyl-5-(1-methylethenyl)-2-cyclohexenone), smell minty and pepperminty.

Terpenes that are monoterpenoid alcohols, such as geraniol and linalool, smell flowery and fresh.

Terpenes that are isoprenes and monoterpenes, such as isoterpene, smell rubbery.

Terpenes that are sesquiterpenes and triterpenes, such as humulene, smell woody.

Some sites are for both alcohol and terpene, alcohol and aryl, amine and aryl, carboxylic acid and thiol, or ketone and terpene.

Some sites are for carbon chains and rings: alkyls, alkenes, single rings, multiple rings, single heterocyclic rings, multiple heterocyclic rings, single aromatic rings, and multiple aromatic rings.

1-Consciousness-Sense-Smell-Anatomy-Neuron Assemblies

amygdala-hippocampal complex

A limbic-system region {amygdala-hippocampal complex} measures smell associations and emotions.

glomerulus of smell

Olfactory nerves, mitral cells, and tufted cells converge on olfactory-bulb spheres {glomerulus, smell} {glomeruli, smell}. Olfactory receptors send to one lateral glomerulus and one medial glomerulus. Glomeruli receive from one or more olfactory receptors and detect one odor or odor combination.

Grueneberg ganglion

At nose tips, mammals have a ganglion {Grueneberg ganglion} that detects alarm pheromones (Hans Grueneberg) [1973].

Jacobson organ

Mammal nasal-cavity bases have smell neurons {vomeronasal system} {Jacobson's organ} {Jacobson organ} for sex-signal and other pheromones. Axons go to accessory olfactory bulb and then to amygdala [Holy et al., 2000] [Johnston, 1998] [Keverne, 1999] [Stowers et al., 2002] [Watson, 2001].

olfactory bulb

Odor receptors send output directly, left to left and right to right, to 2-mm-diameter brain region {olfactory bulb}| above and behind nose. Olfactory receptors send axons to mitral cells. Mitral-cell axons go to anterior-olfactory-nucleus and prepyriform-cortex superficial and deep pyramidal neurons. Pyramidal neurons send recurrent collateral axons to superficial pyramidal neurons and stellate cells. Pyramidal neurons have post-synaptic apical dendrites that receive from other pyramidal neurons. Tufted cells are local. Olfactory nerves, mitral cells, and tufted cells meet in olfactory-bulb glomeruli. Olfactory bulb preserves odor-receptor spatial relations. Olfactory bulb has fewer neurons than number of odor receptors.

olfactory cortex

Olfactory-bulb signals go to pyriform cortex, amygdala-hippocampal complex, and entorhinal complex {olfactory cortex}.

1-Consciousness-Sense-Smell-Anatomy-Nose

olfactory cleft

Nasal passages guide air onto olfactory epithelium {olfactory cleft} at nose back.

olfactory epithelium

Upper-nose olfactory-cleft mucus cells {olfactory epithelium} {orthonasal olfactory system} are olfactory receptors, basal cells, and supporting cells. In mammals, odor receptors are at nose air-passage top or back. In humans, smell regions total four square centimeters. Olfactory epithelium is mostly small sensory cells {olfactory sensory neuron} (OSN), with cilia that have odor receptors.

Olfactory region is light yellow in humans and dark yellow or brown in animals. Albinos have white regions and typically have poor smell ability.

retronasal olfactory system

Chewing and swallowing can send odorant up rear nasal tract {retronasal olfactory system}. People think sense qualities are in mouth. Orthonasal olfactory system is about outside environment, while retronasal olfactory system is about nutrients and poisons.

turbinate

Inner-nose ridges {turbinate}| channel inhaled air to olfactory epithelium.

1-Consciousness-Sense-Smell-Odor

odor

Objects can have smell {odor, smell}| to humans. Odorants mix to make odor.

odorant

Molecules can have smell {odorant} to humans. Odorants must be volatile. Airborne-molecule chemical-bond configurations (shapes) and vibration and rotation frequencies and intensities cause smell. Odorant molecules have molecular weight greater than 35 and less than 350, not too small nor too large for olfactory receptors. Odorants are typically hydrophobic.

Pungent odorants are compact non-polar aryl compounds. Sweet odorants are non-polar chain esters. Sweaty odorants are polar chain organic acids. Right-handed and left-handed chiral molecules, like spearmint and caraway, smell different.
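The odorant constraints above (volatile, typically hydrophobic, molecular weight greater than 35 and less than 350) can be sketched as a screening predicate. The function and the example molecules are illustrative, not part of the source.

```python
def may_be_odorant(molecular_weight, volatile, hydrophobic):
    """Rough odorant screen using the constraints above: odorants are
    volatile, typically hydrophobic, and have molecular weight greater
    than 35 and less than 350 (sized for olfactory receptors)."""
    return volatile and hydrophobic and 35 < molecular_weight < 350

# Limonene (MW ~136, volatile, hydrophobic) passes; sucrose (MW ~342)
# fails because it is not volatile, despite an in-range weight.
print(may_be_odorant(136, volatile=True, hydrophobic=True))    # True
print(may_be_odorant(342, volatile=False, hydrophobic=False))  # False
```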

primary

People can distinguish about 30 primary odorants:

alliaceous and garlicy: cysteine sulfur compounds

aromatic: benzene alkyls

asparagusy, bitter: carboxylic-acid thiols

biting, hanging, alcoholy: small alcohols

burnt, smoky: monophenols and polycyclic aromatic hydrocarbons

buttery: small aldehydes

camphorous, resinous: polycyclic aryls

cool and minty: cyclic alcohols

earthy, moldy, potatoey: cyclic ethers

fishy: alkyl and aryl monoamines

flowery, fresh: monoterpenoid alcohols

foul, rotten, sulfurous: small thiol sulfur compounds

fragrant, floral, flowery, ethereal: linear ethers

fruity, sweet: medium-chain alcohols and non-polar chain esters

goaty, hircine: medium-length polar chain carboxylic acids

grassy, herby: alkene aldehydes

malty: short-chain aldehydes

minty, spearminty, pepperminty: cyclic alkene ketones

musky, ambrosial: macrocyclic ketones

nutty, earthy, green peppery: heterocyclic aromatic amines

pharmaceutical, medicinal, pungent, unpleasant: halogens

pungent, acrid, vinegary: small carboxylic acids

putrid, sweaty, rancid: medium-short polar chain carboxylic acids

roasted, fermented, popcorny: heterocyclic aromatic amines

rubber: monoterpenes (isoprenes)

cabbage-like, rotten: methyl sulfides

savory, spicy: heterocyclic ketones

spermous: alkyl multi-amines and heterocyclic amines

tarry, oily: polyphenols

tobacco: alkenes-with-one-ring ketones

urinous: steroid ketones

woody: triterpenes (sesquiterpenes)

Odorants mix to make odor, and people can distinguish 10,000 different odors.
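Combinatorial coding explains the gap between roughly 30 primary odorants and 10,000 distinguishable odors: represent an odor as the set of odor-receptor classes it activates, and even sparse activation patterns over 30 classes outnumber 10,000. The counts below are arithmetic on the section's own figures; the set representation is an illustrative sketch.

```python
from math import comb

# Distinct activation patterns using 1 to 4 of ~30 receptor classes:
receptor_classes = 30
patterns = sum(comb(receptor_classes, k) for k in range(1, 5))
print(patterns)  # 30 + 435 + 4060 + 27405 = 31930, well over 10,000

# Mixing odorants unions their activation sets into a new pattern:
fruity = {"medium-chain alcohol", "non-polar ester"}
minty = {"cyclic alkene ketone"}
print(sorted(fruity | minty))
```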

categories

Smells can range through sweet/flowery/fruity, mild/vegetably, mild/animaly, mild/mineraly, strong/vegetably, strong/animaly, putrid/animaly, and sharp/mineraly.

The smell-category sequence correlates with molecule reactivity:

Ether -C-O-C-

Alcohol -CH2OH

Ester -COO-

Aryl =CHC=

Terpene =CC2

Ketone -COC-

Aldehyde -CHO

Acid -COOH

Amine -CH2NH2

Sulfhydryl -CH2SH

Halogens Br2

similarities based on chemical group

Similar chemical types make similar smells. Similar chemical origins make similar smells.

Alcohols are similar: biting, fruity, sweet.

Aldehydes are similar: malty, grassy (herby).

Amines are similar: spermous, fishy, nutty, roasted.

Aryls are similar: aromatic, burnt (smoky), camphorous (resinous), tarry (oily).

Carboxylic acids are similar: pungent (acrid, vinegary), putrid (sweaty, rancid), goaty (hircine).

Ethers are similar: fragrant, floral, fruity and sweet.

Ketones are similar: minty, spicy, savory, tobacco, musky (ambrosial), urinous.

Sulfur compounds are similar: asparagusy, cabbage-like, alliaceous (garlicy), foul, rotten.

Terpenes are similar: minty, flowery (fresh), rubbery, woody.

similarities based on similar chemical groups

Alcohols and aryl ketones are similar: biting, fruity, minty, musky.

Alcohols and esters are similar: fruity, sweet.

Aldehydes and alkene ketones are similar: malty, grassy, tobacco.

Aldehydes and ethers are similar: malty, grassy, earthy.

Aldehydes and terpenes are similar: malty, grassy, rubbery, woody.

Amines and steroid ketones are similar: spermous, fishy, nutty, roasted, urinous.

Amines and carboxylic acids are similar: spermous, fishy, nutty, roasted, pungent, putrid, goaty.

Polycyclic aryls and halogens are similar: camphorous, pharmaceutical.

Carboxylic acids and steroid ketones are similar: pungent, putrid, goaty, urinous.

Alkene ketones and terpenes are similar: tobacco, rubbery, woody.

Polycyclic aryl ketones and ethers are similar: minty, camphorous, musky, fragrant, flowery, fruity.

similarities based on organism type

Vegetable smells are similar: alcohols, aldehydes, ethers, aryl and alkene ketones, sulfur compounds, terpenes.

Animal smells are similar: carboxylic acids, amines, polycyclic aryl ketones, steroid ketones.

opposites

Carboxylic acids (sour, putrid, animal) and esters (sweet, fruity, vegetable) are opposites.

Carboxylic acids (sour, putrid, animal) and alcohols (sweet, fruity, vegetable) are opposites.

Amines (animal) and aldehydes (vegetable) are opposites.

Amines (animal) and terpenes (vegetable) are opposites.

odor hedonics

Odors have pleasantness, familiarity, and intensity {odor hedonics}, which define how much people like them.

1-Consciousness-Sense-Smell-Odor-Pheromone

pheromone

In mammals, chemicals {pheromone}| establish territories and find mates [Pantages and Dulac, 2000]. Sex-hormone-derived pheromones are in skin secretions [Savic et al., 2001] [Savic, 2002] [Sobel et al., 1999]. Baboons secrete female pheromones during sexual receptive period. Perhaps, pheromones synchronize ovulation [Gangestad et al., 2002] [McClintock, 1998] [Schank, 2001] [Stern and McClintock, 1998] [Weller et al., 1999].

McClintock effect

Women living in close proximity can come to menstruate at same time {McClintock effect}, perhaps from sweat pheromone.

scent marking

Animals mark locations with scent {scent marking}. Cats and antelope use urine and face or cheek scent glands. Skunk and badger use anal glands.

1-Consciousness-Sense-Smell-Odor-Kinds

primary odor

Linnaeus said smells can be alliaceous like garlic, ambrosial like musk, aromatic, foul, fragrant, hircine like goat, and nauseating {primary odor}. Primary odors can be putrid, flowery, fruity, burnt, spicy, and resinous. Primary odors can be camphorous, musky, floral, pepperminty, ethereal, pungent, and putrid. Primary odors can be floral, minty, ethereal like pear, musky, resinous like camphor, foul or sulfurous, and acrid like vinegar. Primary odors can be acidic, burnt, caprylic like goat, and fragrant. Primary odors can be camphorous, fishy, malty, minty, musky, spermous, sweaty, or urinous.

aegyptium

Almond oil, honey, cinnamon, orange blossom, and henna {aegyptium} can mix.

ambergris as smell

Sperm-whale-stomach oil {ambergris, smell} can protect stomach lining.

androstenone

Steroid molecules {androstenone} smell musky to 25% of people and urinous to 25% of people, and have no smell for 50% of people.

bergamot

Orange-rind oils {bergamot, smell} can mix.

cacous

Violets can make drops {cacous}. Casca preciosa is sassafras.

carvone

d-carvone {carvone} is caraway, and l-carvone is spearmint.

castoreum

Far-northern-beaver abdomen-gland oil {castoreum} marks territory.

civet

Ethiopian-cat near-genitalia-gland honey-like compound {civet, smell} is a sex pheromone.

ionone

Violets make compounds {ionone} that can inhibit odors.

kyphi

Rose, crocus, and violet oils {kyphi} can mix.

maple syrup urine

A genetic disease causes urine to smell like maple syrup {maple syrup urine}.

musk as smell

East-Asian deer abdominal-gland red jelly {musk, smell} has steroids.

neroli

Oranges can make attar {neroli}.

1-Consciousness-Sense-Smell-Physiology

smell physiology

Smell processes use molecule shape and electric-field differences to distinguish odorants {smell, physiology}. After seven or eight molecules bind to cilia odorant receptors, olfactory receptors signal once. People need 40 signals to perceive odor. Odorants affect several olfactory-receptor types, which send to smell neurons that excite and inhibit each other to form intensity ratios. Smell neurons work together to distinguish odors.
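The figures in this paragraph imply a back-of-envelope absolute threshold: roughly eight bound molecules per receptor signal, times forty signals, gives the minimum odorant molecules for perception. A sketch of that arithmetic, using the upper per-signal figure from the text:

```python
molecules_per_signal = 8   # receptor signals once per ~7-8 bound molecules
signals_to_perceive = 40   # ~40 signals needed to perceive an odor

min_molecules = molecules_per_signal * signals_to_perceive
print(min_molecules)  # 320 molecules at roughly absolute threshold
```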

Odors are painful at high concentrations. Smell can detect very low concentrations. Odor intensity and sense qualities mix.

Smell can detect source location. Smell can detect many sources from one location.

Lower air pressure increases volatility and so smell intensity. Higher humidity increases volatility and so smell intensity. Light typically decreases smell, by breaking down chemicals.

cross-adaptation

After smelling an odor, smell is less sensitive to later odors {cross-adaptation}, probably because both odors share one or more odorant-receptor types. Different odor sequences result in different sensitivities.

tip-of-the-nose phenomenon

People can be unable to name familiar odors {tip-of-the-nose phenomenon}. Unlike tip-of-the-tongue phenomena, there are no lexical cues.

1-Consciousness-Sense-Smell-Problems

anosmia

Sinus problems or head blows can cause inability to smell anything {anosmia}. People can be unable to smell specific odors {specific anosmia}.

hyperosmia

People can have heightened smell sensitivity {hyperosmia}.

hyposmia

People can have reduced smell sense {hyposmia}.

1-Consciousness-Sense-Smell-Theories

shape-pattern theory

Air chemicals and odorant receptors have shapes. Perhaps, chemical shapes must be complementary to receptor shapes to detect odorants {shape-pattern theory}. Odorant-receptor firing pattern determines odor.

stereochemical theory

Perhaps, molecule geometry correlates with odor type {stereochemical theory}. Smell receptor sites are small concave for camphorous smell, large concave for musky smell, V-shaped for minty smell, trough-shaped for ethereal smell, and concave-and-trough-shaped for floral smell. Receptor sites can have electric charges that attract oppositely charged molecules, with negative charge for pungent smell and positive charge for putrid smell [Amoore, 1964] [Moncrieff, 1949].

vibration theory

Perhaps, odorant molecules have vibration frequencies {vibration theory} (Luca Turin). Molecules with similar vibration frequency have similar smell.

1-Consciousness-Sense-Taste

taste

Taste {taste, sense} {gustation} detects chemicals dissolved in water, using molecule electrochemical reactions and shape, acidity, and polarity. Taste molecules are below 200 molecular weight and include ions, hydrogen ions, hydroxide ions, and sugars. Taste is a synthetic sense, with some analysis.

physical properties

Tastable molecules include hydrogen ions, hydroxide ions, salt ions, and sugars, which are water-soluble and have molecular weights less than 200. Water-soluble molecules vary in size, shape, chemical sites, acidity, and ionicity. Water-soluble chemicals vary in concentration. Tastable molecules attach to tongue chemical receptors.

types

Taste types are sweet, salt, sour, and bitter.

Sweet is not acid, salt, or base. Salt is neutral. Sour is acid. Bitter is base.

Sweet is non-polar. Salt, sour, and bitter are polar.

Sour acid and salt are similar. Bitter base and salt are similar. Sweet and salt are similar.

Sour acid and bitter base are opposites. Sour acid and sweet are opposites. Salt and sweet are opposites.

Taste molecules have same physical properties, and taste processes are similar, so taste perceptions are similar for all people with undamaged taste. Tastes relate in only one consistent and complete way. Taste relations are not symmetric, so tastes have unique relations. Tastes cannot substitute for each other. Tastes have specific sense qualities and so can never switch to other tastes. Newborns can detect sweet as pleasant and bitter as aversive.

Perhaps, the first taste was a food or poison sign.

mixing

Bitter and sweet can mix. Bitter and salt can mix. Salt and sour can mix. Tastes do not mix to make new tastes.

properties

Taste habituates quickly. Taste is in real time, with a half-second delay. Temperature affects taste, so sweets taste less sweet when warm than when cold. Taste has early, middle, and late sensations.

source location

Taste can detect source location. Taste can detect several sources from one location.

Taste has few spatial effects. However, taste can have interference from more than one source.

evolution

Perhaps, salt receptors evolved because animals need sodium and need associated chloride.

Perhaps, sour receptors evolved to detect food or dangerous acidic conditions.

Perhaps, sweet receptors evolved to detect sugar nutrients.

Perhaps, bitter receptors evolved to detect poisons.

development

Newborns do not taste salt, but babies soon can taste it, and they like it.

Newborns can taste sour. Children like sour taste.

Newborns can taste sweet and think it pleasant.

Babies can taste bitter and think it aversive.

relations to other senses

Taste and retronasal-area smell can combine to make flavor. Odors affect taste receptors. Taste works at higher concentrations than smell. Taste has water as solvent, not air. Taste has few spatial effects. Taste molecules can have polarization. Taste and smell can have interference from more than one source. Both taste and smell are often silent. Taste and smell have early, middle, and late sensations. Taste does not use vibrations, but smell can use vibrations.

Taste is at tongue surface and so has touch. Texture affects taste. Touch can feel solutions on tongue and react to noxious tastes. Touch locates tongue taste receptors.

Taste seems unrelated to hearing and vision.

effects

Sour makes people's lips pucker, sometimes downward.

Bitter makes people's eyes and nose change.

Salt is alerting.

Savory is less alerting.

Sweet is calming.

flavor and taste

Taste and retronasal-area smell can combine {flavor, taste}.

1-Consciousness-Sense-Taste-Anatomy

taste anatomy

Taste anatomy includes tongue, taste buds, chemical receptors, and neurons. Tongue chemical receptors send to thalamus, which sends to cortical regions.

Tongue skin has chemical receptors for water-soluble molecules {taste, anatomy}. Each receptor cell has one receptor type. Taste uses four or five main receptor types, each with variations. Receptor types make dozens of combinations. Taste buds have all receptor types. Tongue has no special salt, sweet, or sour regions.

Taste neurons typically receive from more than one taste-receptor type. Taste neurons detect one main taste category: salt-best, sugar-best, acid-best, and bitter-best. Similar taste sensations vary only in intensity, not in quality, because similar receptors go to same taste neuron.
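The "X-best" labeling above can be sketched as an argmax over receptor inputs: a taste neuron receives from several receptor types, and its category is whichever input drives it most. The weights below are invented for illustration.

```python
def best_category(inputs):
    """Label a taste neuron by its strongest receptor input.
    inputs: dict mapping taste category -> receptor drive."""
    return max(inputs, key=inputs.get) + "-best"

# A neuron driven mostly by salt receptors is salt-best, even though
# it also receives sugar, acid, and bitter input.
neuron = {"salt": 0.9, "sugar": 0.2, "acid": 0.3, "bitter": 0.1}
print(best_category(neuron))  # salt-best
```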

Medulla solitary tract nucleus receives from tongue cranial nerves 7, 9, and 10, determines taste preferences, and sends to thalamus and to parabrachial nucleus, which also receives from GI tract. Taste cortex is in insula, which sends to orbitofrontal cortex.

1-Consciousness-Sense-Taste-Anatomy-Receptors

taste receptor

Tongue chemical receptors {taste receptor} are for sweet, sour, salty, bitter, and L-glutamate. Receptor cells have 50 chemoreceptors, all of the same receptor type, which detect positive ions or polarity.

glutamate receptor

Tongue chemoreceptors detect L-glutamate and other amino acids. Some receptors {glutamate receptor} {umami receptor} are metabotropic receptors similar to brain glutamate receptors and underlie savory taste (Kikunae Ikeda) [1908]. People with glutamate receptors can detect monosodium glutamate. Other receptors {amino-acid receptor} are altered sweet receptors that bind amino acids. Glutamate and amino-acid receptors couple to G-proteins, which have unknown second messengers.

salt receptor

Tongue chemoreceptors {salt receptor} detect positively charged salt ions, including sodium and potassium ions. Sodium-chloride sodium ions make pure salt taste. Potassium-chloride potassium ions make salt and bitter taste. Positive ions enter ion channels and directly cause depolarization.

Newborns do not taste salt, but babies soon can taste it, and they like it. Perhaps, salt receptors evolved because animals need sodium and need associated chloride.

Glycyrrhizic acid, in licorice, increases sodium-ion retention.

sour receptor

Tongue chemoreceptors {sour receptor} detect acids. Acid hydrogen ions enter ion channels, block potassium channels, or bind to and open other positive-ion channels. Newborns can taste sour. Children like sour taste. Perhaps, sour receptors evolved to detect food or dangerous acidic conditions.

sweet receptor

Tongue chemoreceptors {sweet receptor} detect non-ionic organic compounds, mostly sugars. Sweet-receptors couple to G-proteins, and second messengers close potassium channels. Newborns can taste sweet and like it. Perhaps, sweet receptors evolved to detect sugar nutrients.

Asclepiad, similar to milkweed, inhibits tasting sweet. African miracle berry makes sour foods taste sweet. Artificial sweeteners mimic sugar molecules.

T1R proteins

Proteins {T1R proteins} can make cell-membrane taste chemoreceptors. Sweet receptor has one T1R2 and one T1R3 protein. Umami savory receptor has one T1R1 and one T1R3 protein. Bitter receptors use a separate protein family (T2R), with 25 possible proteins.

1-Consciousness-Sense-Taste-Anatomy-Receptors-Bitter

bitter receptor

Thirty different chemoreceptors {bitter receptor} detect non-ionic organic compounds, such as alkaloids, including quinine and unripe-potato alkaloid {solanine}. Bitter receptors couple to G-proteins. Second messengers release calcium ions from endoplasmic reticulum. All bitter-receptor types synapse on same taste-neuron type, so people cannot discriminate among bitters. Babies can taste bitter and dislike it. Perhaps, bitter receptors evolved to detect poisons.

6-n-propylthiouracil taste receptor

6-n-propylthiouracil (PROP) tastes bitter. Supertasters have its chemoreceptors {6-n-propylthiouracil taste receptor}, have many fungiform papillae, and have high-intensity tastes. One-third of people cannot taste PROP, lack those receptors, have fewer fungiform papillae, and have low-intensity tastes.

PTC taste

Phenylthiocarbamide tastes bitter and is similar to propylthiouracil. One-third of people cannot taste it.

1-Consciousness-Sense-Taste-Anatomy-Tongue

taste bud

Tongue and soft-palate hemispherical cell clusters {taste bud}| hold cells {taste receptor cell} that have tip microvilli. Adult tongue has 10,000 taste buds, but babies have more. Taste buds last one week, fade, and then new ones grow.

microvilli

Taste-bud cells have tips with projections {microvillus} {microvilli} that extend into taste pore.

1-Consciousness-Sense-Taste-Anatomy-Tongue-Papilla

papilla

Tongue has four bump types {papilla}| {papillae}.

circumvallate papilla

Papillae {circumvallate papilla} can be largest, be on tongue rear sides before tonsils, be large circular mounds with depressed circumference, and have three to five taste buds.

filiform papilla

Papillae {filiform papilla} can be smallest, be most numerous, be down tongue-top middle, and have no taste buds.

foliate papilla

Papillae {foliate papilla} can be medium-size, be tissue folds at tongue rear sides, and have taste buds.

fungiform papilla

Papillae {fungiform papilla} can be next smallest, be on tongue broad part, be one-millimeter-size mushroom shapes at tongue tip and edges, and have six taste buds each.

1-Consciousness-Sense-Taste-Physiology

taste physiology

Taste distinguishes water-soluble salt, sugar, acid, and base chemicals {taste, physiology}. Taste receptors are for only salt, sugar, acid, or base. For example, salt taste receptors measure salt concentration as salt-to-receptor binding per second. Different taste receptors converge on taste neurons. Similar taste sensations vary only in intensity, not in quality, because similar receptors go to same taste neuron.

Salty chemicals are small and ionic and have neutral acidity. Sodium-chloride sodium ions make pure salt taste. Potassium-chloride potassium ions make salt and bitter taste.

Sour chemicals are small, ionic, and acidic. Hydrogen chloride makes pure sour taste.

Sweet chemicals are large and polar and have neutral acidity. Glucose makes pure sweet taste. Fructose and galactose are sweet.

Bitter chemicals are small or large, ionic, and basic. Hydroxide ions make pure bitter taste.

Savory chemicals are large, ionic-polar, and slightly acidic. L-glutamic acid sodium salt (monosodium glutamate) tastes distinctively salty and sweet.

Taste neurons inhibit and excite each other to compare sugar, acid, base, salt, and L-glutamate receptor inputs to find differences and indicate taste types [Kadohisa et al., 2005] [Pritchard and Norgren, 2004] [Rolls and Scott, 2003].

Tastes are relative. For example, salt only tastes salty relative to other tastes [Brillat-Savarin, 1825]. Saliva salt level is highest in morning, drops until afternoon, and then rises again to high morning value, so salt amount needed for salt taste varies during day. Saliva substance concentrations can vary tenfold. Tongue taste-receptor pattern affects taste.

Taste is painful at high concentrations. Taste can detect low concentrations.

Taste can detect source location. Taste can detect several sources from one location.

acidity

Molecule atoms, bonds, and electric charge determine acidity, which can be acidic, neutral, or basic.

Sour is acidic. Salty is neutral acidity. Savory is neutral. Sweet is neutral. Bitter is basic.

Salty, savory, and sweet have similar neutrality.

Sour and bitter have opposite acidity.

ionicity

Molecule atoms and bonds and molecule-electron properties determine ionicity, which can be ionic or polar.

Sweet and some bitters are polar. Salty, savory, sour, and some bitters are ionic.

Sour and sweet, salty and sweet, and savory and sweet have opposite ionicity.

size

Sour and some bitters have similar small size.

Salts have medium size.

Sweet, savory, and some bitters have similar large size.

polarity or ionicity; acidity, neutrality, or basicity; and size

Taste molecules have a combination of polarity or ionicity; acidity, neutrality, or basicity; and size.

Taste molecules can be:

acidic: hydrogen ion (sour)

neutral: monosodium glutamate (savory)

neutral: sodium chloride and potassium chloride (salt)

neutral: glucose and fructose (sweet)

slightly basic: phenylthiourea, phenylthiocarbamide, and 6-n-propylthiouracil (bitter)

basic: hydroxide ion (bitter)

Taste molecules can be:

polar: glucose and fructose (sweet)

polar: phenylthiourea, phenylthiocarbamide, and 6-n-propylthiouracil (bitter)

ionic: hydroxide ion (bitter)

ionic: hydrogen ion (sour)

ionic: sodium chloride and potassium chloride (salt)

ionic: monosodium glutamate (savory)

(They cannot be non-polar, because non-polar does not dissolve in water.)

Taste molecules can have molecular weight 1 to 200:

1: hydrogen ion (sour)

17: hydroxide ion (bitter)

58: sodium chloride (salt)

75: potassium chloride (salt)

152: phenylthiourea and phenylthiocarbamide (bitter)

169: monosodium glutamate (savory)

170: 6-n-propylthiouracil (bitter)

180: glucose and fructose (sweet)

Taste molecules are:

Sour: acidic, ionic, and small.

Salt: neutral, ionic, and medium.

Savory: neutral, ionic, and large.

Sweet: neutral, polar, and large.

Bitter: slightly basic, polar, and large.

Bitter: basic, ionic, and small.

Acidic and polar do not exist, because acids are ionic. Basic and polar do not exist, because bases are ionic.

Small and polar do not exist, because small molecules are ionic. Medium and polar do not exist, because medium molecules are ionic.

Small and neutral do not exist, because small molecules have hydrogen ions or hydroxide ions. Large and acidic do not exist, because acidic molecules have small hydrogen ions. Large and basic do not exist, because basic molecules have small hydroxide ions.

Taste molecules fall into six categories:

Large polar: neutral (sweet) or slightly basic (bitter)

Large ionic: neutral (savory)

Medium ionic: neutral (salt)

Small ionic: acidic (sour) or basic (bitter).
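The six-category classification above can be sketched as a lookup from a molecule's acidity, bonding, and size class. This is an illustrative sketch only; the molecular-weight cutoffs for small, medium, and large are assumptions chosen to fit the listed examples, not established values.

```python
# Taste categories from the text, keyed by (acidity, bonding, size class).
TASTE_BY_PROFILE = {
    ("acidic", "ionic", "small"): "sour",
    ("basic", "ionic", "small"): "bitter",
    ("neutral", "ionic", "medium"): "salt",
    ("neutral", "ionic", "large"): "savory",
    ("neutral", "polar", "large"): "sweet",
    ("slightly basic", "polar", "large"): "bitter",
}

def size_class(molecular_weight):
    """Assumed cutoffs: small below 20, medium below 100, large otherwise."""
    if molecular_weight < 20:
        return "small"
    if molecular_weight < 100:
        return "medium"
    return "large"

def taste(acidity, bonding, molecular_weight):
    """Predicted taste, or None for a profile with no category."""
    return TASTE_BY_PROFILE.get((acidity, bonding, size_class(molecular_weight)))

# Glucose (MW 180): neutral, polar, large
print(taste("neutral", "polar", 180))  # sweet
# Hydrogen ion (MW 1): acidic, ionic, small
print(taste("acidic", "ionic", 1))     # sour
```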

learned taste aversion

If new flavor associates with gastrointestinal illness, people become averse to the flavor {learned taste aversion}.

taste zero

Taste receptors adjust for current saliva substance concentrations. Taste stimulus at same concentration as saliva concentration is tasteless {taste zero}.

1-Consciousness-Sense-Taste-Kinds

primary taste

Henning said tastes are bitter, salty, sour, and sweet {primary taste} {basic taste}. Some people can distinguish monosodium glutamate savory taste from salt taste.

capsaicin

Peppers have molecules {capsaicin} that cause pain and sweating.

chow spice

Ethiopian spice mixtures {chow, spice} have chili and other spices and inhibit bacteria.

ginger taste

Roots {ginger, taste} prevent seasickness.

jambu

Brazilian daisy {spilanthes} {jambu} numbs and tingles mouth.

monosodium glutamate

Some people can distinguish umami savory taste from salt taste. Glutamic-amino-acid sodium salt {monosodium glutamate}| (MSG) tastes distinctively salty and sweet. Autolyzed yeast extract, glutavene, calcium caseinate, sodium caseinate, Marmite, soy sauce, anchovy, and fish sauce have high MSG.

phenylethylamine

Brain makes amphetamines {phenylethylamine} (PEA).

phenylthiourea

For one-half to two-thirds of people, with dominant allele, urea compounds {phenylthiourea} (PTC) can taste bitter. PTC has no taste to the other one-third to one-half of people, who cannot recognize the N-C=S chemical functional group [Kalmus and Hubbard, 1960].

tetrodotoxin

Puffer-fish tissues can have poison {tetrodotoxin}, to which predators are averse.

1-Consciousness-Sense-Temperature

temperature sense

Skin has cold and warm receptors {temperature sense} {temperature receptor}. Coolness and warmth are relative and depend on body-tissue relative average random molecule speed. Very cold objects can feel hot at first.

Nociceptive and thermal receptor systems interact. Tactile and thermal receptor systems interact. Warmth and coolness have no pressure.

Nose and tongue thermoreceptors adjust food-digestion enzymes.

thermoreceptor

Skin mechanoreceptors {thermoreceptor} can detect temperature. Muscles, tendons, joints, alimentary canal, and bladder have thermoreceptors.

cold fiber

Skin has mechanoreceptors {cold fiber} that detect decreased skin temperature. Skin is normally 30 C to 36 C. If objects are colder than 30 C, cold fibers provide information about material as heat flows from skin to object. Cold receptors are mostly on face and genitals. Cold fibers are 30 times more numerous than warmth fibers.

warmth fiber

Skin has receptors {warmth fiber} that detect increased skin temperature. Skin is normally 30 C to 36 C. If skin is above normal temperature, warmth fibers provide information about material as heat flows from object to skin. Warmth fibers also provide information about body state, such as fever or warm-weather overheating. Heat receptors are deep in skin, especially in tongue. Warmth fibers are 30 times fewer than cold fibers.
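The cold-fiber and warmth-fiber division described above can be sketched as a simple decision. The 30 C to 36 C normal skin range is from the text; the hard cutoffs are an illustrative simplification of graded receptor responses.

```python
# Normal skin temperature range, per the text.
SKIN_MIN_C, SKIN_MAX_C = 30.0, 36.0

def active_fibers(object_temp_c):
    """Which thermal fiber class carries information on contact (sketch)."""
    if object_temp_c < SKIN_MIN_C:
        return "cold fibers"    # heat flows from skin to object
    if object_temp_c > SKIN_MAX_C:
        return "warmth fibers"  # heat flows from object to skin
    return "neither"            # object within normal skin range

print(active_fibers(10.0))  # cold fibers
print(active_fibers(45.0))  # warmth fibers
```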

1-Consciousness-Sense-Thirst

thirst sense

Throat chemoreceptors {thirst, receptor} {dryness, receptor} measure dryness.

1-Consciousness-Sense-Touch

touch

Mechanoreceptors can detect pressure at inside or outside body surfaces {touch, sense}. Compression, tension, and torsion stresses cause body-surface strains. Touch analyzes material properties, such as temperature, texture, surface curvature, density, hardness, and elasticity. Touch is a synthetic sense, with some analysis. Protozoa have touch and stretch receptors.

physical properties

Touch events include tissue stresses, motions, and vibrations, which displace surfaces and regions. Stresses vary in area, pressure, and vibration states. Pressures include compression, tension, and torsion. Stresses and stress changes stress skin mechanical receptors.

types

People can feel "butterflies", tickle, tingle, gentle touch, regular pressure, and sharp pressure. People can feel motion and vibrations up to 20 Hz. People can feel object temperature, texture, surface curvature, density, hardness, and elasticity.

Touches relate in only one consistent and complete way. Touches are not symmetric, so touches have unique relations. Touches cannot substitute. Touches have specific sense qualities and so can never switch to other touches. Touches do not have opposites. Touch has same physical properties, and touch processes are similar, so touch perceptions are similar, for all undamaged people.

Touch is pleasurable for babies and parents and for sexual relations. Perhaps, the first touch was for food or mating.

properties

Touch habituates quickly. Touch is in real time, with a half-second delay. Touch can detect low pressure or speed. Touch is painful at high pressure or speed. Touches do not mix to make new touches. Age reduces vibration sensitivity.

source location

Touch can locate body and objects {where system}.

From one location, touch detects only one source.

Touch can detect multiple sensations simultaneously.

Touch has no fixed coordinate origin (egocenter), so coordinates change with task.

evolution

Humans have higher touch sensitivity than other mammals. Lower animals have even less touch sensitivity. Perhaps, the first touch was for food or mating.

Protozoa have touch and stretch receptors.

development

Newborns can turn in touched-cheek direction.

effects

Pressure and touch receptor activity increases muscle flexor activity and decreases muscle extensor activity.

Emotions generate brain-gut hormones that cause abdominal feelings.

relations to other senses

Hearing, temperature, and touch involve mechanical energy.

Touch can feel vibrations below 20 Hz. Sound vibrates eardrum and other body surfaces but is not felt as touch. Touch uses higher energy level than hearing. Hearing uses waves that travel far, but touch uses vibrations that travel only short distances. Hearing and touch have no input from most spatial locations. Hearing has sound attack and decay, and touch has temporal properties.

Touch can feel air near smell receptors and react to noxious smells. Touch locates smell receptors in upper nose.

Touch can feel solutions on tongue and react to noxious tastes. Touch locates tongue taste receptors.

Touch coordinates with vision.

Nociceptive and thermal receptor systems interact. Tactile and thermal receptor systems interact.

hyperaesthesia touch

People can have extreme touch sensitivity and low pain threshold {hyperaesthesia, touch}|.

1-Consciousness-Sense-Touch-Anatomy

epicritic pathway

Sense-nerve myelinated-fiber pathways {epicritic pathway} {lemniscal system} can begin at Meissner's corpuscles, Pacinian corpuscles, hair root structures, muscle spindles, and Golgi tendon organs, go through lateral cervical nucleus, continue to gracile and cuneate nuclei, and end at cerebellum and thalamus.

Skin mechanical receptors send to spinal cord, brainstem nuclei, thalamus, and parietal lobe.

1-Consciousness-Sense-Touch-Anatomy-Fibers

A-beta fiber

Skin mechanoreceptor fibers {A-beta fiber} can be large.

fast-adapting fiber I

Meissner corpuscles are fast-adapting mechanoreceptors and have small receptive fields {fast-adapting fiber I} (FA I).

fast-adapting fiber II

Pacinian corpuscles are fast-adapting mechanoreceptors and have large receptive fields {fast-adapting fiber II} (FA II).

slow-adapting fiber I

Merkel receptors are slow-adapting mechanoreceptors and have small receptive fields {slow-adapting fiber I} (SA I).

slow-adapting fiber II

Ruffini receptors are slow-adapting mechanoreceptors and have large receptive fields {slow-adapting fiber II} (SA II).

1-Consciousness-Sense-Touch-Anatomy-Receptors

touch receptor

Skin, muscles, tendons, joints, alimentary canal, and bladder have mechanical receptors that detect tissue strains, pressures/stresses (compression, tension, and torsion), motions, and vibrations {touch receptor}. Eight basic mechanoreceptor types each have many variations, making thousands of combinations. Skin has encapsulated tactile receptors, free-nerve-ending receptors, hair-follicle receptors, Meissner's corpuscles, Merkel cells, Pacinian corpuscles, palisade cells, and Ruffini endorgans.

Skin mechanoreceptors (thermoreceptor) can detect surface temperature. Muscles, tendons, joints, alimentary canal, and bladder have thermoreceptors. Skin mechanoreceptors (cold fiber) can detect decreased skin temperature. Cold receptors are mostly on face and genitals. Skin has receptors (warmth fiber) that detect increased skin temperature. Heat receptors are deep in skin, especially in tongue. Warm fibers are 30 times fewer than cool fibers.

free nerve ending

Skin mechanoreceptors {free nerve ending} respond to all skin-stimulation types, because they are not specialized receptors.

hair cell of skin

Skin mechanoreceptors {hair cell, skin}, with tip cilia {stereocilia} {stereocilium}, detect movement. Stereocilia movement begins neurotransmitter release. Hair cells send to brainstem and receive from brain.

Herbst corpuscle

Woodpeckers have tongue vibration detectors {Herbst corpuscle}, which are like Pacinian corpuscles.

Krause end bulb

Skin encapsulated mechanoreceptors {Krause end bulb} {Krause's end bulb} are in mammals other than primates and correspond to primate Meissner's corpuscles. Krause end bulbs are mostly in genitals, tongue, and lips.

lateral line system

Teleosts have side canals and openings {lateral line system}|, running from head to tail, which perceive water pressure and flow changes. Visual signals influence lateral-line perceptions.

Meissner corpuscle

Primate glabrous-skin encapsulated mechanoreceptors {Meissner's corpuscle} {Meissner corpuscle} are fast-adapting, have small receptive fields of 100 to 300 micrometers diameter, and lie in rows just below fingertip surface-ridge dermal papillae. Meissner's corpuscles are only in primates and correspond to Krause end bulbs in other mammals.

Meissner's corpuscles respond to vibration, to detect changing stimuli. Maximum sensitivity is at 20 to 40 Hz. Range is from 1 Hz to 400 Hz. Meissner's corpuscles send to myelinated dorsal-root neuron fibers.

Merkel cell

Numerous encapsulated mechanoreceptors {Merkel cell} {Merkel-cell neurite complex} form domes {Iggo-Pinkus dome} visible at skin surfaces. Merkel cells are slow-adapting, have small receptive fields of 100 to 300 micrometers diameter, and are in hairy-skin epidermis-bottom small scattered clusters and in glabrous-skin epidermis rete pegs.

Merkel cells detect continuous pressures and deformations as small as one micrometer. Merkel cells detect 0.4-Hz to 3-Hz low-frequency vibrations. Merkel cells send to myelinated dorsal-root neuron fibers.

ODC enzyme

Enzymes {ODC enzyme} begin touch chemical changes.

Pacinian corpuscle

Encapsulated mechanoreceptors {Pacinian corpuscle}, 1 to 2 mm diameter, detect deep pressure. Pacinian corpuscles are fast-adapting, have large receptive fields, and are in body, joint, genital, and mammary-gland hairy-skin and glabrous-skin deep layers.

Pacinian corpuscles respond to vibration with maximum sensitivity at 200 to 300 Hz. Range is 20 to 1500 Hz. Pacinian corpuscles can detect movements smaller than one micrometer. Pacinian corpuscles have lamellae, which act as high-pass filters to prevent steadily maintained pressure from making signals. Pacinian corpuscles send to myelinated dorsal-root neuron fibers.

palisade cell in skin

Hair follicles have pressure mechanoreceptors {palisade cell, touch} {hair follicle nerve}, around hair-shaft base, that have three myelinated-fiber types. Palisade cells respond to different deformations. Palisade cells respond to vibration frequencies from 1 to 1500 Hz.

Ruffini endorgan

Encapsulated skin mechanoreceptors {Ruffini's endorgan} {Ruffini endorgan} {Ruffini ending} are spindle shaped and 1 mm to 2 mm long, similar to Golgi tendon organs. Ruffini's endorgans are slow-adapting, are in joints and glabrous-skin dermis, and have large receptive fields (SA II), several centimeters diameter in arms and trunk. Ruffini endorgans have densely-branched center nerve endings.

Ruffini endorgans respond to skin slip, stretch, and deformation, with sensitivity less than that of SA I receptors. Ruffini endorgans respond to 100 Hz to 500 Hz. Ruffini endorgans send to myelinated dorsal-root neuron fibers.

skin receptor

Skin has hair-follicle receptors, Meissner's corpuscles, Merkel cells, Pacinian corpuscles, and Ruffini endorgans {skin receptor}.

tactile receptor

Skin encapsulated mechanoreceptors {tactile receptor} are for vibration, steady pressure, and light touch. Receptors measure amplitude, constancies, changes, and frequencies.

1-Consciousness-Sense-Touch-Physiology

touch physiology

Mechanoreceptors detect pressures, strains, and movements {touch, physiology}. Touch stimuli affect many touch-receptor types, which excite and inhibit each other to form intensity ratios. Receptors do not make equal contributions but have weights. Receptor sensitivity varies over touch spectrum and touch region [Katz, 1925] [McComas and Cupido, 1999] [Teuber et al., 1960] [Teuber, 1960].

Touch is more about weight, heat transfer, texture, and hardness {material property, touch} than about shape {geometric property, touch}. Weight discrimination is best if lifted-weight density is one gram per cubic centimeter. Touch receptors can detect mechanical vibrations up to 20 to 30 Hz.

Touch can detect body location. From one location, touch detects only one source. Touch can detect multiple sensations simultaneously. Touch has no fixed coordinate origin (egocenter), so coordinates change with task.

Pressure, pain, and touch receptor activity increases muscle flexor activity and decreases muscle extensor activity.

Mechanoreceptors detect pressures/stresses (compression, tension, torsion), strains, motions, and vibrations [Bolanowski et al., 1998] [Hollins, 2002] [Johnson, 2002]:

Free nerve ending: smooth or rough surface texture

Hair cell: motion

Meissner corpuscle: vibration

Merkel cell: light compression and vibration

Pacinian corpuscle: deep compression and vibration

Palisade cell: light compression

Ruffini endorgan: slip, stretch, and vibration
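The receptor-to-stimulus list above can be written as a lookup table and queried in either direction. This sketch is illustrative only.

```python
# Mechanoreceptor types and the stimuli they detect, per the list above.
RECEPTOR_STIMULI = {
    "free nerve ending": {"surface texture"},
    "hair cell": {"motion"},
    "Meissner corpuscle": {"vibration"},
    "Merkel cell": {"light compression", "vibration"},
    "Pacinian corpuscle": {"deep compression", "vibration"},
    "palisade cell": {"light compression"},
    "Ruffini endorgan": {"slip", "stretch", "vibration"},
}

def receptors_for(stimulus):
    """All receptor types that respond to a given stimulus."""
    return sorted(r for r, stimuli in RECEPTOR_STIMULI.items()
                  if stimulus in stimuli)

print(receptors_for("vibration"))
# ['Meissner corpuscle', 'Merkel cell', 'Pacinian corpuscle', 'Ruffini endorgan']
```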

pressure

Skin encapsulated tactile receptors are for steady pressure and light touch.

Skin free-nerve-ending mechanoreceptors respond to all skin-stimulation types.

Merkel cells detect continuous pressures and deformations as small as one micrometer. Merkel cells detect 0.4-Hz to 3-Hz low-frequency vibrations. Merkel cells are slow-adapting.

Pacinian corpuscles detect deep pressure. Pacinian corpuscles are fast-adapting.

Palisade cells respond to different deformations.

Ruffini endorgans respond to skin slip, stretch, and deformation, with sensitivity less than that of SA I receptors. Ruffini's endorgans are slow-adapting.

Nerve signals differ for pain, itch, heat, and pressure [Bialek et al., 1991]. Pain is irregular and high intensity and has rapid increase. Itch is regular and fast. Heat rises higher. Pressure has high intensity that fades away.

People can distinguish 10 stress levels. Maximum touch is when high pressure causes inelastic strain, stretching surface tissues past the point from which they can completely return, which typically causes pain.

vibration

Skin encapsulated tactile receptors are for vibration.

Skin free-nerve-ending mechanoreceptors respond to all skin-stimulation types.

Meissner's corpuscles respond to vibration, to detect changing stimuli. Maximum sensitivity is at 20 to 40 Hz. Range is from 1 Hz to 400 Hz. Meissner corpuscles are fast-adapting.

Pacinian corpuscles respond to vibration with maximum sensitivity at 200 to 300 Hz. Range is 20 to 1500 Hz. Pacinian corpuscles can detect movements smaller than one micrometer. Pacinian-corpuscle lamellae act as high-pass filters to prevent steadily maintained pressure from making signals. Pacinian corpuscles are fast-adapting.

Palisade cells respond to vibration frequencies from 1 to 1500 Hz.

Ruffini endorgans respond to 100 Hz to 500 Hz. Ruffini's endorgans are slow-adapting.

People can distinguish 10 vibration levels. Age reduces vibration sensitivity.
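The stated vibration ranges can be collected into intervals to ask which receptors respond at a given frequency. This sketch assumes hard cutoffs; real receptor tuning is graded, with the maxima noted in the text.

```python
# Receptor vibration ranges from the text, as (low Hz, high Hz) intervals.
VIBRATION_RANGE_HZ = {
    "Merkel cell": (0.4, 3),
    "Meissner corpuscle": (1, 400),
    "Pacinian corpuscle": (20, 1500),
    "palisade cell": (1, 1500),
    "Ruffini endorgan": (100, 500),
}

def responders(freq_hz):
    """Receptor types whose stated range includes freq_hz."""
    return sorted(r for r, (lo, hi) in VIBRATION_RANGE_HZ.items()
                  if lo <= freq_hz <= hi)

# 250 Hz falls in the Pacinian peak-sensitivity region (200 to 300 Hz)
print(responders(250))
```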

movement

Skin hair-cell mechanoreceptors detect movement.

Skin free-nerve-ending mechanoreceptors respond to all skin-stimulation types.

The touch system can detect whether objects are stationary. Touch can tell whether surface is sliding under stationary skin, or skin is sliding over stationary surface.

Most objects connect to the ground and are stationary. Their connection to the ground gives them high inertia, so they do not accelerate when pushed or pulled.

Objects that slide past stationary skin have inertia similar to or less than the body. (If large object slides by skin, the collision affects the whole body, not just the skin.) They have measurable deceleration when pushed or pulled.

The touch system measures accelerations and decelerations in the skin. Large decelerations in skin result from sliding by stationary objects. Small decelerations in skin result from objects sliding by skin.

People can distinguish 10 motion levels.

space

Skin touches objects, and touch receptors receive information about objects adjacent to body. As body moves around in space, mental space expands by adding adjacency information. Sensations impinge on body surface in repeated patterns at touch receptors. From receptor activity patterns, nervous system builds a three-dimensional sensory surface.

Foot motions stop at ground. Touch and kinesthetic receptors define a horizontal plane in space.

People can distinguish inside-body stimuli, as self. Tightening muscles actively compresses, to affect proprioception receptors that define body points. When people move, other objects do not move, so correlated body movements belong to self.

People can distinguish outside-body stimuli, as non-self. During movements or under pressure, body surfaces passively extend, to affect touch receptors that define external-space points. When people move, correlated non-movements belong to non-self.

Because distance equals rate times time, motion provides information about distances. Nervous system correlates body motions and touch and kinesthetic receptors to extract reference points and three-dimensional space. Repeated body movements define perception metrics. Such ratios build standard length, angle, time, and mass units that model physical-space lengths, angles, times, and masses. As body, head, and eyes move, they trace geometric structures and motions.

material properties

Touch can identify {what system}.

Holding in hand determines weight.

Touching with no moving determines temperature. Material properties determine heat flow, which determines temperature, which ranges from cold to warm to pain. Temperature perceptual processes compare thermoreceptor inputs. People can distinguish 10 temperature levels.

Applying pressure determines hardness.

Sliding touch back and forth determines texture.

Wrapping around determines shape and volume. Following contours determines shape.

Touch is more about weight, heat transfer, texture, and hardness than about shape. Weight discrimination is best if lifted-weight density is one gram per cubic centimeter.

qualities

Emotions generate brain-gut hormones that cause abdominal feelings. Maximum touch is when high pressure causes inelastic strain, stretching surface tissues past the point from which they can completely return, which typically causes pain.

neuron

Nerve signals differ for pain, itch, heat, and pressure [Bialek et al., 1991]. Pain is irregular and high intensity and has rapid increase. Itch is regular and fast. Heat rises higher. Pressure has high intensity that fades away.

EEG

In NREM sleep, anesthesia, and waking, short touch causes P1 cortical response 25 milliseconds later. In waking, short touch causes N1 cortical response 100 milliseconds later, lasting hundreds of milliseconds.

temperature

Coolness and warmth are relative and depend on body-tissue relative average random molecule speed. Very cold objects can feel hot at first. Skin is normally 30 C to 36 C. If objects are colder than 30 C, cold fibers provide information about material as heat flows from skin to object. If skin is above normal temperature, warmth fibers provide information about material as heat flows from skin to object. Warmth fibers also provide information about body state, such as fever or warm-weather overheating.

exploratory procedure

When touching objects, people use hand-movement patterns {exploratory procedure} to learn about features. Applying pressure determines hardness. Wrapping around determines shape and volume. Following contours determines shape. Touching with no moving determines temperature. Sliding touch back and forth determines texture. Holding in hand determines weight.
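The exploratory procedures above amount to a property-to-movement mapping; a minimal illustrative sketch:

```python
# Hand-movement pattern used to learn each material property, per the text.
EXPLORATORY_PROCEDURE = {
    "hardness": "apply pressure",
    "shape": "follow contours",          # or wrap around
    "volume": "wrap around",
    "temperature": "static touch, no movement",
    "texture": "slide back and forth",
    "weight": "hold in hand",
}

print(EXPLORATORY_PROCEDURE["texture"])  # slide back and forth
```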

haptic touch

Skin, muscles, tendons, and joints have mechanoreceptors that work with muscle movements to explore environment. Touching by active exploration with fingers {haptic touch} {haptic perception} uses one information channel. Passive touch uses parallel channels. Touch can tell whether surface is sliding under stationary skin, or skin is sliding over stationary surface. See Figure 1.

two-point threshold

For people to perceive two touches as separate, the touches must be separated by a minimum distance {two-point threshold}, which differs at different skin areas.

what system

Touch can identify {what system, touch}. Touch is more about weight, heat transfer, texture, and hardness than about shape.

where system

Touch can locate {where system, touch}.

1-Consciousness-Sense-Urination

urination sense

Bladder mechanoreceptors {urination, receptor} {bladder, receptor} measure distension {distension receptor, bladder}.

1-Consciousness-Sense-Vestibular System

vestibular system

Semicircular canals, utricle, and saccule {vestibular system}| work together. Vestibular system detects rotary and linear accelerations and body positions. Vestibular system maintains balance. All vertebrates have semicircular canals to detect accelerations.

Gravity makes constant force, and vestibular systems are similar, so balance feelings are similar, for all undamaged people.

Body-equilibrium neurons continuously stimulate motor nerves. If body-equilibrium nerves have damage, body becomes weak [Cole, 1995] [Lee and Lishman, 1975].

1-Consciousness-Sense-Vestibular System-Anatomy

semicircular canal

In inner ear, three mutually perpendicular semicircular tubes {semicircular canal}| {labyrinth, inner ear} detect head rotation.

otolith

In inner ear, small calcium-carbonate beads {otolith}| press on utricle and saccule hair-cell hairs.

saccule

Inner-ear parts {saccule}| can have small calcium-carbonate stones pressing on hair-cell hairs to detect body positions and rotary and linear accelerations.

utricle

Inner-ear parts {utricle}| can have small calcium-carbonate stones pressing on hair-cell hairs to detect head positions, centrifugal forces, and linear accelerations.

1-Consciousness-Sense-Vestibular System-Problems

Ménière disease

Hair-cell damage can produce dizziness {Ménière's disease} {Ménière disease}.

nystagmus

Rapid involuntary eyeball oscillation {nystagmus}| can accompany dizziness.

1-Consciousness-Sense-Vision

vision

Perception, imagination, dreaming, and memory-recall process visual information to represent color, distance, and location {vision, sense}. Eyes detect visible light by absorbing light energy to depolarize receptor-cell membrane. Vision analyzes light intensities and frequencies [Wallach, 1963]. Vision can detect color, brightness, contrast, texture, alignment, grouping, overlap, transparency, shadow, reflection, refraction, diffraction, focus, noise, blurriness, smoothness, and haze. Lateral inhibition and spreading excitation help find color categories and space surfaces.

properties: habituation

Vision habituates slowly.

properties: location

Vision can detect location. Vision detects only one source from one location. Vision receives from many locations simultaneously. Vision perceives locations that correspond to physical locations, with same lengths and angles.

properties: synthetic sense

Vision is a synthetic sense. From each space direction/location, vision mixes colors and reduces frequency-intensity spectrum to one color and brightness.

properties: phase

Vision does not use electromagnetic-wave phase differences.

properties: time

Vision is in real time, with a half-second delay.

factors: age

Age gradually yellows eye lenses, and vision becomes more yellow.

factors: material

Air is transparent to visible light and other electromagnetic waves. Water is opaque, except to visible light and electric waves. Skin is translucent to visible light.

nature: language

People see same basic colors, whether language has rudimentary or sophisticated color vocabulary. However, people can learn color information from environment and experiences. Fundamental sense qualities are innate and learned.

nature: perspective

Vision always has viewpoint, which always changes.

relations to other senses

Vision seems unrelated to hearing. Hearing has higher energy level than vision. Hearing has longitudinal mechanical waves, and vision has transverse electric waves. Hearing has ten-octave frequency range, and vision has one-octave frequency range. Hearing uses wave phase differences, but vision does not. Hearing is silent from most spatial locations, but vision displays information from all scene locations. Hearing has sound attack and decay, but vision is so fast that it has no temporal properties. Integrating vision and hearing makes three-dimensional space. Hearing can have interference from more than one source, but vision can have interference from only one source. Hearing hears multiple frequencies, but vision reduces to one quality. Vision mixes sources and frequencies into one sensation, but hearing can detect more than one source and frequency from one location.
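The octave-range comparison above can be checked numerically, assuming the usual ranges of roughly 20 Hz to 20 kHz for human hearing and roughly 400 to 790 THz for visible light:

```python
import math

def octaves(lo, hi):
    """Number of frequency doublings between lo and hi."""
    return math.log2(hi / lo)

hearing = octaves(20, 20_000)      # close to 10 octaves
vision = octaves(400e12, 790e12)   # close to 1 octave
print(f"hearing: {hearing:.1f} octaves, vision: {vision:.2f} octaves")
```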

Touch provides information about eyes. Vision coordinates with touch. Vision is at eye body surface, but brain feels no touch there.

Vision coordinates with kinesthesia.

Vision seems unrelated to smell and taste.

graphics

Images use vector graphics, such as splines with generalized ellipses or ellipsoids. Splines represent lines and can represent region boundary lines. Spline sets can represent surfaces using parallel lines or line grids, because they divide surfaces into polygons. Closed surfaces can be polygon sets. For simplicity, polygons can be triangles. Perhaps, brain uses ray tracing, but not two-dimensional projection.

Vector graphics represents images using mathematical formulas for volumes, surfaces, and curves (including boundaries) that have parameters, coordinates, orientations, colors, opacities, shading, and surface textures. For example, circle information includes radius, center point, line style, line color, fill style, and fill color. Vector graphics includes translation, rotation, reflection, inversion, scaling, stretching, and skewing. Vector graphics uses logical and set operations and so can extrapolate and interpolate, including filling in.
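As an illustration of the circle example above, a vector-graphic primitive stores parameters rather than pixels, so transformations become parameter updates. The class and field names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Circle:
    """Vector-graphic circle: parameters, not pixels."""
    cx: float
    cy: float
    radius: float
    line_color: str = "black"
    fill_color: str = "none"

    def translate(self, dx, dy):
        # Translation only moves the center point.
        return Circle(self.cx + dx, self.cy + dy, self.radius,
                      self.line_color, self.fill_color)

    def scale(self, factor):
        # Uniform scaling only rescales the radius.
        return Circle(self.cx, self.cy, self.radius * factor,
                      self.line_color, self.fill_color)

c = Circle(0.0, 0.0, 1.0).translate(3.0, 4.0).scale(2.0)
print(c.cx, c.cy, c.radius)  # 3.0 4.0 2.0
```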

movement

Vision improves motor control by locating and recognizing objects.

evolution

More than 500 million years ago, animal skin touch-receptor cells evolved photoreceptor protein for dim light, making light-sensitive rod cells. More than 500 million years ago, gene duplication evolved photoreceptor proteins for bright light, and cone cells evolved.

Multiplying light-sensitive cells built a rod-cell region. Rod-cell region sank into skin to make a dimple, so light can enter only from straight-ahead. Dimple became a narrow hole and, like pinholes, allowed image focusing on light-sensitive rod-cell region. Transparent skin covered narrow hole. Transparent-skin thickening created a lens, allowing better light gathering. Muscles controlled lens shape, allowing focusing at different distances.

evolution: beginning

Perhaps, the first vision was for direct sunlight, fire, lightning, or lightning bugs.

evolution: animals

Animal eyes are right and left, not above and below, to help align vertical direction.

development

Pax-6 gene has homeobox and regulates head and eye formation.

change blindness

People often do not see scene changes or anomalies {change blindness}, especially if overall meaning does not change.

blinking

When scene changes during eye blinks, people do not see differences.

saccades

When scene changes during saccades, people do not see differences.

gradient

People do not see gradual changes.

masking

People do not see changes when masking hides scene changes.

featureless intermediate view

When a featureless gray picture flashes between views of first scene and slightly-different second scene, people do not see differences.

attentional load

If attentional load increases, change blindness increases.

enactive perception

Vision behavior and use determine vision phenomena {enactive perception} [Noë, 2002] [Noë, 2004] [O'Regan, 1992] [O'Regan and Noë, 2001].

fixation in vision

To fixate moving visual target, or stationary target when head is moving {fixation, vision}|, vertebrates combine vestibular system, vision system, neck somatosensory, and extraocular proprioceptor movement-sensor inputs. For vision, eyes jump from fixation to fixation, as body, head, and/or eyes move. At each eye fixation, body parts have distances and angles to objects (landmarks). (Fixations last long enough to gather new information with satisfactory marginal returns. Fixations eventually gather new information too slowly, so eyes jump again.)

mirror reversal

As observer looks in a plane mirror, mirror reflects observer top, bottom, right, and left at observed top, bottom, right, and left. Observer faces in opposite direction from reflection, reflection right arm is observer left arm, and reflection left arm is observer right arm, as if observer went through mirror and turned front side back (inside out) {mirror reversal}.

no inversion

Reflection through one point causes reflection and rotation (inversion). Inversion makes right become left, left become right, top become bottom, and bottom become top. Plane mirrors do not reflect through one point.

rotation

If an object is between observer and mirror, observer sees object front, and mirror reflects object back. Front top is at observer top, front bottom is at observer bottom, front right is at observer left, and front left is at observer right. Back top is at observer top, back bottom is at observer bottom, back right is at observer right, and back left is at observer left. It is like object has rotated horizontally 180 degrees. Mirrors cause rotation {mirror rotation}. 180-degree horizontal rotation around vertical axis exchanges right and left. 180-degree vertical rotation around right-left horizontal axis exchanges top and bottom. 180-degree vertical rotation around front-back horizontal axis exchanges right and left and top and bottom.
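The geometry above can be checked numerically: reflection in a plane mirror flips only the front-back axis, and that map decomposes into a 180-degree rotation about the vertical axis followed by a right-left exchange. A sketch with an assumed axis convention (x = right, y = up, z = toward the mirror):

```python
import numpy as np

mirror = np.diag([1.0, 1.0, -1.0])                # flips front-back only
rotate_180_vertical = np.diag([-1.0, 1.0, -1.0])  # 180 deg about y axis
flip_right_left = np.diag([-1.0, 1.0, 1.0])

# Rotation followed by right-left exchange reproduces the mirror map.
assert np.allclose(mirror, flip_right_left @ rotate_180_vertical)

# A point on the observer's right stays on the same side of space (x = +1);
# it is only "left" relative to the reflection's reversed facing direction.
right_hand = np.array([1.0, 0.0, 2.0])
print(mirror @ right_hand)
```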

mirror writing

If a transparent glass sheet has writing on the back side facing a plane mirror, observers looking at the glass front and mirror see the same "mirror" writing. People can easily read what someone writes on their foreheads, and it is not "mirror" writing. People can choose to observe from another viewpoint.

eyes

Because mirror reversal still occurs using only one eye, having two horizontally separated eyes does not affect mirror reversal. Observing mirror reversal while prone, with eyes vertically separated, does not affect mirror reversal.

reporting

Mirror reversals are not just verbal reports, because "mirror" writing is difficult to read and looks different from normal writing.

cognition

Because mirror reversal occurs even when people cannot perceive the mirror, mirror reversal does not involve cognitive rotation around vertical axis. People do not see mirror reversal if they think a mirror is present but none is.

optic array

Light rays reflect from visual-field objects, forming a two-dimensional array {optic array} [Gibson, 1966] [Gibson, 1979].

repetition blindness

People can fail to see repeated stimuli {repetition blindness}, especially if overall meaning does not change [Kanwisher, 1987].

1-Consciousness-Sense-Vision-Opacity

opacity

Surfaces can be transparent, translucent (semi-reflective), or opaque (reflective) {opacity}.

absorbance

For each wavelength, a percentage {absorbance} of impinging light remains in the surface. Surface transmits or reflects the rest.

reflectance

For each wavelength, a percentage {reflectance} of impinging light reflects from surface. Surface transmits or absorbs the rest. Reflectance changes at object boundaries are abrupt [Land, 1977]. Color depends on both illumination and surface reflectance [Land, 1977]. Comparing surfaces' reflective properties results in color.

transmittance

For each wavelength, a percentage {transmittance} of impinging light transmits through surface. Surface reflects or absorbs the rest.
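
The three quantities above account for all impinging light at each wavelength. A minimal sketch, with illustrative sample values rather than measured data:

```python
# Sketch: at each wavelength, absorbed, reflected, and transmitted
# fractions of impinging light sum to the whole.
# Sample values below are illustrative, not measured data.

def transmittance(absorbance, reflectance):
    """Fraction remaining after absorption and reflection."""
    t = 1.0 - absorbance - reflectance
    assert 0.0 <= t <= 1.0, "fractions must sum to at most 1"
    return t

# An opaque surface transmits nothing:
assert transmittance(absorbance=0.3, reflectance=0.7) == 0.0
# A clear glass at some wavelength:
assert abs(transmittance(0.04, 0.08) - 0.88) < 1e-9
```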

1-Consciousness-Sense-Vision-Anatomy

vision anatomy

Inner eyeball has a visible-light receptor-cell layer {vision, anatomy}.

occipital lobe

Areas V2 and V4 detect contour orientation, regardless of luminance. Area V4 detects curved boundaries.

temporal lobe

Middle temporal-lobe area V5 detects pattern directions and motion gradients. Dorsal medial superior temporal lobe detects heading.

temporal lobe: inferotemporal lobe

Inferotemporal lobe (IT) detects shape parts. IT and CIP detect curvature and orientation.

retina and brain

Brain sends little feedback to retina [Brooke et al., 1965] [Spinelli et al., 1965].

pathways

Brain processes object recognition and color from area V1, to area V2, to area V4, to inferotemporal cortex. Cortical area V1, V2, and V3 damage impairs shape perception and pattern recognition, leaving only flux perception. Brain processes locations and actions in a separate faster pathway.

lamellar body

At first-ventricle top, chordates have cells {lamellar body} with cilia and photoreceptors. In vertebrates, lamellar body evolved to make parietal eye and pineal gland.

spatial frequency channel

Cortical-neuron sets {spatial frequency channel} can detect different spatial-frequency ranges and so detect different object sizes.

1-Consciousness-Sense-Vision-Anatomy-Cells

vision cells

Vision cells {vision, cells} are in retina, thalamus, and cortex.

1-Consciousness-Sense-Vision-Anatomy-Cells-Cortex

cardinal cell

One thousand cortical cells collectively {cardinal cell} code for one perception type.

color difference neuron

Area-V4 neurons {color difference neuron} can detect adjacent and surrounding color differences, by relative intensities at different wavelengths.

color-opponent cell

Neurons {color-opponent cell} can detect output differences from different cone cells for same space direction.

comparator neuron

Visual-cortex neurons {comparator neuron} can receive same output that eye-muscle motor neurons send to eye muscles, so perception can account for eye movements that change scenes.

double-opponent neuron

Cells {double-opponent neuron} can have both ON-center and OFF-center circular fields and compare colors.

face cell

Some cortical cells {face cell} respond only to frontal faces, profile faces, familiar faces, facial expressions, or face's gaze direction. Face cells are in inferior-temporal cortex, amygdala, and other cortex. Face-cell visual field is whole fovea. Color, contrast, and size do not affect face cells [Perrett et al., 1992].

grandmother cell

Some brain neurons {grandmother cell} {grandmother neuron} {Gnostic neuron} {place cell, vision} can recognize a perception or store a concept [Barlow, 1972] [Barlow, 1995] [Gross, 1998] [Gross, 2002] [Gross et al., 1969] [Gross et al., 1972] [Konorski, 1967]. Place cells recognize textures, objects, and contexts. For example, they fire only when animal sees face (face cell), hairbrush, or hand.

1-Consciousness-Sense-Vision-Anatomy-Cells-Retina

amacrine cell in vision

Small retinal cells {amacrine cell, vision} inhibit inner-plexiform-layer ganglion cells, using antitransmitter to block pathways. There are 27 amacrine cell types.

bipolar cell in vision

Photoreceptor cells excite retinal neurons {bipolar cell, vision}. There are ten bipolar-cell types. Parasol ganglion cells can receive from large-dendrite-tree bipolar cells {diffuse bipolar cell}.

input

Central-retina small bipolar cells {midget bipolar cell} receive from one cone. Peripheral-retina bipolar cells receive from more than one cone. Horizontal cells inhibit bipolar cells.

output

Bipolar cells send to inner plexiform layer to excite or inhibit ganglion cells, which can be up to five neurons away.

ON-center cells

ON-center midget bipolar cells increase output when light intensity increases in receptive-field center and/or decreases in receptive-field periphery. OFF-center midget bipolar cells increase output when light intensity decreases in receptive-field center and/or increases in receptive-field periphery.
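
The ON-center/OFF-center opposition above can be sketched as rectified center-surround differences. This is a minimal illustrative model, not a physiological fit:

```python
# Sketch of ON-center vs OFF-center responses as rectified
# center-surround differences. Illustrative model only.

def on_center_response(center, surround):
    """Output rises when center intensity rises or surround falls."""
    return max(0.0, center - surround)

def off_center_response(center, surround):
    """Output rises when center intensity falls or surround rises."""
    return max(0.0, surround - center)

# Bright spot on dark surround drives the ON-center cell:
assert on_center_response(center=0.9, surround=0.1) > 0.0
assert off_center_response(center=0.9, surround=0.1) == 0.0
# Dark spot on bright surround drives the OFF-center cell:
assert off_center_response(center=0.1, surround=0.9) > 0.0
```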

ganglion cell

Retinal neurons {ganglion cell, retina} can receive from bipolar cells and send to thalamus lateral geniculate nucleus (LGN), which sends to visual-cortex hypercolumns.

midget ganglion cell

Small central-retina ganglion cells {midget ganglion cell} receive from one midget bipolar cell. Midget cells respond mostly to contrast. Most ganglion cells are midget ganglion cells.

parasol cell

Ganglion cells {parasol cell} {parasol ganglion cell} can receive from diffuse bipolar cells. Parasol cells respond mostly to change. Parasol cells are 10% of ganglion cells.

X cell

Ganglion X cells can make tonic and sustained signals, with slow conduction, to detect details and spatial orientation. X cells send to thalamus simple cells. X cells have small dendritic fields. X cells are more numerous in fovea.

Y cell

Ganglion Y cells can make phasic and transient signals, with fast conduction, to detect stimulus size and temporal motion. Y cells send to thalamus complex cells. Y cells have large dendritic fields. Y cells are more numerous in retinal periphery.

W cell

Ganglion W cells are small, are direction sensitive, and have slow conduction speed.

ON-center neuron

ON-center ganglion cells respond when light intensity above background level falls on their receptive field. Light falling on field surround inhibits cell. Bipolar cells excite ON-center neurons.

Four types of ON-center neuron depend on balance between cell excitation and inhibition. One has high firing rate at onset and zero rate at offset. One has high rate at onset, then zero, then high, and then zero. One has high rate at onset, goes to zero, and then rises to constant level. One has high rate at onset and then goes to zero.

OFF-center neuron

OFF-center ganglion cells increase output when light intensity decreases in receptive-field center. Light falling on field surround excites cell. Bipolar cells excite OFF-center neurons.

ON-OFF-center neuron

ON-OFF-center ganglion cells for motion use ON-center-neuron time derivatives to find movement position and direction. Amacrine cells excite transient ON-OFF-center neurons.

similar neurons

Ganglion cells are like auditory nerve cells, Purkinje cells, olfactory bulb cells, olfactory cortex cells, and hippocampal cells.

spontaneous activity

Ganglion-cell spontaneous activity can be high or low [Dowling, 1987] [Enroth-Cugell and Robson, 1984] [Wandell, 1995].

horizontal cell

Retinal cells {horizontal cell} can receive from receptor cells and inhibit bipolar cells.

1-Consciousness-Sense-Vision-Anatomy-Cells-Retina-Receptors

photoreceptor cell

Retina has pigment cells {photoreceptor cell}, with three layers: cell nucleus, then inner segment, and then outer segment with photopigment. Visual-receptor cells find illumination logarithm.

types

Human vision uses four receptor types: rods, long-wavelength cones, middle-wavelength cones, and short-wavelength cones.

hyperpolarization

Visual receptor cells hyperpolarize up to 30 mV from resting level [Dowling, 1987] [Enroth-Cugell and Robson, 1984] [Wandell, 1995]. Photoreceptors have maximum response at one frequency and lesser responses farther from that frequency.
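
The tuning curve described above, maximal at one frequency and falling off away from it, can be sketched as follows. The Gaussian shape and the width are illustrative assumptions; only the 534 nm peak (middle-wavelength cone) comes from the text below.

```python
import math

# Sketch of a photoreceptor tuning curve: maximal response at one
# wavelength, lesser responses farther from it. Gaussian shape and
# width are illustrative assumptions.

def cone_response(wavelength_nm, peak_nm=534.0, width_nm=60.0):
    return math.exp(-((wavelength_nm - peak_nm) / width_nm) ** 2)

# Response is maximal at the peak and falls off with distance:
assert cone_response(534.0) == 1.0
assert cone_response(534.0) > cone_response(500.0) > cone_response(450.0)
```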

rod cell

Rod-shaped retinal cells {rod cell} are night-vision photoreceptors, detect large features, and do not signal color.

frequency

Rods have maximum sensitivity at 498 nm, blue-green.

Just above cone threshold intensity {mesopic vision, rod}, rods are more sensitive to short wavelengths, so blue colors are brighter but colorless.

number

Retinas have 90 million rod cells.

layers

Rods have cell nucleus layer, inner layer that makes pigment, and outer layer that stores pigment. Outer layer is next to pigment epithelium at eyeball back.

size

Rods are larger than cones.

pigment

Rod light-absorbing pigment is rhodopsin. Cones have iodopsin.

rod cell and long-wavelength cone

Brain can distinguish colors using light that only affects rod cells and long-wavelength cone cells.

fovea

Fovea has no rod cells. Rod cells are denser around fovea.

1-Consciousness-Sense-Vision-Anatomy-Cells-Retina-Receptors-Cones

cone cell

Cone-shaped retinal cells {cone, cell} have daylight-vision photoreceptors and detect color and visual details.

types

Humans have three cone types. Cone maximum wavelength sensitivities are at indigo 437 nm {short-wavelength cone}, green 534 nm {middle-wavelength cone}, and yellow-green 564 nm {long-wavelength cone}. Shrimp can have eleven cone types.

evolution

Long-wavelength cones evolved first, then short-wavelength cones, and then middle-wavelength cones. Long-wavelength and middle-wavelength cones differentiated 30,000,000 years ago. Three cone types and trichromatic vision began in Old World monkeys.

fovea

Fovea has patches of only medium-wavelength or only long-wavelength cones. To improve acuity, fovea has few short-wavelength cones, because different colors focus at different distances. Fovea center has no short-wavelength cones [Curcio et al., 1991] [Roorda and Williams, 1999] [Williams et al., 1981] [Williams et al., 1991].

number

There are five million cones, mostly in fovea. Short-wavelength cones are mostly outside fovea.

size

Cones are smaller than rods.

pigment

Cone light-absorbing pigment is iodopsin. Rods have rhodopsin.

frequency

When rods saturate, cones have approximately same sensitivity to blue and red.

Just above cone threshold {mesopic vision, cone}, rods are more sensitive to short wavelengths, so blue colors are brighter but colorless. Retinal receptors do not detect pure or unmixed colors. Red light does not optimally excite one cone type but makes maximum excitation ratio between two cone types. Blue light excites short-wavelength cones and does not excite other cone types. Green light excites all cone types.

output

Cones send to one ON-center and one OFF-center midget ganglion cell.

dichromat

Most mammals, including cats and dogs, have two photopigments and two cone types {dichromat}. For dogs, one photopigment has maximum sensitivity at 429 nm, and one photopigment has maximum sensitivity at 555 nm. Early mammals and most mammals have maximum sensitivities at 424 nm and 560 nm.

monochromat

Animals can have only one photopigment and one cone type {monochromat} {cone monochromat}. They have limited color range. Animals can have only rods and no cones {rod monochromat} and cannot see color.

quadchromat

Reptiles and birds have four different photopigments {quadchromat}, with maximum sensitivities at near-ultraviolet 370 nm, 445 nm, 500 nm, and 565 nm. Reptiles and birds have yellow, red, and colorless oil droplets, which narrow each photopigment's wavelength range, except for the ultraviolet sensor.

tetrachromacy

Women can have two different long-wavelength cones {L-cone} {L photopigment}, one short-wavelength cone {S-cone} {S photopigment}, and one middle-wavelength cone {M-cone} {M photopigment}, and so have four different pigments {tetrachromacy}. Half of men have one or the other long-wavelength cone [Asenjo et al., 1994] [Jameson et al., 2001] [Jordan and Mollon, 1993] [Nathans, 1999].

trichromat

People with normal color vision have three different photopigments and cones {trichromat}.

1-Consciousness-Sense-Vision-Anatomy-Eye

eye

Land-vertebrate eyes {eye} are spherical and focus images on retina.

eye muscles

Eye muscles exert constant tension against movement, so effort required to move eyes or hold them in position is directly proportional to eye position. Midbrain oculomotor nucleus sends, in oculomotor nerve, to inferior oblique muscle below eyeball, superior rectus muscle above eyeball, inferior rectus muscle below eyeball, and medial rectus muscle on inside. Pons abducens nucleus sends, in abducens nerve, to lateral rectus muscle on outside. Caudal midbrain trochlear nucleus sends, in trochlear nerve, to superior oblique muscle around light path from above eyeball.

eye muscles: convergence

Eyes converge toward each other as object gets nearer than 10 meters.

eye muscles: zero-gravity

In zero-gravity environment, eye resting position shifts upward, but people are not aware of shift.

fiber projection

Removing embryonic eye and re-implanting it in rotated positions does not change nerve fiber projections from retina onto visual cortex.

simple eye

Horseshoe crab (Limulus) eye {simple eye} can only detect light intensity, not direction. Input/output equation uses relation between Green's function and covariance, because synaptic transmission is probabilistic.

inner eyelid

Most mammals and birds have tissue fold {inner eyelid} {palpebra tertia} that, when eye retracts, comes down from above eye to cover cornea. Inner eyelid has outside mucous membrane {conjunctiva}, inner-side lymphoid follicles, and lacrimal gland.

nictitating membrane

Reptiles and other vertebrates have transparent membrane {nictitating membrane}| that can cover and uncover eye.

cornea

Eye has transparent cells {cornea}| protruding in front. Cornea provides two-thirds of light refraction. Cornea has no blood vessels and absorbs nutrients from aqueous humor. Cornea has many nerves. Non-spherical-cornea astigmatism distorts vision. Corneas can be transplanted without rejection.

lens of eye

Elastic and transparent cell layers {lens, eye} {crystalline lens} attach to ciliary muscles that change lens shape. To become transparent, lens cells destroy all cell organelles, leaving only protein {crystallin} and outer membrane. Lens cells are all the same. They align and interlock [Weale, 1978]. Lens shape accommodates when objects are less than four feet away. Lens maximum magnification is 15.

iris of eye

Sphincter muscles in a colored ring {iris, eye}| close pupils. When iris is translucent, light scattering causes blue color. In mammals, autonomic nervous system controls pupil smooth muscles. In birds, striate muscles control pupil opening.

pupil of eye

Eye has opening {pupil}| into eye. In bright light, pupil is 2 mm diameter. At twilight, pupil is 10 mm diameter. Iris sphincter muscles open and close pupils. Pupil reflex goes from one eye to the other.

fundus

Eyeball has insides {fundus, eye}.

1-Consciousness-Sense-Vision-Anatomy-Eye-Fluid

aqueous humor

Liquid {aqueous humor}| can be in anterior chamber behind cornea and nourish cornea and lens.

vitreous humor

Liquid {vitreous humor}| fills main eyeball chamber between lens and retina.

1-Consciousness-Sense-Vision-Anatomy-Eye-Layers

sclera

Eyeball has outer white opaque connective-tissue layer {sclera}|.

trochlea

Eye regions {trochlea}| can have eye muscles.

choroid

Eyeball has inner blood-vessel layer {choroid}.

retinal pigment epithelium

Between retina and choroid is a cell layer {retinal pigment epithelium} (RPE) and Bruch's membrane. RPE cells maintain rods and cones by absorbing used molecules.

Bruch membrane

Retinal-pigment epithelium and membrane {Bruch's membrane} {Bruch membrane} are between retina and choroid.

1-Consciousness-Sense-Vision-Anatomy-Eye-Retina

retina

At back inner eyeball, visual receptor-cell layers {retina}| have 90 million rod cells, one million cones, and one million optic nerve axons.

cell types

Retina has 50 cell types.

cell types: clustering

Retina has clusters of same cone type. Retina areas can lack cone types. Fovea has few short-wavelength cones.

development

Retina grows by adding cell rings to periphery. Oldest eye part is at center, near where optic nerve fibers leave retina. In early development, contralateral optic nerve fibers cross over to connect to optic tectum. In early development, optic nerve fibers and brain regions have topographic maps. After maturation, axons can no longer alter connections.

processing

Retina cells separate information about shape, reflectance, illumination, and viewpoint.

blindspot

Ganglion-cell axons leave retina at region {blindspot}| medial to fovea [DeWeerd et al., 1995] [Finger, 1994] [Fiorani, 1992] [Komatsu and Murakami, 1994] [Komatsu et al., 2000] [Murakami et al., 1997].

color-receptor array

Cone cells are Long-wavelength, Middle-wavelength, or Short-wavelength. Outside fovea, cones can form two-dimensional arrays {color-receptor array} with L M S cones in equilateral triangles. Receptor rows have ...S-M-L-S-M-L-S... Receptor rows above, and receptor rows below, are offset a half step:
...-L-S-M-L-S-M-...
...S-M-L-S-M-L-S...
...-L-S-M-L-S-M-...

hexagons

Cones have six different cones around them in hexagons: three of one cone and three of other cone. No matter what order the three cones have, ...S-M-L-S-M..., ...S-L-M-S-L..., or ...M-L-S-M-L..., M and L are beside each other and S always faces L-M pair, allowing red+green brightness, red-green opponency, and yellow-blue opponency. L receptors work with three surrounding M receptors and three surrounding S receptors. M receptors work with three surrounding L receptors and three surrounding S receptors. S receptors work with six surrounding L+M receptor pairs, which are from three equilateral triangles, so each S has three surrounding L and three surrounding M receptors.

In all directions, fovea has alternating long-wavelength and middle-wavelength cones: ...-L-M-L-M-...

fovea

Primates have central retinal region {fovea}| that tracks motions and detects self-motion. Retinal periphery detects spatial orientation. Fovea contains 10,000 neurons in a two-degree circle. Fovea has no rods. Fovea center has no short-wavelength cones. Fovea has patches of only medium-wavelength cones or only long-wavelength cones. Fovea has no blood vessels, which pass around fovea.

inner plexiform layer

Retinal layers {inner plexiform layer} can have bipolar-cell and amacrine-cell axons and ganglion-cell dendrites. There are ten inner plexiform layers.

macula

Near retina center is a yellow-pigmented region {macula lutea}| {yellow spot}. Yellow pigment increases with age. If incident light changes spectra, people can briefly see macula image {Maxwell spot}.

1-Consciousness-Sense-Vision-Anatomy-Lateral Geniculate

achromatic channel

Lateral-geniculate-nucleus magnocellular neurons measure luminance {luminance channel, vision} {achromatic channel} {spectrally non-opponent channel}.

chromatic channel

Lateral-geniculate-nucleus parvocellular neurons measure colors {chromatic channel} {spectrally opponent channel}.

1-Consciousness-Sense-Vision-Anatomy-Midbrain

horizontal gaze center

Regions {horizontal gaze center}, near pons abducens nucleus, can detect right-to-left and left-to-right motions.

vertical gaze center

Regions {vertical gaze center}, near midbrain oculomotor nucleus, can detect up and down motions.

1-Consciousness-Sense-Vision-Physiology

vision physiology

Visual processing finds colors, features, parts, wholes, spatial relations, and motions {vision, physiology}. Brain first extracts elementary perceptual units, contiguous lines, and non-accidental properties.

properties: sizes

Observers do not know actual object sizes but only judge relative sizes.

properties: reaction speed

Reaction to visual perception takes 450 milliseconds [Bachmann, 2000] [Broca and Sulzer, 1902] [Efron, 1967] [Efron, 1970] [Efron, 1973] [Taylor and McCloskey, 1990] [Thorpe et al., 1996] [VanRullen and Thorpe, 2001].

properties: timing

Location perception is before color perception. Color perception is before orientation perception. Color perception is 80 ms before motion perception. If people must choose, they associate current color with motion 100 ms before. Brain associates two colors or motions before associating color and motion.

processes: change perception

Brain does not maintain scene between separate images. Perceptual cortex changes only if brain detects change. Perceiving changes requires high-level processing.

processes: contrast

Retina neurons code for contrast, not brightness. Retina compares point brightness with average brightness. Retinal-nerve signal strength automatically adjusts to same value, whatever scene average brightness.
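
Coding contrast rather than brightness can be sketched as dividing each point by the scene average, so scaling the illumination of the whole scene leaves the code unchanged. A minimal sketch with illustrative values:

```python
# Sketch: coding contrast, not brightness. Each signal is point
# brightness relative to the scene average, so the code is invariant
# to overall illumination level.

def contrast_code(scene):
    mean = sum(scene) / len(scene)
    return [point / mean for point in scene]

dim = [1.0, 2.0, 3.0]
bright = [10.0, 20.0, 30.0]  # same scene under 10x illumination
assert contrast_code(dim) == contrast_code(bright)
```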

processes: orientation response

High-contrast feature or object movements cause eye to turn toward object direction {orientation response, vision}.

processes: voluntary eye movements

Posterior parietal and pre-motor cortex plan and command voluntary eye movements [Bridgeman et al., 1979] [Bridgeman et al., 1981] [Goodale et al., 1986]. Stimulating superior-colliculus neurons can cause angle-specific eye rotation. Stimulating frontal-eye-field or other superior-colliculus neurons makes eyes move to specific locations, no matter from where eye started.

information

Most visual information comes from receptors near boundaries, which have large brightness or color contrasts. For dark-adapted eye, each absorbed photon supplies one information bit. At higher luminance, 10,000 photons make one bit.

blinking

People lower and raise eyelids {blinking}| every few seconds.

purpose

Eyelids close and open to lubricate eye [Gawne and Martin, 2000] [Skoyles, 1997] [Volkmann et al., 1980]. Blinking can be a reflex to protect eye.

rate

Blinking rate increases with anxiety, embarrassment, stress, or distraction, and decreases with concentration. Mind inhibits blinking just before anticipated events.

perception

Automatic blinks do not noticeably change scene [Akins, 1996] [Blackmore et al., 1995] [Dmytryk, 1984] [Grimes, 1996] [O'Regan et al., 1999] [Rensink et al., 1997] [Simons and Chabris, 1999] [Simons and Levin, 1997] [Simons and Levin, 1998] [Wilken, 2001].

constancy

Vision maintains constancies: size constancy, shape constancy, color constancy, and brightness constancy {constancy, vision}. Size constancy is accurate and learned.

eccentricity on retina

Scene features land on retina at distances {eccentricity, retina} {visual eccentricity} from fovea.

feature inheritance

Visual features can blend {feature inheritance} [Herzog and Koch, 2001].

filling-in

If limited or noisy stimuli come from space region, perception completes region boundaries and surface textures {filling-in}| {closure, vision}, using neighboring boundaries and surface textures.

perception

Filling-in always happens, so people never see regions with missing information. If region has no information, people do not notice region, only scene.

perception: conceptual filling-in

Brain perceives occluded object as whole-object figure partially hidden behind intervening-object ground {conceptual filling-in}, not as separate, unidentified shape beside intervening object.

perception: memory

Filling-in uses whole brain, especially innate and learned memories, as various neuron assemblies form and dissolve and excite and inhibit.

perception: information

Because local neural processing makes incomplete and approximate representations, typically with ambiguities and contradictions, global information uses marked and indexed features to build complete and consistent perception. Brain uses global information when local region has low receptor density, such as retina blindspot or damaged cells. Global information aids perception during blinking and eye movements.

processes: expansion

Surfaces recruit neighboring similar surfaces to expand homogeneous regions by wave entrainment. Contours align by wave entrainment.

processes: lateral inhibition

Lateral inhibition distinguishes and sharpens boundaries. Surfaces use constraint satisfaction to optimize edges and regions.

processes: spreading

Brain fills in using line completion, motion continuation, and color spreading. Brain fills areas and completes half-hidden object shapes. Blindspot filling-in maintains lines and edges {completion, filling-in}, preserves motion using area MT, and keeps color using area V4.

processes: surface texture

Surfaces have periodic structure and spatial frequency. Surface texture can expand to help filling in. Blindspot filling-in continues background texture using area V3.

processes: interpolation

Brain fills in using plausible guesses from surroundings and interpolation from periphery. For large damaged visual-cortex region, filling-in starts at edges and goes inward toward center, taking several seconds to finish [Churchland and Ramachandran, 1993] [Dahlbom, 1993] [Kamitani and Shimojo, 1999] [Pessoa and DeWeerd, 2003] [Pessoa et al., 1998] [Poggio et al., 1985] [Ramachandran, 1992] [Ramachandran and Gregory, 1991].

flicker fusion frequency

Stimuli blend if less than 200 milliseconds apart {flicker fusion frequency} [Efron, 1973] [Fahle, 1993] [Gowdy et al., 1999] [Gur and Snodderly, 1997] [Herzog et al., 2003] [Nagarajan et al., 1999] [Tallal et al., 1998] [Yund et al., 1983] [Westheimer and McKee, 1977].

Standard Observer

People have different abilities to detect color radiance. Typical people {Standard Observer} have maximum sensitivity at 555 nm and see brightness {luminance, Standard Observer} according to standard radiance weightings at different wavelengths. Brightness varies with luminance logarithm.
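
Luminance weights radiance by the Standard Observer's sensitivity at each wavelength. In the sketch below, the weights are illustrative stand-ins rather than the standard table, except that sensitivity peaks at 555 nm as stated above; the logarithmic brightness relation is also from the text.

```python
import math

# Sketch: luminance as sensitivity-weighted radiance; brightness as
# the luminance logarithm. Weights are illustrative stand-ins, with
# maximum sensitivity at 555 nm.

sensitivity = {510: 0.50, 555: 1.00, 610: 0.50}  # illustrative weights

def luminance(radiance_by_wavelength):
    return sum(sensitivity[wl] * r
               for wl, r in radiance_by_wavelength.items())

def brightness(lum, lum_reference=1.0):
    """Brightness varies with the luminance logarithm."""
    return math.log10(lum / lum_reference)

# Equal radiance at 555 nm looks brighter than at 610 nm:
assert luminance({555: 1.0}) > luminance({610: 1.0})
# Tenfold luminance adds one brightness unit:
assert abs(brightness(100.0) - brightness(10.0) - 1.0) < 1e-9
```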

variable resolution

In dim light, without focus on anything, black, gray, and white blobs, smaller in brighter light and larger in dimmer light, flicker on surfaces. In darkness, people see large-size regions slowly alternate between black and white. Brightest blobs are up to ten times brighter than background. In low-light conditions, people see three-degrees-of-arc circular regions, alternating randomly between black and white several times each second {variable resolution}. If eyes move, pattern moves. In slightly lighter conditions, people see one-degree-of-arc circular regions, alternating randomly between dark gray and light gray, several times each second. In light conditions, people see colors, with no flashing circles.

Flicker rate varies with activity. During relaxation, flicker rate is 4 to 20 Hz. At rates above 25 Hz, people cannot see flicker.

Flicker shows that sense qualities have elements.

causes

Variable-resolution size reflects sense-field dynamic building. Perhaps, fewer receptor numbers can respond to lower light levels. Perhaps, intensity modulates natural oscillation. Perhaps, rods have competitive inhibition and excitation [Hardin, 1988] [Hurvich, 1981].

visual search

Observers can look {visual search} for objects, features, locations, or times {target, search} in scenes or lists.

distractors

Other objects {distractor, search} are not targets. Search time is directly proportional to number of targets and distractors {set size, search}.

types

Searches {conjunction search} can be for feature conjunctions, such as both color and orientation. Conjunction searches {serial self-terminating search} can look at items in sequence until finding target. Speed decreases with number of targets and distractors.

Searches {feature search} can be for color, size, orientation, shadow, or motion. Feature searches are fastest, because mind searches objects in parallel.

Searches {spatial search} can be for feature conjunctions that have shapes or patterns, such as two features that cross. Mind performs spatial searches in parallel but can only search feature subsets {limited capacity parallel process}.
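
The serial/parallel contrast above can be sketched as search time versus set size. The per-item and base times below are illustrative constants, not measured values:

```python
# Sketch of search time vs set size for the two search modes.
# Per-item and base times are illustrative constants.

def serial_search_time_ms(set_size, per_item_ms=50.0):
    """Conjunction (serial self-terminating) search examines items in
    sequence; on average half the items are checked before the target
    is found, so time grows with set size."""
    return per_item_ms * (set_size + 1) / 2.0

def feature_search_time_ms(set_size, base_ms=200.0):
    """Feature search runs in parallel, so set size adds no time."""
    return base_ms

assert serial_search_time_ms(9) > serial_search_time_ms(3)
assert feature_search_time_ms(9) == feature_search_time_ms(3)
```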

guided search theory

A parallel process {preattentive stage} suggests candidates for a serial process {attentive stage} {guided search theory, search}.

1-Consciousness-Sense-Vision-Physiology-Binocular Vision

binocular vision

Vision combines output from both eyes {binocular vision}|. Cats, primates, and predatory birds have binocular vision. Binocular vision allows stereoscopic depth perception, increases light reception, and detects differences between camouflage and surface. During cortex-development sensitive period, what people see determines input pathways to binocular cells and orientation cells [Blakemore and Greenfield, 1987] [Cumming and Parker, 1997] [Cumming and Parker, 1999] [Cumming and Parker, 2000].

binocular summation

One stimulus can affect both eyes, and effects can add {binocular summation}.

disparity detector

Visual-cortex cells {disparity detector} can combine right and left eye outputs to detect relative position disparities. Disparity detectors receive input from same-orientation orientation cells at different retinal locations. Higher binocular-vision cells detect distance directly from relative disparities, without form or shape perception.

eye-of-origin

People using both eyes do not know which eye {eye-of-origin} saw something [Blake and Cormack, 1979] [Kolb and Braun, 1995] [Ono and Barbieto, 1985] [Pickersgill, 1961] [Porac and Coren, 1986] [Smith, 1945] [Helmholtz, 1856] [Helmholtz, 1860] [Helmholtz, 1867] [Helmholtz, 1962].

interocular transfer

Adaptation can transfer from one eye to the other {interocular transfer}.

1-Consciousness-Sense-Vision-Physiology-Contour

contour in vision

Boundaries {contour, vision} have brightness differences and are the most-important visual perception. Contours belong to objects, not background.

curved axes

Curved surfaces have perpendicular curved long and short axes. In solid objects, short axis is object depth axis and indicates surface orientation. Curved surfaces have dark edge in middle, where light and dark sides meet.

completion

Mind extrapolates or interpolates contour segments to make object contours {completion, contour}.

When looking only at object-boundary part, even young children see complete figures. Children see completed outline, though they know it is not actually there.

crowding

If background contours surround figure, figure discrimination and recognition fail.

relatability

Two line segments can belong to same contour {relatability}.

subjective contour

Perception extends actual lines to make imaginary figure edges {subjective contour}|. Subjective contours affect depth perception.

1-Consciousness-Sense-Vision-Physiology-Dark Adaptation

duplex vision

Rods and cones {duplex vision} operate in different light conditions.

photopic system

Vision has systems {photopic system} for daylight conditions.

scotopic system

Vision has systems {scotopic system} for dark or nighttime conditions.

mesopic vision

Seeing at dusk {mesopic vision, dark} {twilight vision} is more difficult and dangerous.

1-Consciousness-Sense-Vision-Physiology-Depth Perception

depth perception

Brain can find depth and distance {depth perception} {distance perception} in scenes, paintings, and photographs.

depth: closeness

Closer objects have higher edge contrast, more edge sharpness, position nearer scene bottom, larger size, overlap on top, and transparency. Higher edge contrast is most important. More edge sharpness is next most important. Position nearer scene bottom is more important for known eye-level. Transparency is least important. Nearer objects are redder.

depth: farness

Farther objects have smaller retinal size; are closer to horizon (if below horizon, they are higher than nearer objects); have lower contrast; are hazier, blurrier, and fuzzier with less texture details; and are bluer or greener. Nearer objects overlap farther objects and cast shadows on farther objects.

binocular depth cue: convergence

Focusing on near objects causes extraocular muscles to turn eyeballs toward each other, and kinesthesia sends this feedback to vision system. More tightening and stretching means nearer. Objects farther than ten meters cause no muscle tightening or stretching, so convergence information is useful only for distances less than ten meters.
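
The triangulation that convergence supports can be sketched as follows; the symmetric-fixation geometry and the interocular separation `interocular_m` (~6.5 cm) are illustrative assumptions, not measured values.

```python
import math

def distance_from_convergence(vergence_deg, interocular_m=0.065):
    """Estimate fixation distance from the vergence angle between the two
    sight-lines, assuming symmetric fixation straight ahead.
    interocular_m is an assumed eye separation (~6.5 cm)."""
    half_angle = math.radians(vergence_deg) / 2.0
    return (interocular_m / 2.0) / math.tan(half_angle)

# Vergence angle shrinks quickly with distance: beyond roughly ten meters
# the angle change is too small to signal, matching the cue's range limit.
```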

binocular depth cue: shadow stereopsis

For far objects, with very small retinal disparity, shadows can still have perceptibly different angles {shadow stereopsis} [Puerta, 1989], so larger angle differences are nearer, and smaller differences are farther.

binocular depth cue: stereopsis

If eye visual fields overlap, the two scenes differ by a linear displacement, due to different sight-line angles. For a visual feature, displacement is the triangle base, which has angles at each end between the displacement line and sight-line, allowing triangulation to find distance. At farther distances, displacement is smaller and angle differences from 90 degrees are smaller, so distance information is imprecise.
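
The triangulation above can be sketched with the standard pinhole-stereo simplification (the eye is not literally a pinhole camera); `baseline_m` and `focal_px` are illustrative assumptions.

```python
def depth_from_disparity(disparity_px, baseline_m=0.065, focal_px=800.0):
    """Pinhole-stereo triangulation: the displacement (disparity) between
    the two images varies inversely with distance. baseline_m (eye
    separation) and focal_px (focal length in pixels) are assumptions."""
    return baseline_m * focal_px / disparity_px

# Equal disparity steps span far more depth at small disparities (large
# distances), matching the imprecision of stereopsis for far objects.
```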

binocular depth cue: inference

Inference includes objects at edges of retinal overlap in stereo views.

monocular depth cue: aerial perspective

Higher scene contrast means nearer, and lower contrast means farther. Bluer means farther, and redder means nearer.

monocular depth cue: accommodation

Focusing on near objects causes ciliary muscles to tighten to increase lens curvature, and kinesthesia sends this feedback to vision system. More tightening and stretching means nearer. Objects farther than two meters cause no muscle tightening or stretching, so accommodation information is useful only for distances less than two meters.

monocular depth cue: blur

More blur means farther, and less blur means nearer.

monocular depth cue: color saturation

Bluer objects are farther, and redder objects are nearer.

monocular depth cue: color temperature

Bluer objects are farther, and redder objects are nearer.

monocular depth cue: contrast

Higher scene contrast means nearer, and lower contrast means farther. Edge contrast, edge sharpness, overlap, and transparency depend on contrast.

monocular depth cue: familiarity

People can have previous experience with objects and their size, so larger retinal size is closer, and smaller retinal size is farther.

monocular depth cue: fuzziness

Fuzzier objects are farther, and clearer objects are nearer.

monocular depth cue: haziness

Hazier objects are farther, and clearer objects are nearer.

monocular depth cue: height above and below horizon

Objects closer to horizon are farther, and objects farther from horizon are nearer. If object is below horizon, higher objects are farther, and lower objects are nearer. If object is above horizon, lower objects are farther, and higher objects are nearer.

monocular depth cue: kinetic depth perception

Objects becoming larger are moving closer, and objects becoming smaller are moving away {kinetic depth perception}. Kinetic depth perception is the basis for judging time to collision.

monocular depth cue: lighting

Light and shade have contours. Light is typically above objects. Light typically falls on nearer objects.

monocular depth cue: motion parallax

While looking at an object, if observer moves, other objects moving backwards are nearer than object, and other objects moving forwards are farther than object. Among objects farther than the fixated object, objects moving forward faster are farther. Among objects nearer than the fixated object, objects moving backward faster are nearer. Some birds use head bobbing to induce motion parallax. Squirrels move orthogonally to objects. While observer moves and looks straight ahead, objects moving backwards faster are closer, and objects moving backwards slower are farther.
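
Under a standard small-angle approximation, the parallax rate while fixating at distance D is proportional to (1/d - 1/D); this sketch assumes sideways observer motion and treats the constant of proportionality as 1.

```python
def parallax_rate(obj_dist, fix_dist, observer_speed=1.0):
    """Approximate angular speed of an object while the observer moves
    sideways at observer_speed and keeps fixating at fix_dist.
    Positive means motion opposite the observer (nearer than fixation);
    negative means motion with the observer (farther than fixation)."""
    return observer_speed * (1.0 / obj_dist - 1.0 / fix_dist)
```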

monocular depth cue: occlusion

Objects that overlap other objects {interposition} are nearer, and objects behind other objects are farther {pictorial depth cue}. Objects with occluding contours are farther.

monocular depth cue: peripheral vision

At the visual periphery, parallel lines curve, like the effect of a fish eye lens, framing the visual field.

monocular depth cue: perspective

By linear perspective, parallel lines converge, so, for same object, smaller size means farther distance.

monocular depth cue: relative movement

If objects physically move at same speed, objects with slower apparent motion are farther, and objects with faster apparent motion are nearer, for a stationary observer.

monocular depth cue: relative size

If two objects have the same shape and are judged to be the same, object with larger retinal size is closer.

monocular depth cue: retinal size

If observer has previous experience with object size, object retinal size allows calculating distance.

monocular depth cue: shading

Light and shade have contours. Shadows are typically below objects. Shade typically falls on farther objects.

monocular depth cue: texture gradient

Senses can detect gradients by difference ratios. Less fuzzy and larger surface-texture sizes and shapes are nearer, and more fuzzy and smaller are farther. Bluer and hazier surface texture is farther, and redder and less hazy surface texture is closer.

properties: precision

Depth-calculation accuracy and precision are low.

properties: rotation

Fixed object appears to revolve around eye if observer moves.

factors: darkness

In the dark, objects appear closer.

processes: learning

People learn depth perception and can lose depth-perception abilities.

processes: coordinates

Binocular depth perception requires only ground plane and eye point to establish coordinate system. Perhaps, sensations aid depth perception by building geometric images [Poggio and Poggio, 1984].

processes: two-and-one-half dimensions

ON-center-neuron, OFF-center-neuron, and orientation-column intensities build two-dimensional line arrays, then two-and-one-half-dimensional contour arrays, and then three-dimensional surfaces and texture arrays [Marr, 1982].

processes: three dimensions

Brain derives three-dimensional images from two-dimensional ones by assigning convexity and concavity to lines and vertices and making convexities and concavities consistent.

processes: triangulation model

Animals continually track distances and directions to distinctive landmarks.

continuity constraint

Adjacent points not at edges are on same surface and so at same distance {continuity constraint, depth}.

corresponding retinal points

Scenes land on right and left eye with same geometric shape, so feature distances and orientations are the same {corresponding retinal points}.

cyclopean stimulus

Brain stimuli {cyclopean stimulus} can result only from binocular disparity.

distance ratio

One eye can find object-size to distance ratio {distance ratio} {geometric depth}, using three object points. See Figure 1.

Eye fixates on object center point, edge point, and opposite-edge point. Assume object is perpendicular to sightline. Assume retina is planar. Assume that eye is spherical, rotates around center, and has calculable radius.

Light rays go from center point, edge point, and opposite edge point to retina. Using kinesthetic and touch systems and motor cortex, brain knows visual angles and retinal distances. Solving equations can find object-size to distance ratio.

When eye rotates, scenes do not change, except for focus. See Figures 2 and 3.

Calculating distances to space points

Vision cone receptors receive from a circular area of space that subtends one minute of arc (Figure 3). Vision neurons receive from a circular area of space that subtends one minute to one degree of arc.

To detect distance, neuron arrays receive from a circular area of space that subtends one degree of arc (Figure 4). For the same angle, circular surfaces at farther distances have longer diameters, bigger areas, and smaller circumference curvature.

Adjacent neuron arrays subtend the same visual angle and have retinal (and cortical) overlap (Figure 5). Retinal and cortical neuron-array overlap defines a constant length. Constant-length retinal-image size defines the subtended visual angle, which varies inversely with distance, allowing calculating distance (r = s / A) in one step.
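
The one-step distance calculation r = s / A can be sketched directly, under the small-angle approximation the text assumes; lengths are in meters, and the angle is converted to radians.

```python
import math

def distance_from_angle(length_m, angle_deg):
    """One-step distance estimate r = s / A for a known constant length s
    subtending visual angle A (in radians); valid for small angles."""
    return length_m / math.radians(angle_deg)
```

Because the subtended angle varies inversely with distance, halving the angle for the same constant length exactly doubles the estimated distance.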

Each neuron array sends to a register for a unique spatial direction. The register calculates distance and finds color. Rather than use multiple registers at multiple locations, as in neural networks or holography, a single register can place a color at the calculated distance in the known direction. There is one register for each direction and distance. Registers are not physical neuron conglomerations but functional entities.

divergence of eyes

Both eyes can turn outward {divergence, eye}, away from each other, as objects get farther. If divergence is successful, there is no retinal disparity.

Emmert law

Brain expands more distant objects in proportion to the more contracted retinal-image size, making apparent size increase with increasing distance {size-constancy scaling} {Emmert's law} {Emmert law}. Brain determines size-constancy scaling by eye convergence, geometric perspective, texture gradients, and image sharpness. Texture gradients decrease in size with distance. Image sharpness decreases with distance.
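
Size-constancy scaling can be sketched as a simple proportionality; `k` is an arbitrary constant, and this is an illustration of Emmert's law, not a measured model.

```python
def perceived_size(retinal_size, distance, k=1.0):
    """Emmert's law sketch: apparent size grows in proportion to apparent
    distance, compensating for the shrinking retinal image. An afterimage
    (fixed retinal size) looks twice as large projected on a surface
    twice as far away. k is an arbitrary scaling constant."""
    return k * retinal_size * distance
```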

triangulation by eye

Two eyes can measure relative distance to scene point, using geometric triangulation {triangulation, eye}. See Figure 1.

comparison

Comparing triangulations from two different distances does not give more information. See Figure 2.

movement

Moving eye sideways while tracking scene point can calculate distance from eye to point, using triangulation. See Figure 3.

Moving eye sideways while tracking scene points calibrates distances, because other scene points travel across retina. See Figure 4.

Moving eye from looking at object edge to looking at object middle can determine scene-point distance. See Figure 5.

Moving eye from looking at object edge to looking at object other edge at same distance can determine scene-point distance. See Figure 6.

uniqueness constraint

Scene features land on one retina point {uniqueness constraint, depth}, so brain stereopsis can match right-retina and left-retina scene points.

1-Consciousness-Sense-Vision-Physiology-Depth Perception-Cue

depth cue

Various features {depth cue}| {cue, depth} signal distance. Depth cues are accommodation, colors, color saturation, contrast, fuzziness, gradients, haziness, distance below horizon, linear perspective, movement directions, occlusions, retinal disparities, shadows, size familiarity, and surface textures.

types

Non-metrical depth cues can show relative depth, such as object blocking other-object view. Metrical depth cues can show quantitative information about depth. Absolute metrical depth cues can show absolute distance by comparison, such as comparing to nose size. Relative metrical depth cues can show relative distance by comparison, such as twice as far away.

aerial perspective

Vision has less resolution at far distances. Air has haze, smoke, and dust, which absorb redder light, so farther objects are bluer, have less light intensity, and have blurrier edges {aerial perspective}| than if air were transparent. (Air scatters blue more than red, but this effect is small except for kilometer distances.)

binocular depth cue

Brain perceives depth using scene points that stimulate right and left eyes differently {binocular depth cue} {binocular depth perception}. Eye convergences, retinal disparities, and surface-area sizes have differences.

surface area size

Brain can judge distance by overlap, total scene area, and area-change rate. Looking at surfaces, eyes see semicircles. See Figure 1. Front edge is semicircle diameter, and vision field above that line is semicircle half-circumference. For two eyes, semicircles overlap in middle. Closer surfaces make overlap less, and farther surfaces make overlap more. Total scene surface area is more for farther surfaces and less for closer surfaces. Movement changes perceived area at rate that depends on distance. Closer objects have faster rates, and farther objects have slower rates.

convergence of eyes

For fixation, both eyes turn toward each other {convergence, eye} {eye convergence} when objects are nearer than 10 meters. If convergence is successful, there is no retinal disparity. Greater eye convergence means object is closer, and lesser eye convergence means object is farther. See Figure 1.

intensity difference during movement

Brain can judge surface relative distance by intensity change during movement toward and away from surface {intensity difference during movement}. See Figure 1.

moving closer

Moving from point to half that distance increases intensity four times, because eye gathers four times more light at closer radius.

moving away

Moving from point to double that distance decreases intensity to one-fourth, because eye gathers one-fourth as much light at the farther radius.
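
The two cases above follow the inverse-square law; a minimal sketch of relative intensity at distance `dist` versus a reference distance:

```python
def relative_intensity(dist, ref_dist=1.0):
    """Inverse-square falloff: the eye gathers light from a point source
    in proportion to 1 / distance**2."""
    return (ref_dist / dist) ** 2
```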

moving sideways

Movement side to side and up and down changes intensity slightly by changing distance slightly. Perhaps, saccades and/or eyeball oscillations help determine distances.

memory

Experience with constant-intensity objects establishes distances.

accommodation

Looking at an object while moving it or the eye closer or farther causes lens-muscle tightening or loosening and makes the visual angle larger or smaller. If brain knows depth, movement toward and away can measure source intensity.

light ray

Scene points along same light ray project to same retina point. See Figure 2.

haze

Atmospheric haze affects light intensity. Haze attenuates light exponentially with distance (the Beer-Lambert law): light from an object twice as far away passes through twice as many haze particles, so the transmitted fraction is squared.

sound

Sound-intensity changes can find distances. Bats use sonar because it is too dark to see at night. Dolphins use sonar because water distorts light.

monocular depth cue

One eye can perceive depth {monocular depth cue}. Monocular depth cues are accommodation, aerial perspective, color, color saturation, edge, monocular movement parallax, occlusion, overlap, shadows, and surface texture.

occlusion and depth

Closer object can hide farther object {occlusion, cue}|. Perception knows many rules about occlusion.

stereoscopic depth

Using both eyes can make depth and three dimensions appear {stereoscopic depth} {stereoscopy} {stereopsis}. Stereopsis aids random shape perception. Stereoscopic data analysis is independent of other visual analyses. Monocular depth cues can cancel stereoscopic depth. Stereoscopy does not allow highly unlikely depth reversals or unlikely depths.

texture gradient

Features farther away are smaller than when closer, so surfaces have larger texture nearby and smaller texture farther away {texture gradient}.

1-Consciousness-Sense-Vision-Physiology-Eye Movements

drift of eye

During fixations, eye is not still but drifts irregularly {drift, eye} {eye drift} through several minutes of arc, over several fovea cones.

microsaccade

During fixations, eye is not still but moves in straight lines {microsaccade} over 10 to 100 fovea cones.

scanning

Eyes scan scenes {scanning, vision} in regular patterns along outlines or contours, looking for angles and sharp curves, which give the most shape information.

tremor of eye

During fixations, eye is not still but has tremor {eye tremor} {tremor, eye} over one or two fovea cones, as it also drifts.

1-Consciousness-Sense-Vision-Physiology-Eye Movements-Saccade

saccade

After fixations lasting 120 ms to 130 ms, eye moves {saccade}|, in 100 ms, to a new fixation position.

brain

Superior colliculus controls involuntary saccades. Brain controls saccades using fixed vectors in retinotopic coordinates and using endpoint trajectories in head or body coordinates [Bridgeman et al., 1979] [Bridgeman et al., 1981] [Goodale et al., 1986].

movement

People do not have saccades while following moving objects or turning head while fixating objects.

transformation

When eye moves from one fixation to another, brain translates whole image up to 100 degrees of arc. World appears to stand still while eyes move, probably because motor signals to move eyes cancel perceptual retinal movement signals.

perception

Automatic saccades do not noticeably change scene [Akins, 1996] [Blackmore et al., 1995] [Dmytryk, 1984] [Grimes, 1996] [O'Regan et al., 1999] [Rensink et al., 1997] [Simons and Chabris, 1999] [Simons and Levin, 1997] [Simons and Levin, 1998] [Wilken, 2001].

saccadic suppression

Brain does not block input from eye to brain during saccades, but cortex suppresses vision during saccades {saccadic suppression}, so image blurs less. For example, people cannot see their eye movements in mirrors.

1-Consciousness-Sense-Vision-Physiology-Focusing

accommodation

In land-vertebrate eyes, flexible lens focuses {accommodation, vision} image by changing surface curvature using eye ciliary muscles. In fish, an inflexible lens moves backwards and forwards, as in cameras. Vision can focus image on fovea, by making thinnest contour line and highest image-edge gradient [Macphail, 1999].

process

To accommodate, lens muscles start relaxed, with no accommodation. Brain tightens lens muscles and stops at highest spatial-frequency response.

distance

Far objects require no eye focusing. Objects within four feet require eye focusing to reduce blur. Brain can judge distance by muscle tension, so one eye can measure distance. See Figure 1.

Pinhole camera can focus scene, but eye is not pinhole camera. See Figure 2.

far focus

If accommodation is for point beyond object, magnification is too low, edges are blurry, and spatial-frequency response is lower, because scene-point light rays land on different retina locations, before they meet at focal point. Focal point is past retina.

near focus

If accommodation is for point nearer than object, magnification is too high, edges are blurry, and spatial-frequency response is lower, because scene-point light rays meet at focal point and then land on different retina locations. Focal point is in eye middle.

binocular disparity

Right and left retinas see different images {retinal disparity} {binocular disparity}| [Dacey et al., 2003] [DeVries and Baylor, 1997] [Kaplan, 1991] [Leventhal, 1991] [MacNeil and Masland, 1998] [Masland, 2001] [Polyak, 1941] [Ramón y Cajal, 1991] [Rodieck et al., 1985] [Rodieck, 1998] [Zrenner, 1983].

correlation

Brain can correlate retinal images to pair scene retinal points and then find distances and angles.

fixation

Assume eye fixates on a point straight-ahead. Light ray from scene point forms horizontal azimuthal angle and vertical elevation angle with straight-ahead direction. With no eye convergence, eye azimuthal and elevation angles from scene point differ {absolute disparity}. Different scene points have different absolute disparities {relative disparity}.

When both eyes fixate on same scene point, eye convergence places scene point on both eye foveas at corresponding retinal points, azimuthal and elevation angles are the same, and absolute disparity is zero. See Figure 1. After scene-point fixation, azimuth and elevation angles differ for all other scene points. Brain uses scene-point absolute-disparity differences to find relative disparities to estimate relative depth.

horopter

Points from horopter land on both retinas with same azimuthal and elevation angles and same absolute disparities. These scene points have no relative disparity and so have single vision. Points not close to horopter have different absolute disparities, have relative disparity, and so have double vision. See Figure 2.

location

With eye fixation on far point between eyes and with eye convergence, if scene point is straight-ahead, between eyes, and nearer than fixation distance, point lands outside fovea, for both eyes. See Figure 3. For object closer than fixation plane, focal point is after retina {crossed disparity}.

With eye fixation on close point between eyes and eye convergence, if scene point is straight-ahead, between eyes, and farther than fixation distance, point lands inside fovea, for both eyes. For object farther than fixation plane, focal point is before retina {uncrossed disparity}.

Two eyes can measure relative distance to point by retinal disparity. See Figure 4.

motion

Retinal disparity and motion change are equivalent perceptual problems, so finding distance from retinal disparity and finding lengths and shape from motion changes use similar techniques.

fixation plane

Eye focuses at a distance, through which passes a vertical plane {fixation plane} {plane of fixation}, perpendicular to sightline. From that plane's points, eye convergence can make right and left eye images almost correspond, with almost no disparity. From points in a circle {Vieth-Müller circle} in that plane, eye convergence can make right and left eye images have zero disparity.

horopter

After eye fixation on scene point and eye convergence, an imaginary sphere {horopter} passes through both eye lenses and fixation point. Points from horopter land on both retinas with same azimuthal and elevation angles and same absolute disparities. These scene points have no relative disparity and so have single vision.

Panum fusion area

Brain fuses scene features that are inside distance from horopter {Panum's fusion area} {Panum fusion area} {Panum's fusional area}, into one feature. Brain does not fuse scene features outside Panum's fusional area, but features still register in both eyes, so feature appears double.

1-Consciousness-Sense-Vision-Physiology-Intensity

intensity in vision

Color varies in energy flow per unit area {intensity, vision}. Vision can detect very low intensity. People can see over ten-thousand-fold light intensity range. Vision is painful at high intensity.

sensitivity

People can perceive one-percent intensity differences. Sensitivity improves in dim light when using both eyes.

receptors

Not stimulating long-wavelength or middle-wavelength receptor reduces brightness. For example, extreme violets are less bright than other colors.

temporal integration

If light has constant intensity for less than 100 ms, brain perceives it as becoming less bright. If light has constant intensity for 100 ms to 300 ms, brain perceives it as becoming brighter. If light has constant intensity for longer than 300 ms, brain perceives it as maintaining same brightness.

unchanging image

After people view unchanging images for two or three seconds, image fades and becomes dark gray or black. If object contains sharp boundaries between highly contrasting areas, object reappears intermittently.

bleaching

Eyes blinded by bright light recover in 30 minutes, as eye chemicals become unbleached.

Bloch law

If stimulus lasts less than 0.1 second, brightness is product of intensity and duration {Bloch's law} {Bloch law}.
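
A sketch of Bloch's law, assuming a critical duration of 0.1 second beyond which duration no longer contributes:

```python
def bloch_brightness(intensity, duration_s, critical_s=0.1):
    """Bloch's law: below the critical duration (~0.1 s), perceived
    brightness follows the intensity-duration product; beyond it,
    extra duration adds nothing (a simplified sketch)."""
    return intensity * min(duration_s, critical_s)

# A brief intense flash and a longer dim flash with the same
# intensity-duration product look equally bright.
```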

brightness

Phenomenal brightness {brightness} {luminosity} relates to logarithm of total stimulus-intensity energy flux from all wavelengths. Surfaces that emit more lumens are brighter. On Munsell scale, brightness increases by 1.5 units if lumens double.
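
The stated Munsell-scale behavior (+1.5 units per doubling of lumens) implies a base-2 logarithm; `ref_lumens` is an assumed reference level for illustration.

```python
import math

def brightness_units(lumens, ref_lumens=1.0):
    """Logarithmic brightness on the Munsell-style scale described above:
    +1.5 units per doubling of luminous flux (an assumed calibration)."""
    return 1.5 * math.log2(lumens / ref_lumens)
```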

properties: reflectance

Surfaces that reflect different spectra but emit same number of lumens are equally bright.

properties: reflectivity

For spectral colors, brightness is logarithmic, not linear, with reflectivity.

factors: adaptation

Brightness depends on eye adaptation state. Parallel pathways calculate brightness. One pathway adapts to constant-intensity stimuli, and the other does not adapt. If two same-intensity flashes start at same time, briefer flash looks dimmer than longer flash. If two same-intensity flashes end at same time, briefer flash looks brighter than longer flash {temporal context effect} (Sejnowski). Visual system uses visual-stimulus timing and spatial context to calculate brightness.

factors: ambient light

Brightness is relative and depends on ambient light.

factors: color

Light colors change less, and dark colors change more, as source brightness increases. Light colors change less, and dark colors change more, as color saturation decreases.

factors: mental state

Brightness depends on mental state.

brightness control

Good brightness control increases all intensities by the same amount. Consciousness cannot control brightness directly. Television Brightness control sets "picture" level by multiplying the input signal {gain, brightness}. If gain is too low, high-input signals have low intensity, and many low-input signals map to the same black. If gain is too high, low-input signals have high intensity, and many high-input signals map to the same white. Television Brightness control increases the ratio between black and white and so really changes contrast.

contrast

Detected light has difference between lowest and highest intensity {contrast, vision}.

contrast control

Good contrast control sets black to zero intensity while decreasing or increasing maximum intensity. Consciousness cannot control contrast directly. Television Contrast control sets "black level" by shifting the lowest intensity, which shifts the whole intensity scale. It adjusts the input signal to make zero intensity. If the shift is too low, lower input signals all result in zero intensity. If the shift is too high, the lowest input signal results in greater than zero intensity. Television Contrast control changes all intensities by the same amount and so really changes brightness.
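
The distinction between a multiplicative gain control and an additive offset control can be sketched on a list of intensity levels; the function names are illustrative.

```python
def apply_gain(levels, gain):
    """Multiplicative control: scales every level, changing the
    black-to-white ratio (contrast)."""
    return [gain * v for v in levels]

def apply_offset(levels, offset):
    """Additive control: shifts every level equally, changing the
    black level (brightness) but not the differences between levels."""
    return [v + offset for v in levels]
```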

contrast threshold

Mind can detect small intensity difference {contrast threshold} between light and dark surface area.

contrast sensitivity function

Larger objects have smaller contrast thresholds. Stimulus-size spatial frequency determines contrast-threshold reciprocal {contrast sensitivity function} (CSF). Contrast-threshold reciprocal is large when contrast threshold is small.

edge enhancement

Visual system increases brightness contrast across edge {edge enhancement}, making lighter side lighter and darker side darker.

fading

If eyes are still with no blinking, scene fades {fading} [Coppola and Purves, 1996] [Pritchard et al., 1960] [Tulunay-Keesey, 1982].

Mach band

Human visual systems increase brightness contrast across edges, making lighter side lighter and darker side darker {Mach band}.

1-Consciousness-Sense-Vision-Physiology-Intensity-Luminance

luminance

Leaving, arriving, or transmitted luminous flux in a direction, divided by surface area and solid angle {luminance}. Luminance is a constant times the sum over frequencies of spectral radiant energy times the long-wavelength-cone and middle-wavelength-cone spectral-sensitivity functions [Autrum, 1979] [Segall et al., 1966]. Luminance relates to brightness. Lateral-geniculate-nucleus magnocellular-cell layers {luminance channel, LGN} measure luminance. Light power (radiance) and energy differ at different frequencies {spectral power distribution}, typically sampled in 31 bands 10 nm wide between 400 nm and 700 nm.
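
The sensitivity-weighted sum can be sketched with a few rounded photopic-sensitivity samples V(lambda); the values are approximate, and 683 lm/W is the standard luminous-efficacy constant.

```python
# Approximate photopic luminosity samples V(lambda), CIE 1931, rounded.
V = {450: 0.038, 500: 0.323, 555: 1.000, 600: 0.631, 650: 0.107}

def luminous_flux(spectral_power_w):
    """Luminous flux (lumens) = 683 * sum over wavelength bands of
    radiant power (watts) weighted by spectral sensitivity V(lambda)."""
    return 683.0 * sum(p * V[lam] for lam, p in spectral_power_w.items())

# One watt at 555 nm (peak sensitivity) gives 683 lm; the same watt at
# 650 nm gives far less, since the eye is less sensitive there.
```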

luminous flux

Light {luminous flux} can shine with a spectrum of wavelengths.

illuminant

Light sources {illuminant} shine light on observed surfaces.

radiant flux

Light {radiant flux} can emit or reflect with a spectrum of wavelengths.

radiance

Radiant flux in a direction divided by surface area and solid angle {radiance}.

irradiance

Radiant flux divided by surface area {irradiance}.

1-Consciousness-Sense-Vision-Physiology-Motion

motion perception

Brain can perceive motion {motion perception} {motion detector}. Motion analysis is independent of other visual analyses.

properties: adaptation

Motion detector neurons adapt quickly.

properties: direction

Most cortical motion-detector neurons detect motion direction.

properties: distance

Most cortical motion-detector neurons are for specific distance.

properties: fatigue

Motion-detector neurons can fatigue.

properties: location

Most cortical motion-detector neurons are for specific space direction.

properties: object size

Most cortical motion-detector neurons are for specific object spot or line size. To detect larger or smaller objects, motion-detector neurons have larger or smaller receptive fields.

properties: rotation

To have right and left requires asymmetry, such as dot or shape. In rotation, one side appears to go backward while the other goes forward, which can make the whole object appear to stand still.

properties: speed

Most cortical motion-detector neurons detect motion speed.

processes: brain

Area-V5 neurons detect different speed motions in different directions at different distances and locations for different object spot or line sizes. Motion detectors are for one direction, object size, distance, and speed relative to background. Other neurons detect expansion, contraction, and right or left rotation [Thier et al., 1999].

processes: frame

Spot motion from one place to another is like appearance at location and then appearance at another location. Spot must excite motion-detector neuron for that direction and distance.

processes: opposite motions

Motion detectors interact, so motion inhibits opposed motion, making motion contrasts. For example, motion in one direction excites motion detectors for that direction and inhibits motion detectors for opposite direction.

processes: retina image speed

Retinal radial-image speed relates to object distance.

processes: timing

Motion-detector-neuron comparison is not simultaneous addition but has delay or hold from first neuron to wait for second excitation. Delay can be long, with many intermediate neurons, far-apart neurons, or slow motion, or short, with one intermediate neuron, close neurons, or fast motion.

processes: trajectory

Motion detectors work together to detect trajectory or measure distances, velocities, and accelerations. Higher-level neurons connect motion detection units to detect straight and curved motions (Werner Reichardt). As motion follows trajectory, memory shifts to predict future motions.

biological motion

Animal species have movement patterns {biological motion}. Distinctive motion patterns, such as falling leaf, pouncing cat, and swooping bat, allow object recognition and future position prediction.

looming response

Vision can detect that surface is approaching eye {looming response}. Looming response helps control flying and mating.

smooth pursuit

For moving objects, eyes keep object on fovea, then fall behind, then jump to put object back on fovea {smooth pursuit}. Smooth pursuit is automatic. People cannot voluntarily use smooth pursuit. Smooth pursuit happens even if people have no sensations of moving objects [Thiele et al., 2002].

Theory of Body

Three-month-old infants understand {Theory of Body} that when moving objects hit other objects, other objects move. Later, infants understand {Theory of Mind Mechanism} self-propelled motion and goals. Later, infants understand {Theory of Mind Mechanism-2} how mental states relate to behaviors. Primates can understand that acting on objects moves contacted objects.

1-Consciousness-Sense-Vision-Physiology-Motion-Parallax

motion parallax

Head or body movement causes scene retinal displacement. Nearer objects displace more, and farther objects displace less {motion parallax}| {movement parallax}. If eye moves to right while looking straight-ahead, objects appear to move to left. See Figure 1.

Nearer objects move greater visual angle. Farther objects move smaller visual angle and appear almost stationary. See Figure 2.

movement sequence

Object sequence can change with movement. See Figure 3.

depth

Brain can use geometric information about two different positions at different times to calculate relative object depth. Brain can also use geometric information about two different positions at same time, using both eyes.

monocular movement parallax

While observer is moving, nearer objects seem to move backwards while farther ones move in same direction as observer {monocular movement parallax}.

1-Consciousness-Sense-Vision-Physiology-Motion-Spots

aperture problem

When viewing moving object through small opening, motion direction can be ambiguous {aperture problem}, because moving spot or two on-off spots can trigger motion detectors. Are both spots in window aperture same object? Motion detectors solve the problem by finding shortest-distance motion.

apparent motion

When people see objects, first at one location, then very short time later at another location, and do not see object anywhere between locations, first object seems to move smoothly to where second object appears {apparent motion}|.

correspondence problem

Moving spot triggers motion detectors for two locations.

two locations and spot

How does brain associate two locations with one spot {correspondence problem, motion}? Brain follows spot from one location to next unambiguously. Tracking moving objects requires remembering earlier features and matching with current features. Vision can try all possible matches and, through successive iterations, find matches that yield minimum total distance between presentations.
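The minimum-total-distance matching described above can be sketched as a brute-force search over pairings (the spot coordinates are illustrative; vision presumably uses iterative local constraints rather than exhaustive search):

```python
import itertools
import math

def match_spots(frame1, frame2):
    """Try every pairing of frame-1 spots to frame-2 spots and keep the
    pairing with minimum total travel distance between presentations."""
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(frame2))):
        cost = sum(math.dist(frame1[i], frame2[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

# each spot moves 1 unit right; shortest-distance matching keeps identities
print(match_spots([(0, 0), (5, 0)], [(1, 0), (6, 0)]))  # (0, 1)
```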

location and spot

Turning one spot on and off can trigger same motion detector. How does brain associate detector activation at different times with one spot? Brain assumes same location is same object.

processes: three-dimensional space

Motion detectors are for specific locations, distances, object sizes, speeds, and directions. Motion-detector array represents three-dimensional space. Space points have spot-size motion detectors.

processes: speed

Brain action pathway is faster than object-recognition pathway. Brain calculates eye movements faster than voluntary movements.

constraints: continuity constraint

Adjacent points not at edges are at same distance from eye {continuity constraint, vision}.

constraints: uniqueness constraint

Scene features land on one retinal location {uniqueness constraint, vision}.

constraints: spatial frequency

Scene features have different left-retina and right-retina positions. Retina can use low resolution, with low spatial frequency, to analyze big regions and then use higher and higher resolutions.

phi phenomenon

If an image or light spot appears on a screen and then a second image appears 0.06 seconds later at a randomly different location, people perceive motion from first location to second location {phi phenomenon}. If an image or light spot blinks on and off slowly and then a second image appears at a different location, people see motion. If a green spot blinks on and off slowly and then a red spot appears at a different location, people see motion, and dot appears to change color halfway between locations.

1-Consciousness-Sense-Vision-Physiology-Motion-Defined

luminance-defined object

Objects {luminance-defined object}, for example bright spots, can contrast in brightness with background. People see luminance-defined objects move by mechanism that differs from texture-defined object-movement mechanism. Luminance-defined objects have defined edges.

texture-defined object

Objects {texture-defined object} {contrast-defined object} can contrast in texture with background. People see luminance-defined objects move by mechanism that differs from texture-defined object-movement mechanism. Contrast changes in patterned ways, with no defined edges.

1-Consciousness-Sense-Vision-Physiology-Motion-Order

first-order motion

Luminance changes indicate motion {first-order motion}.

second-order motion

Contrast and texture changes indicate motion {second-order motion}.

1-Consciousness-Sense-Vision-Physiology-Motion-Optic Flow

visual flow

Incoming visual information is continuous flow {visual flow}| {optical flow, vision} {optic flow} that brain can analyze for constancies, gradients, motion, and static properties. As head or body moves, head moves through stationary environment. Optical flow reveals whether one is in motion or not. Optical flow reveals planar surfaces. Optical flow is texture movement across eye as animals move.

radial expansion

Optic flow has a point {focus of expansion} (FOE) {expansion focus} where horizon meets motion-direction line. All visual features seem to come out of this straight-ahead point as observer moves closer, making radial movement pattern {radial expansion} [Gibson, 1966] [Gibson, 1979].

time to collision

Optic flow has information {tau, optic flow} that signals how long until something hits people {time to collision} (TTC) {collision time}. Tau is ratio between retinal-image size and retinal-image-size expansion rate. Tau is directly proportional to time to collision.
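The tau ratio can be checked against a small-angle constant-speed approach model (the object size, distance, and closing speed are arbitrary illustrative values):

```python
def tau(image_size, expansion_rate):
    """Tau: retinal-image size divided by its expansion rate; for constant
    approach speed this equals the time to collision."""
    return image_size / expansion_rate

s, d, v = 0.5, 10.0, 2.0     # object size (m), distance (m), closing speed (m/s)
theta = s / d                # small-angle image size
theta_dot = s * v / d ** 2   # image expansion rate
print(tau(theta, theta_dot))  # about 5.0 seconds, i.e. d / v
```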

1-Consciousness-Sense-Vision-Physiology-Motion-Throw And Catch

Throwing and Catching

Mammals can throw and catch {Throwing and Catching}.

Animal Motions

Animals can move in direction, change direction, turn around, and wiggle. Animals can move faster or slower. Animals move over horizontal ground, climb up and down, jump up and down, swim, dive, and fly.

Predators and Prey

Predators typically intercept moving prey, trying to minimize separation. In reptiles, optic tectum controls visual-orientation movements used in prey-catching behaviors. Prey typically runs away from predators, trying to maximize separation. Animals must account for accelerations and decelerations.

Gravity and Motions

Animals must account for gravity as they move and catch. Some hawks free-fall straight down to surprise prey. Seals can catch thrown balls and can throw balls to targets. Dogs can catch thrown balls and floating frisbees. Cats raise themselves on hind legs to trap or bat thrown-or-bouncing balls with front paws.

Mammal Brain

Reticular formation, hippocampus, and neocortex are only in mammals. Mammal superior colliculus can integrate multisensory information at same spatial location [O'Regan and Noë, 2001]. In mammals, dorsal vision pathway indicates object locations, tracks unconscious motor activity, and guides conscious actions [Bridgeman et al., 1979] [Rossetti and Pisella, 2002] [Ungerleider and Mishkin, 1982] [Yabuta et al., 2001] [Yamagishi et al., 2001].

Allocentric Space

Mammal dorsal visual system converts spatial properties from retinotopic coordinates to spatiotopic coordinates. Using stationary three-dimensional space as fixed reference frame simplifies trajectories perceptual variables. Most motions are two-dimensional rather than three-dimensional. Fixed reference frame separates gravity effects from internally generated motions. Internally generated motion effects are straight-line motions, rather than curved motions.

Human Throwing and Shooting

Only primates can throw, because they can stand upright and have suitable arms and hands. From 45,000 to 35,000 years ago, Homo sapiens and Neanderthal Middle-Paleolithic hunter-gatherers cut and used wooden spears. From 15,000 years ago, Homo sapiens Upper Paleolithic hunter-gatherers cut and used wooden arrows, bows, and spear-throwers. Human hunter-gatherers threw and shot over long trajectories.

Human Catching

Geometric Invariants: Humans can catch objects traveling over long trajectories. Dogs and humans use invariant geometric properties to intercept moving objects.

Trajectory Prediction: To catch baseballs, eyes follow ball while people move toward position where hand can reach ball. In the trajectory prediction strategy [Saxberg, 1987], fielder perceives ball initial direction, velocity, and perhaps acceleration, then computes trajectory and moves straight to where hand can reach ball.

Acceleration Cancellation: When catching ball coming towards him or her, fielder must run under ball so ball appears to move upward at constant speed. In the optical-acceleration-cancellation hypothesis [Chapman, 1968], fielder motion toward or away from ball cancels ball's perceived vertical acceleration, yielding constant upward optical speed. If ball appears to accelerate vertically, ball will land beyond fielder. If ball appears to decelerate vertically, ball will land short of fielder. Ball rises until caught, because baseball is always above horizon, far objects are near horizon, and near objects are high above horizon.
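Chapman's cancellation condition can be verified with a projectile sketch: for a fielder standing exactly at the landing point, the tangent of the ball's optical elevation angle grows at a constant rate (the launch speeds and gravity value are illustrative assumptions):

```python
g, v, vy = 9.8, 15.0, 20.0  # gravity, horizontal speed, vertical speed
T = 2 * vy / g              # total flight time
x_fielder = v * T           # fielder stands at the landing point

def tan_elevation(t):
    """Tangent of ball's optical elevation seen by the fielder at time t."""
    height = vy * t - 0.5 * g * t * t
    return height / (x_fielder - v * t)

# change in tan(elevation) over successive intervals is constant, so the
# fielder sees no optical acceleration and need not move
rates = [tan_elevation(t + 0.1) - tan_elevation(t) for t in (0.5, 1.5, 2.5)]
print(max(rates) - min(rates) < 1e-9)  # True
```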

Transverse Motion: Fielder controls transverse motion independently of radial motion. When catching ball toward right or left, fielder moves transversely to ball path, holding ball-direction and fielder-direction angle constant.

Linear Trajectory: In linear optical trajectory [McBeath et al., 1995], when catching ball to left or right, fielder runs in a curve toward ball, so ball rises in optical height, not to right or left. Catchable balls appear to go straight. Short balls appear to curve downward. Long balls appear to curve upward. Ratio between ball elevation and azimuth angles stays constant. Fielder coordinates transverse and radial motions. Linear optical trajectory is similar to simple predator-tracking perceptions. Dogs use the linear optical trajectory method to catch frisbees [Shaffer et al., 2004].

Optical Acceleration: Plotting optical-angle tangent changes over time, fielders appear to use optical-acceleration information to catch balls [McLeod et al., 2001]. However, optical trajectories mix fielder motions and ball motions.

Perceptual Invariants: Optical-trajectory features can be invariant with respect to fielder motions. Fielders catch fly balls by controlling ball-trajectory perceptions, such as lateral displacement, rather than by choosing how to move [Marken, 2005].

1-Consciousness-Sense-Vision-Physiology-Number Perception

number perception

Brain can count {number perception}. Number perception can relate to time-interval measurement, because both measure number of units [Dehaene, 1997].

accumulator model

Number perception can add energy units to make sum {accumulator model} [Dehaene, 1997].
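A minimal sketch of the accumulator idea, assuming each perceived item adds roughly one noisy unit of activation (the noise level is an arbitrary illustrative parameter):

```python
import random

def accumulate(n_items, noise=0.1):
    """Accumulator model sketch: each item adds about one unit of energy;
    numerosity is read off the noisy running total."""
    return sum(random.gauss(1.0, noise) for _ in range(n_items))

random.seed(1)
print(round(accumulate(7)))  # an estimate near 7
```

Larger counts accumulate more noise, which fits the observation that numerosity judgments get less precise as numbers grow.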

numeron list model

Number perception can associate objects with ordered-symbol list {numeron list model} [Dehaene, 1997].

object file model

Number perception can use mental images in arrays, so objects are separate {object file model} [Dehaene, 1997].

1-Consciousness-Sense-Vision-Physiology-Size

acuity vision

Vision detects smallest visual angle {visual acuity} {acuity, vision}.

aliasing

If they sample too few lines {undersampling}, people estimate grating spatial frequency incorrectly {aliasing}.
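Aliasing can be demonstrated with point samples of a sinusoidal grating: sampled once per degree, a 1.25 cycles/degree grating is indistinguishable from a 0.25 cycles/degree grating (the sampling grid and frequencies are illustrative values):

```python
import math

def sample(freq_cpd, step_deg, n=8):
    """Sample a sinusoidal grating of freq_cpd cycles/degree every
    step_deg degrees, returning n samples."""
    return [math.sin(2 * math.pi * freq_cpd * i * step_deg) for i in range(n)]

a = sample(1.25, 1.0)  # fine grating, undersampled
b = sample(0.25, 1.0)  # coarse grating, same sample values
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```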

cortical magnification

Visual angles land on retinal areas, which send to larger visual-cortex surface areas {cortical magnification}.

twenty-twenty

Good vision means that people can see at 20 feet what perfect-vision people can detect at 20 feet {twenty-twenty}. In contrast, 20-40 means that people can see at 20 feet what perfect-vision people can detect at 40 feet.

visual angle

Scene features have diameter, whose ends define rays that go to eye-lens center to form angle {visual angle}.
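The construction can be written as a formula. A sketch (the Moon's size and distance are rounded illustrative values):

```python
import math

def visual_angle_deg(diameter, distance):
    """Visual angle subtended by a feature: 2 * atan(diameter / (2 * distance)),
    with diameter and distance in the same units."""
    return math.degrees(2 * math.atan(diameter / (2 * distance)))

# the full Moon: roughly 3,474 km across at roughly 384,400 km
print(round(visual_angle_deg(3474, 384400), 2))  # about 0.52 degrees
```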

1-Consciousness-Sense-Vision-Physiology-Texture Perception

texture perception

Visual perceptual processes can detect local surface properties {surface texture} {texture perception} [Rogers and Collett, 1989] [Yin et al., 1997].

surface texture

Surface textures are point and line patterns, with densities, locations, orientations, and gradients. Surface textures have point and line spatial frequencies [Bergen and Adelson, 1988] [Bülthoff et al., 2002] [Julesz, 1981] [Julesz, 1987] [Julesz and Schumer, 1981] [Lederman et al., 1986] [Malik and Perona, 1990].

occipital lobe

Occipital-lobe complex and hypercomplex cells detect points, lines, surfaces, line orientations, densities, and gradients and send to neuron assemblies that detect point and line spatial frequencies [DeValois and DeValois, 1988] [Hubel and Wiesel, 1959] [Hubel and Wiesel, 1962] [Hubel, 1988] [Livingstone, 1998] [Spillman and Werner, 1990] [Wandell, 1995] [Wilson et al., 1990].

similar statistics

Similar surface textures have similar point and line spatial frequencies and first-order and second-order statistics [Julesz and Miller, 1962].

gradients

Texture gradients are proportional to surface slant, surface tilt, object size, object motion, shape constancy, surface smoothness, and reflectance.

gradients: object

Constant texture gradient indicates one object. Similar texture patterns indicate same surface region.

gradients: texture segmentation

Brain can use texture differences to separate surface regions.

speed

Brain detects many targets rapidly and simultaneously to select and warn about approaching objects. Brain can detect textural changes in less than 150 milliseconds, before attention begins.

machine

Surface-texture detection can use point and line features, such as corner detection, scale-invariant features (SIFT), and speeded-up robust features (SURF) [Wolfe and Bennett, 1997]. For example, in computer vision, the Gradient Location-Orientation Histogram (GLOH) SIFT descriptor uses radial grid locations and gradient angles, then finds principal components, to distinguish surface textures [Mikolajczyk and Schmid, 2005].

texel

Surfaces have small regular repeating units {texel}.

texton

Texture perception uses three local-feature types {texton}: elongated blobs {line segment, texton}, blob ends {end-point}, and blob crossings {texture, texton}. Visual-cortex simple and complex cells detect elongated blobs, terminators, and crossings.

search

Texture perception searches in parallel for texton type and density changes.

attention

Texture discrimination precedes attention.

For texton changes, brain calls attention processes.

similarity

If elongated blobs are same, because blob terminators total same number, texture is same.

statistics

Brain uses first-order texton statistics, such as texton type changes and density gradients, in texture perception.
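First-order statistics ignore feature arrangement: only texton types and their counts matter. A sketch with hypothetical texton labels:

```python
from collections import Counter

def texton_stats(patch):
    """First-order texton statistics: counts of each local feature type,
    ignoring where in the patch each feature sits."""
    return Counter(patch)

a = texton_stats(["blob", "end", "blob", "cross"])
b = texton_stats(["cross", "blob", "blob", "end"])
print(a == b)  # True: same texton densities, so the textures match
```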

1-Consciousness-Sense-Vision-Physiology-Viewpoint

viewpoint consistency

Retina reference frame and object reference frame must match {viewpoint consistency constraint}.

viewpoint-invariance

Visual features can stay the same when observation point changes {viewpoint-invariance, vision}. Brain stores such features for visual recognition.

visual egocenter

People have a reference point {visual egocenter} {egocenter, vision} on line passing through nosebridge and head center, for specifying locations and directions.

1-Consciousness-Sense-Vision-Physiology-Visual Processing Stages

early vision

Brain first processes basic features {early vision}, then prepares to recognize objects and understand scenes, then recognizes objects and understands scenes.

middle vision

Brain first processes basic features, then prepares to recognize objects and understand scenes {middle vision} {midlevel vision}, then recognizes objects and understands scenes.

high-level vision

Brain first processes basic features, then prepares to recognize objects and understand scenes, then recognizes objects and understands scenes {high-level vision}.

1-Consciousness-Sense-Vision-Color Vision

color vision

People can distinguish 150 to 200 main colors and seven million different colors {vision, color} {color vision}, by representing the light intensity-frequency spectrum and separating it into categories.

color: spectrum

Colors range continuously from red to scarlet, vermilion, orange, yellow, chartreuse, green, spring green, cyan, turquoise, blue, indigo (ultramarine), violet, magenta, crimson, and back to red. Scarlet is red with some orange. Vermilion is half red and half orange. Chartreuse is half yellow and half green. Cyan is half green and half blue. Turquoise is blue with some green. Indigo is blue with some red. Violet is blue with more red. Magenta is half blue and half red. Crimson is red with some blue.

color: definition

Blue, green, and yellow have definite wavelengths at which they are pure, with no other colors. Red has no definite wavelength at which it is pure. Red excites mainly long-wavelength receptor. Yellow is at long-wavelength-receptor maximum-sensitivity wavelength. Green is at middle-wavelength-receptor maximum-sensitivity wavelength. Blue is at short-wavelength-receptor maximum-sensitivity wavelength.

color: similarities

Similar colors have similar average light-wave frequencies. Colors with more dissimilar average light-wave frequencies are more different.

color: opposites

Complementary colors are opposite colors, and white and black are opposites.

color: animals

Primates have three cone types. Non-mammal vertebrates have one cone type, have no color opponent process, and detect colors from violets to reds, with poorer discrimination than mammals.

Mammals have two cone types. Mammals have short-wavelength receptor and long-wavelength receptor. For example, dogs have receptor with maximum sensitivity at 429 nm, which is blue for people, and receptor with maximum sensitivity at 555 nm, which is yellow-green for people. Mammals can detect colors from violets to reds, with poorer discrimination than people.

With two cone types, mammals have only one color opponency, yellow-blue. Perhaps, mammals cannot see phenomenal colors because color sensations require two opponent processes.

nature: individuality

People's vision processes are similar, so everyone's vision perceptions are similar. All people see the same color spectrum, with the same colors and color sequence. Colorblind people have consistent but incomplete spectra.

nature: objects

Colors are surface properties and are not essential to object identity.

nature: perception

Colors are not symmetric, so colors have unique relations. Colors cannot substitute. Colors relate in only one consistent and complete way, and can mix in only one consistent and complete way.

nature: subjective

No surface or object physical property corresponds to color. Color depends on source illumination and surface reflectance and so is subjective, not objective.

nature: irreducibility

Matter and energy cannot cause color, though experience highly correlates with physical quantities. Light is only electromagnetic waves.

processes: coloring

Three coloring methods are coloring points, coloring areas, or using separate color overlays. Mind colors areas, not points or overlays, because area coloring is discrete and efficient.

processes: edge enhancement

Adjacent colors enhance their contrast by adding each color's complementary color to the other color. Adjacent black and white also have enhanced contrast.

processes: timing

Different color-receptor-system time constants cause color.

processes: precision

People can detect smaller wavelength differences between 500 nm and 600 nm than above 600 nm or below 500 nm, because two cones have maximum sensitivities within that range.

physical: energy and color

Long-wavelength photons have less energy, and short-wavelength photons have more energy, because photon energy relates directly to frequency.
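The relation can be checked with E = h·c/λ (constants rounded; the electron-volt conversion is only for readability):

```python
H = 6.626e-34   # Planck constant, J*s
C = 3.00e8      # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def photon_energy_eV(wavelength_nm):
    """Photon energy E = h * c / wavelength, in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_eV(450) > photon_energy_eV(650))  # True: blue > red
```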

physical: photons

Photons have emissions, absorptions, vibrations, reflections, and transmissions.

physical: reflectance

Color depends on both illumination and surface reflectance [Land, 1977]. Comparing surface reflective properties to other or remembered surface reflective properties results in color.

physical: scattering

Blue light has shorter wavelength and has more refraction and scattering by atoms.

Long-wavelength and medium-wavelength cones have similar wavelength sensitivity maxima, so scattering and refraction are similar. Fovea has no short-wavelength cones, for better spatial precision.

mixing

Colors from light sources cannot add to make red or to make blue. Colors from pigment reflections cannot add to make red or to make blue.

properties: alerting and calming colors

Psychologically, red is alerting color. Green is neutral color. Blue is calming color.

properties: contraction and expansion by color

Blue objects appear to go farther away and expand, and red objects appear to come closer and contract, because reds appear lighter and blues darker.

properties: color depth

Color can have shallow or deep depth. Yellow is shallow. Green is medium deep. Blue and red are deep.

Perhaps, depth relates to color opponent processes. Red and blue mainly excite one receptor. Yellow and green mainly excite two receptors. Yellow mixes red and green. Green mixes blue and yellow.

properties: light and dark colors

Yellow is the brightest color, comparable to white. In both directions from yellow, darkness grows. Colors darken from yellow toward red. Colors darken from yellow toward green and blue. Green is lighter than blue, which is comparable to black.

properties: sad and glad

Dark colors are sad and light colors are glad, because dark colors are less bright and light colors are brighter.

properties: warm and cool colors

Colors can be relatively warm or cool. Black-body-radiator spectra center on red at 3000 K, white at 5000 K, and blue at 7000 K. Light sources have radiation surface temperature {color temperature} comparable to black-body-radiator surface temperature. However, people call blue cool and red warm, perhaps because water and ice are blue and fires are red, and reds seem to have higher energy output. Warm pigments have more saturation and are lighter than cool pigments. White, gray, and black, as color mixtures, have no net temperature.

properties: hue change

Colors respond differently as hue changes. Reds and blues change more slowly than greens and yellows.

factors

Colors change with illumination intensity, illumination spectrum, background surface, adjacent surface, distance, and viewing angle. Different people vary in what they perceive as unique yellow, unique green, and unique blue. The same person varies in what they perceive as unique yellow, unique green, and unique blue.

realism and subjectivism

Perhaps, color relates to physical objects, events, or properties {color realism} {color objectivism}. Perhaps, color is identical to a physical property {color physicalism}, such as surface spectral reflectance distribution {reflectance physicalism}. Perhaps, colors are independent of subject and condition. Mental processes allow access to physical colors.

Perhaps, colors depend on subject and physical conditions {color relationism} {color relativism}.

Perhaps, things have no color {color eliminativism}, and color is only in mind. Perhaps, colors are mental properties, events, or processes {color subjectivism}. Perhaps, colors are mental properties of mental objects {sense-datum, color}. Perhaps, colors are perceiver mental processes or events {adverbialism, color}. Perhaps, humans perceive real properties that cause phenomenal color. Perhaps, colors are only things that dispose mind to see color {color dispositionalism}. Perhaps, colors depend on action {color enactivism}. Perhaps, colors depend on natural selection requirements {color selectionism}. Perhaps, colors depend on required functions {color functionalism}. Perhaps, colors represent physical properties {color representationalism}. Perhaps, experience has color content {color intentionalism}, which provides information about surface color.

Perhaps, humans know colors, essentially, by experiencing them {doctrine of acquaintance}, though they can also learn information about colors.

Perhaps, colors are identical to mental properties that correspond to color categories {corresponding category constraint}.

Properties {determinable property} can be about categories, such as blue. Properties {determinate property} can be about specific things, such as unique blue, which has no red or green.

Perhaps, there are color illusions due to illumination intensity, illumination spectrum, background surface, adjacent surface, distance, and viewing angle. Human color processing cannot always process the same way or to the same result. Color names and categories have some correspondence with other animals, infants, and cultures, but vary among scientific observers and by introspection.

How can colors be in mind but appear in space? Subjectivism cannot account for the visual field. Objectivism cannot account for the color facts.

Differences among objective object and physical properties, subjective color processing, and relations among surfaces, illumination, background, viewing angle and distance do not explain perceived color differences {explanatory gap, color}.

achromatic

White, gray, and black have no hue {achromatic} and have color purity zero.

aperture color

Color can have no definite depth {aperture color}, such as at a hole in a screen.

brain gray

If eyes completely adapt to dark, people see gray {brain gray} {eigengrau}.

chromatic-response curve

Each opponent system has a relative response for each wavelength {chromatic-response curve}. The brightness-darkness system has maximum response at 560 nm and is symmetric between 500 nm and 650 nm. The red-green system has maximum response at 610 nm and minimum response at 530 nm and is symmetric between 590 nm and 630 nm and between 490 nm and 560 nm. The blue-yellow system has maximum response at 540 nm and minimum response at 430 nm and is symmetric between 520 nm and 560 nm and between 410 nm and 450 nm.

color constancy

Sight tries to keep surface colors constant {color constancy}. Lower luminance makes more red or green, because that affects red-green opponency more. Higher luminance makes more yellow or blue, because that affects blue-yellow opponency more.

Haidinger brush

Light polarization can affect sight slightly {Haidinger brush}.

color frequency

Color relates directly to electromagnetic wave frequency {color, frequency} and intensity.

frequency

Light waves that humans can see have frequencies between 420 and 790 million million cycles per second, 420 to 790 terahertz (THz). Frequency is light speed, 3.00 x 10^8 m/s, divided by wavelength. Vision can detect about one octave of light frequencies.
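The conversion can be sketched directly (speed of light rounded):

```python
C = 3.00e8  # speed of light, m/s

def freq_THz(wavelength_nm):
    """Frequency in terahertz: f = c / wavelength."""
    return C / (wavelength_nm * 1e-9) / 1e12

print(round(freq_THz(700)))  # deep red, about 429 THz
print(round(freq_THz(400)))  # violet, about 750 THz
print(790 / 420 < 2)         # True: the visible range is under one octave
```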

frequency ranges

Red light has frequency range 420 THz to 480 THz. Orange light has frequency range 480 THz to 510 THz. Yellow light has frequency range 510 THz to 540 THz. Green light has frequency range 540 THz to 600 THz. Blue light has frequency range 600 THz to 690 THz. Indigo or ultramarine light has frequency range 690 THz to 715 THz. Violet light has frequency range 715 THz to 790 THz. Colors also differ in wavelength range and in range as a percentage of average wavelength. Range is wider, and a higher percentage, for longer wavelengths.

Reds have widest range. Red goes from infrared 720 nm to red-orange 625 nm = 95 nm. 95 nm/683 nm = 14%. Reds have more spread and less definition.

Greens have narrower range. Green goes from chartreuse 560 nm to cyan 500 nm = 60 nm. 60 nm/543 nm = 11%.

Blues have narrowest range. Blue goes from cyan 480 nm to indigo or ultramarine 440 nm = 40 nm. 40 nm/463 nm = 8%. Blues have less spread and more definition.

wavelength ranges

Spectral colors have wavelength ranges: red = 720 nm to 625 nm, orange = 625 nm to 590 nm, yellow = 590 nm to 575 nm, chartreuse = 575 nm to 555 nm, green = 555 nm to 520 nm, cyan = 520 nm to 480 nm, blue = 480 nm to 440 nm, indigo or ultramarine = 440 nm to 420 nm, and violet = 420 nm to 380 nm.

maximum purity frequency

Spectral colors have maximum purity at specific frequencies: red = 436 THz, orange = 497 THz, yellow = 518 THz, chartreuse = 539 THz, green = 556 THz, cyan = 604 THz, blue = 652 THz, indigo or ultramarine = 694 THz, and violet = 740 THz.

maximum purity wavelengths

Spectral colors have maximum purity at specific wavelengths: red = 683 nm, orange = 608 nm, yellow = 583 nm, chartreuse = 560 nm, green = 543 nm, cyan = 500 nm, blue = 463 nm, indigo or ultramarine = 435 nm, and violet = 408 nm. See Figure 1. Magenta is not spectral color but is red-violet, so assume wavelength is 730 nm or 375 nm.

maximum sensitivity wavelengths

Blue is most sensitive at 482 nm, where it just turned blue from greenish-blue. Green is most sensitive at 506 nm, at middle. Yellow is most sensitive at 568 nm, just after greenish-yellow. Red is most sensitive at 680 nm, at middle red.

color-wavelength symmetry

Colors are symmetric around middle of long-wavelength and middle-wavelength receptor maximum-sensitivity wavelengths 550 nm and 530 nm. Wavelength 543 nm has green color. Chartreuse, yellow, orange, and red are on one side. Cyan, blue, indigo or ultramarine, and violet are on other side. Yellow is 583 - 543 = 40 nm from middle. Orange is 608 - 543 = 65 nm from middle. Red is 683 - 543 = 140 nm from middle. Blue is 543 - 463 = 80 nm from middle. Indigo or ultramarine is 543 - 435 = 108 nm from middle. Violet is 543 - 408 = 135 nm from middle.

opponency

Cone outputs can subtract and add {opponency} {color opponent process} {opponent color theory} {tetrachromatic theory}.

red-green opponency

Middle-wavelength cone output subtracts from long-wavelength cone output, L - M, to detect blue, green, yellow, orange, pink, and red. Maximum is at red, and minimum is at blue. See Figure 1. Hue calculation is in lateral geniculate nucleus, using neurons with center and surround. Center detects long-wavelengths, and surround detects medium-wavelengths.

blue-yellow opponency

Short-wavelength cone output subtracts from long-wavelength plus middle-wavelength cone output, (L + M) - S, to detect violet, indigo or ultramarine, blue, cyan, green, yellow, and red. Maximum is at chartreuse, minimum is at violet, and another minimum is at red. See Figure 1. Saturation calculation is in lateral geniculate nucleus, using neurons with center and surround. Luminance output goes to center, and surround detects short-wavelengths [Hardin, 1988] [Hurvich, 1981] [Katz, 1911] [Lee and Valberg, 1991].

brightness

Long-wavelength and middle-wavelength cones add to detect luminance brightness: L + M. See Figure 1. Short-wavelength cones are few. Luminance calculation is in lateral geniculate nucleus, using neurons with center and surround. Center detects long-wavelengths, and surround detects negative of medium-wavelengths. Brain uses luminance to find edges and motions.

neutral point

When positive and negative contributions are equal, opponent-color processes can give no signal {neutral point}. For the L - M opponent process, red and cyan are complementary colors and mix to make white. For the L + M - S opponent process, blue and yellow are complementary colors and mix to make white. The L + M sense process has no neutral point.
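The three channels above can be sketched as arithmetic on cone activations (the activation values are arbitrary illustrative numbers on a 0-to-1 scale, not measured cone responses):

```python
def opponent_channels(L, M, S):
    """Opponent-process sketch: red-green = L - M, blue-yellow = (L + M) - S,
    luminance = L + M, computed from three cone activations."""
    return {"red_green": L - M, "blue_yellow": (L + M) - S, "luminance": L + M}

long_light = opponent_channels(0.9, 0.3, 0.05)   # long-wavelength light
short_light = opponent_channels(0.1, 0.15, 0.9)  # short-wavelength light
print(long_light["red_green"] > 0)     # True: signals red
print(short_light["blue_yellow"] < 0)  # True: signals blue
```

When a channel's positive and negative contributions balance, its output is zero: that is the neutral point.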

color and cones

Red affects long-wavelength some. Orange affects long-wavelength well. Yellow affects long-wavelength most. Green affects middle-wavelength most. Blue affects short-wavelength most.

Indigo or ultramarine, because it has blue and some red, affects long-wavelength and short-wavelength. Violet, because it has blue and more red, affects long-wavelength more and short-wavelength less. Magenta, because it has half red and half blue, affects long-wavelength and short-wavelength equally. See Figure 1.

White, gray, and black affect long-wavelength receptor and middle-wavelength receptor equally, and long-wavelength receptor plus middle-wavelength receptor and short-wavelength receptor equally. See Figure 1. Complementary colors add to make white, gray, or black.

color and opponencies

For red, L - M is maximum, and L + M - S is maximum. For orange, L - M is positive, and L + M - S is maximum. For yellow, L - M is half, and L + M - S is maximum. For green, L - M is zero, and L + M - S is zero. For blue, L - M is minimum, and L + M - S is minimum. For magenta, L - M is half, and L + M - S is half.

saturation

Adding white, to make more unsaturation, decreases L - M values and increases L + M - S values. See Figure 1.

evolution

For people to see color, the three primate cone receptors must be maximally sensitive at blue, green, and yellow-green; determining colors then requires opponency and yields color complementarity. The three cones do not have maximum sensitivities at red, green, and blue, because each sensor would then code one main color, and the system would have no complementary colors. Such a system would have no opponency, because those opponencies would have ambiguous ratios and ambiguous colors.

univariance problem

Photoreceptors can have the same output {univariance problem} {problem of univariance} {univariance principle} {principle of univariance} for an infinite number of stimulus frequency-intensity combinations. Different photon wavelengths have different absorption probabilities, from 0% to 10%. Higher-intensity low-probability wavelengths can make same total absorption as lower-intensity high-probability wavelengths. For example, if frequency A has probability 1% and intensity 2, and frequency B has probability 2% and intensity 1, total absorption is same.

Photon absorption causes one photoreceptor molecule to isomerize. Isomerization reactions are the same for all stimulus frequencies and intensities. Higher intensity increases number of reactions.
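The frequency-intensity trade-off in the example above can be checked numerically. A minimal sketch using the probability and intensity figures from the example:

```python
# A photoreceptor's output depends only on total absorptions, so a
# low-probability wavelength at high intensity is indistinguishable from
# a high-probability wavelength at low intensity (univariance).

def total_absorption(absorption_probability, intensity):
    """Expected absorptions: probability per photon times photon count."""
    return absorption_probability * intensity

# Frequency A: probability 1%, intensity 2. Frequency B: probability 2%, intensity 1.
a = total_absorption(0.01, 2)
b = total_absorption(0.02, 1)
```

The two stimuli produce the same total absorption, so one receptor type alone cannot tell them apart.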

wavelength mixture space

Color-vision systems have one or more receptor types, each able to absorb a percentage of quanta at each wavelength {wavelength mixture space}. For all receptor types, different wavelength and intensity combinations can result in same output.

1-Consciousness-Sense-Vision-Color Vision-Colors

color categories

Colors {colors} {color, categories} are distinguishable.

The eleven fundamental color categories are white, black, red, green, blue, orange, yellow, pink, brown, purple (violet), and gray [Byrne and Hilbert, 1997] [Wallach, 1963].

major and minor colors

Major colors are red, yellow, green, and blue. Yellow lies between red and green, and green lies between yellow and blue on the color wheel. Minor colors are orange, chartreuse, cyan, and magenta. Orange is red and yellow. Chartreuse is yellow and green {chartreuse, color mixture}. Cyan is green and blue {cyan, color mixture}. Magenta is red and blue. Halftones are between major and minor color categories: red-orange {vermilion, color mixture}, orange-yellow, yellow-chartreuse, chartreuse-green, green-cyan {spring green, color mixture}, cyan-blue {turquoise, color mixture}, blue-violet {indigo, color mixture} {ultramarine, color mixture}, indigo-magenta or blue-magenta {violet, color mixture}, and magenta-red {crimson, color mixture}.

white

White is relatively higher in brightness than adjacent surfaces. Adding white to color makes color lighter. However, increasing colored-light intensity does not make white.

white: intensity

When light is too dim for cones, people see whites, grays, and blacks. When light is intense enough for cones, people see whites, grays, and blacks if no color predominates.

white: complementary colors

Spectral colors have complementary colors. Color and complementary color mix to make white, gray, or black. Two spectral colors mix to make intermediate color, which has a complementary color. Mixing two spectral colors and intermediate-color complementary color makes white, gray, or black.

black

Black is relatively lower in brightness than adjacent surfaces. Black is not absence of visual sense qualities but is a color.

gray

Gray is relatively the same brightness as adjacent surfaces.

red

Red light is absence of blue and green, and so is absence of cyan, its additive complementary color. Red pigment is absence of green, its subtractive complementary color.

red: purity

Spectral red cannot be a mixture of other colors. Pigment red cannot be a mixture of other colors.

red: properties

Red is alerting color. Red is warm color, not cool color. Red is light color.

red: mixing

Red mixes with white to make pink.

Spectral red blends with spectral cyan to make white. Pigment red blends with pigment green to make black. Spectral red blends with spectral yellow to make orange. Pigment red blends with pigment yellow to make brown. Spectral red blends with spectral blue or violet to make purples. Pigment red blends with pigment blue or violet to make purples.

red: distance

People do not see red as well at farther distances.

red: retina

People do not see red as well at visual periphery.

red: range

Red has the widest color range because reds have the longest wavelengths and the largest wavelength range.

red: intensity

Red can fade in intensity to brown then black.

red: evolution

Perhaps, red evolved to discriminate food.

blue

Blue light is absence of red and green, so blue is absence of yellow, its additive complementary color. Blue pigment is absence of red and green, so blue is absence of orange, its subtractive complementary color.

blue: purity

Spectral blue cannot be a mixture of other colors. Pigment blue cannot be a mixture of other colors.

blue: properties

Blue is calming color. Blue is cool color, not warm color. Blue is light color.

blue: mixing

Blue mixes with white to make pastel blue.

Spectral blue blends with spectral yellow to make white. Pigment blue blends with pigment yellow to make black. Spectral blue blends with spectral green to make cyan. Pigment blue blends with pigment green to make dark blue-green. Spectral blue blends with spectral red to make purples. Pigment blue blends with pigment red to make purples.

blue: distance

People see blue well at farther distances.

blue: retina

People see blue well at visual periphery.

blue: range

Blue has narrow wavelength range.

blue: evolution

Perhaps, blue evolved to tell when sky is changing or to see certain objects against sky.

blue: saturation

Teal is less saturated cyan.

green

Green light is absence of red and blue, and so is absence of magenta, its additive complementary color. Green pigment is absence of red, its subtractive complementary color.

green: purity

Spectral green can mix blue and yellow. Pigment green can mix blue and yellow.

green: properties

Green is neutral color in alertness. Green is cool color. Green is light color.

green: mixing

Green mixes with white to make pastel green.

Spectral green blends with spectral magenta to make white. Pigment green blends with pigment magenta to make black. Spectral green blends with spectral orange to make yellow. Pigment green blends with pigment orange to make brown. Spectral green blends with spectral blue to make cyan. Pigment green blends with pigment blue to make dark blue-green.

green: distance

People see green moderately well at farther distances.

green: retina

People do not see green well at visual periphery.

green: range

Green has wide wavelength range.

green: evolution

Perhaps, green evolved to discriminate fruit and vegetable ripening.

yellow

Yellow light is absence of blue, because blue is its additive complementary color. Yellow pigment is absence of indigo or violet, its subtractive complementary color.

yellow: purity

Spectral yellow can mix red and green. Pigment yellow cannot be a mixture of other colors.

yellow: properties

Yellow is neutral color in alertness. Yellow is warm color. Yellow is light color.

yellow: mixing

Yellow mixes with white to make pastel yellow.

Spectral yellow blends with spectral blue to make white. Pigment yellow blends with pigment blue to make green. Spectral yellow blends with spectral red to make orange. Pigment yellow blends with pigment red to make brown. Olive is dark low-saturation yellow (dark yellow-green).

yellow: distance

People see yellow moderately well at farther distances.

yellow: retina

People do not see yellow well at visual periphery.

yellow: range

Yellow has narrow wavelength range.

orange: purity

Spectral orange can mix red and yellow. Pigment orange can mix red and yellow.

orange: properties

Orange is slightly alerting color. Orange is warm color. Orange is light color.

orange: mixing

Orange mixes with white to make pastel orange.

Spectral orange blends with spectral blue-green to make white. Pigment orange blends with pigment blue-green to make black. Spectral orange blends with spectral cyan to make yellow. Pigment orange blends with pigment cyan to make brown. Spectral orange blends with spectral red to make light red-orange. Pigment orange blends with pigment red to make dark red-orange.

orange: distance

People do not see orange well at farther distances.

orange: retina

People do not see orange well at visual periphery.

orange: range

Orange has narrow wavelength range.

violet: purity

Spectral violet can mix blue and red. Pigment violet has red and so is purple.

violet: properties

Violet is calming color. Violet is cool color. Violet is light color.

violet: mixing

Violet mixes with white to make pastel violet.

Spectral violet blends with spectral yellow-green to make white. Pigment violet blends with pigment yellow-green to make black. Spectral violet blends with spectral red to make purples. Pigment violet blends with pigment red to make purples.

violet: distance

People see violet well at farther distances.

violet: retina

People see violet well at visual periphery.

violet: range

Violet has narrow wavelength range.

violet: intensity

Violet can fade in intensity to dark purple then black.

brown: purity

Pigment brown can mix red, yellow, and green. Brown is commonest color but is not spectral color. Brown is like dark orange pigment or dark yellow-orange. Brown color depends on contrast and surface texture.

brown: properties

Brown is not alerting or calming. Brown is warm color. Brown is dark color.

brown: mixing

Brown mixes with white to make pastel brown.

Pigment brown blends with other pigments to make dark brown or black.

brown: distance

People do not see brown well at farther distances.

brown: retina

People do not see brown well at visual periphery.

brown: range

Brown is not spectral color and has no wavelength range.

purple: purity

Purples come from mixing red and blue. They have no green, to which they are complementary. Purples are non-spectral colors, because reds have longer wavelengths and blues have shorter wavelengths.

purple: saturation

Purple is low-saturation magenta.

gamut

Hue, brightness, and saturation ranges make all perceivable colors {gamut, color}. Perceivable-color range is greater than three-primary-color additive-combination range. However, allowing subtraction of red, as with a negative red value in color-matching experiments, covers the full color gamut.

primary color

For subtractive colors, combining three pure color pigments {primary color}, such as red, yellow, and blue, can make most other colors.

secondary color

Mixing primary-color pigments {secondary color} makes magenta from red and blue, green from blue and yellow, and orange from red and yellow.

tertiary color

Mixing primary-color and secondary-color pigment {tertiary color} {intermediate color} makes chartreuse from yellow and green, cyan from blue and green, violet from blue and magenta, red-magenta, red-orange, and yellow-orange.

non-unique

Primary colors are not unique. Besides red, yellow, and blue, other triples can make most colors.

related color

Color can have light surround and appear to reflect light {related color}. Brown and gray can appear only when other colors are present. If background is white, gray appears black. If background is black, gray appears white. Color can have dark surround and appear luminous {unrelated color}.

spectral color

People can see colors {spectral color}| from illumination sources. Light from sources can have one wavelength.

seven categories

Violets are 380 to 435 nm, with middle 408 nm and range 55 nm. Blues are 435 to 500 nm, with middle 463 nm and range 65 nm. Cyans are 500 to 520 nm, with middle 510 nm and range 20 nm. Greens are 520 to 565 nm, with middle 543 nm and range 45 nm. Yellows are 565 to 590 nm, with middle 583 nm and range 35 nm. Oranges are 590 to 625 nm, with middle 608 nm and range 35 nm. Reds are 625 to 740 nm, with middle 683 nm and range 115 nm.
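The seven ranges above can be written as a lookup table. A minimal sketch; the boundaries, in nanometers, are taken directly from the text:

```python
# Spectral color categories by wavelength range (nm), per the text.
BANDS = [  # (lower bound, upper bound, name)
    (380, 435, "violet"),
    (435, 500, "blue"),
    (500, 520, "cyan"),
    (520, 565, "green"),
    (565, 590, "yellow"),
    (590, 625, "orange"),
    (625, 740, "red"),
]

def color_category(wavelength_nm):
    """Name the category for a wavelength, or None outside 380-740 nm."""
    for low, high, name in BANDS:
        if low <= wavelength_nm < high:
            return name
    if wavelength_nm == 740:  # close the last interval at the long end
        return "red"
    return None
```

The middle wavelengths from the text fall in their stated bands, for example 408 nm in violet and 683 nm in red.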

fifteen categories

Spectral colors start at short-wavelength purplish-blue. Purplish-blues are 400 to 450 nm, with middle 425 nm. Blues are 450 to 482 nm, with middle 465 nm. Greenish-blues are 482 to 487 nm, with middle 485 nm. Blue-greens are 487 to 493 nm, with middle 490 nm. Bluish-greens are 493 to 498 nm, with middle 495 nm. Greens are 498 to 530 nm, with middle 510 nm. Yellowish-greens are 530 to 558 nm, with middle 550 nm. Yellow-greens are 558 to 568 nm, with middle 560 nm. Greenish-yellows are 568 to 572 nm, with middle 570 nm. Yellows are 572 to 578 nm, with middle 575 nm. Yellowish-oranges are 578 to 585 nm, with middle 580 nm. Oranges are 585 to 595 nm, with middle 590 nm. Reddish-oranges and orange-pinks are 595 to 625 nm, with middle 610 nm. Reds and pinks are 625 to 740 nm, with middle 640 nm. Spectral colors end at long-wavelength purplish-red.

non-spectral hue

People can see colors {non-spectral hue} that have no single wavelength but require two wavelengths. For example, mixing red and blue makes magenta and other reddish purples. Such a mixture stimulates short-wavelength cones and long-wavelength cones but not middle-wavelength cones.

unique hue

Blue, red, yellow, and green describe pure colors {unique hue}. Unique red occurs only at low brightness, because more brightness adds yellow. Other colors mix unique hues. For example, orange is reddish yellow or yellowish red, and purples are reddish blue or bluish red.

1-Consciousness-Sense-Vision-Color Vision-Color Space

color space

Three-dimensional mathematical spaces {color space} can use signals or signal combinations from the three different cone cells to give colors coordinates.

color wheel

Circular color scales {color wheel} can show sequence from red to magenta.

simple additive color wheel

Colors on circle circumference can show correct color mixing. See Figure 1. Two-color mixtures have color halfway between the colors. Complementary colors are opposite. Three complementary colors are 120 degrees apart. Red is at left, blue is 120 degrees to left, and green is 120 degrees to right. Yellow is halfway between red and green. Cyan is halfway between blue and green. Magenta is halfway between red and blue. Orange is between yellow and red. Chartreuse is between yellow and green. Indigo or ultramarine is between blue and violet. Violet is between indigo or ultramarine and magenta. Non-spectral colors are in quarter-circle from violet to red. Cone color receptors, at indigo or ultramarine, green, and yellow-green positions, are in approximately half-circle.

simple subtractive color wheel

For subtractive colors, shift bluer colors one position: red opposite green, vermilion opposite cyan, orange opposite blue, yellow opposite indigo, and chartreuse opposite violet. Color subtraction makes darker colors, which are bluer, because the short-wavelength receptor has higher weighting than the other two receptors. The shift affects reds and oranges little, greens some, and blues most. Blues and greens shift toward red to add less blue, so complementary colors make black rather than blue-black. See Figure 2.

quantum chromodynamics color circle

Additive color wheel can describe quantum-chromodynamics quark color-charge complex-number vectors. On complex-plane unit circle, red coordinates are (+1, 0*i). Green coordinates are (-1/2, -(3^(0.5))*i/2). Blue coordinates are (-1/2, +(3^(0.5))*i/2). Yellow coordinates are (+1/2, -(3^(0.5))*i/2). Cyan coordinates are (-1, 0*i). Magenta coordinates are (+1/2, +(3^(0.5))*i/2).

To find color mixtures, add vectors. A quark and an antiquark (color and anticolor) add to make mesons, which have no color and whose resultant vector is zero. Three quarks add to make protons and neutrons, which have no color and whose resultant vector is zero. Color mixtures that result in non-zero vectors have color and are not physical.
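The vector addition above can be sketched with complex numbers as a minimal check of the listed coordinates:

```python
import math

# The quark color-charge vectors as points on the complex unit circle;
# coordinates are taken directly from the text.
HALF_SQRT3 = math.sqrt(3) / 2
COLOR = {
    "red":     complex(+1.0, 0.0),
    "green":   complex(-0.5, -HALF_SQRT3),
    "blue":    complex(-0.5, +HALF_SQRT3),
    "yellow":  complex(+0.5, -HALF_SQRT3),
    "cyan":    complex(-1.0, 0.0),
    "magenta": complex(+0.5, +HALF_SQRT3),
}

def mixture(*names):
    """Add color-charge vectors; a zero sum means colorless (physical)."""
    return sum(COLOR[name] for name in names)

meson = mixture("red", "cyan")            # color plus its anticolor
baryon = mixture("red", "green", "blue")  # three quark colors
```

Both the color-anticolor pair and the three-color triple sum to zero, while a pair such as red and green leaves a non-zero, non-physical resultant.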

color wheel by five-percent intervals

Color wheel can separate all colors equally. Divide color circle into 20 parts with 18 degrees each. Red = 0, orange = 2, yellow = 4, chartreuse = 6, green = 8, cyan = 10, blue = 12, indigo or ultramarine = 14, violet = 16, and magenta = 18. Crimson = 19, turquoise (cyan-blue) = 11, cyan-green = 9, yellow-orange = 3, and vermilion (red-orange) = 1. Primary colors are at 0, 8, and 12. Secondary colors are at 4, 10, and 18. Tertiary colors are at 2, 6, and 14/16. Complementary colors are opposite. See Figure 3.
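A minimal sketch of the twenty-position wheel, with complements found by stepping ten positions (180 degrees); the position numbers are those listed above:

```python
# Twenty 18-degree wheel positions, per the text; complementary colors
# sit ten positions (180 degrees) apart.
POSITION = {
    "red": 0, "vermilion": 1, "orange": 2, "yellow-orange": 3, "yellow": 4,
    "chartreuse": 6, "green": 8, "cyan-green": 9, "cyan": 10, "turquoise": 11,
    "blue": 12, "indigo": 14, "violet": 16, "magenta": 18, "crimson": 19,
}

def complement_position(name):
    """Wheel position directly opposite the named color."""
    return (POSITION[name] + 10) % 20

def degrees(name):
    """Angle of the named color's position, 18 degrees per step."""
    return POSITION[name] * 18
```

Stepping ten positions recovers the complementary pairs from earlier sections: red with cyan, green with magenta, yellow with indigo.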

color wheel with number line

Set magenta = 0 and green = 1. Red = 0.33, and blue = 0.33. Yellow = 0.67, and cyan = 0.67. Complementary colors add to 1.

color wheel with four points

Blue, green, yellow, and red make a square. Green is halfway between blue and yellow. Yellow is halfway between green and red. Blue is halfway between green and red in other direction. Red is halfway between yellow and blue in other direction. Complementary pigments are opposite. Adding magenta, cyan, chartreuse, and orange makes eight points, like tones of an octave but separated by equal intervals, which can be harmonic ratios: 2/1, 3/2, 4/3, and 5/4.

white and black

Color wheel has no black or white, because they mostly depend on brightness. Adding black, gray, and white makes color cylinder, on which unsaturated colors are pastels or dark colors.

CIE Chromaticity Diagram

Color-space systems {chromaticity diagram} {CIE Chromaticity Diagram} can use luminance Y and two coordinates, x and y, related to hue and saturation. CIE system uses spectral power distribution (SPD) of light emitted from surfaces.

tristimulus

Retina has three cone types, each with maximum-output stimulus frequency {tristimulus values}, established by eye sensitivity measurements. Using tristimulus values allows factoring out luminance brightness to establish luminance coordinate. Factoring out luminance leaves two chromaticity color coordinates.

color surface

Chromaticity coordinates define the border of an upside-down-U-shaped color space, giving all maximum-saturation hues from 400 to 700 nm. Along the flat bottom border are purples. Plane middle regions represent decreasing saturation from edges to middle, with completely unsaturated white in the middle. For example, between middle white and border reds and purples are pinks. The central point is where x and y equal 1/3. From border to central white, regions have the same color with less saturation [Hardin, 1988]. CIE system can use any three primary colors, not just red, green, and blue.
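Factoring out luminance amounts to the standard projection from tristimulus values to chromaticity coordinates. A minimal sketch; the equal-energy input values are illustrative:

```python
# Chromaticity from tristimulus values: dividing by the total factors
# out luminance, leaving two color coordinates x and y.

def chromaticity(X, Y, Z):
    """Project tristimulus values (X, Y, Z) onto the chromaticity plane."""
    total = X + Y + Z
    return X / total, Y / total

# Equal tristimulus values land on the central white point x = y = 1/3.
x, y = chromaticity(1.0, 1.0, 1.0)
```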

Munsell color space

Color-space systems {Munsell color space} can use color samples spaced by equal differences. Hue is on color-circle circumference, with 100 equal hue intervals. Saturation {chroma, saturation} {chrominance} is along color-circle radius, with 10 to 18 equal intervals, for different hues. Brightness {light value} is along perpendicular above color circle, with black at 0 units and white at 10 units. Magenta is between red and violet. In Munsell system, red and cyan are on same diameter, yellow and blue are on another diameter, and green and magenta are on a diameter [Hardin, 1988].

Ostwald color space

Color-space systems {Ostwald color space} can use standard samples and depend on reflectance. Colors have three coordinates: percentage of total lumens for main wavelength C, white W, and black B. Wavelength is hue. For given wavelength, higher C gives greater purity, and higher W with lower B gives higher luminance [Hardin, 1988].

Swedish Natural Color

Color-space systems {Swedish Natural Color Order System} (NCS) can depend on how primary colors and other colors mix [Hardin, 1988].

1-Consciousness-Sense-Vision-Color Vision-Contrast

color contrast

If two different colors are adjacent, each color adds its complementary color to the other {color contrast}. If bright color is beside dark color, contrast increases. If white and black areas are adjacent, they add opposite color to each other. If another color overlays background color, brighter color dominates. If brighter color is in background, it shines through overlay. If darker color is in background, overlay hides it.

successive contrast

Two adjacent different-colored objects viewed at the same time have enhanced color differences {simultaneous contrast}. Colors viewed one after another also enhance each other's differences {successive contrast}.

1-Consciousness-Sense-Vision-Color Vision-Mixing Colors

color mixture

All colors from surface point can mix {color mixture}.

intermediate color

Two colors mix to make the intermediate color. For example, red and orange make red-orange vermilion. See Figure 1.

colors mix uniquely

Colors blend with other colors differently.

additive color mixture

Colors from light sources add {additive color mixture}. No additive spectral-color mixture can make blue or red. Magenta and orange cannot make red, because magenta has blue, orange has yellow and green, and red has no blue or green. Indigo and cyan cannot make blue, because indigo has red and cyan has green, and blue has no green or red.

subtractive color mixture

Colors from pigmented surfaces have colors from source illumination minus colors absorbed by pigments {subtractive color mixture}. Colors from pigment reflections cannot add to make red or to make blue. Blue and yellow pigments reflect green, because both reflect some green, and sum of greens is more than reflected blue or yellow. Red and yellow pigments reflect orange, because each reflects some orange, and sum of oranges is more than reflected red or yellow.

For subtractive colors, mixing cannot make red, blue, or yellow. Magenta and orange cannot make red, because magenta has blue, orange has yellow and green, and red has no blue or green. Indigo and cyan cannot make blue, because indigo has red and cyan has green, and blue has no red or green. Chartreuse and orange cannot make yellow, because chartreuse has green and some indigo, orange has red and some indigo, and yellow has no indigo.

pastel colors

Colors mix with white to make pastel colors.

similarity

Similar colors mix to make the intermediate color.

primary additive colors

Red, green, and blue are the primary additive colors.

primary subtractive colors

Red, yellow, and blue, or magenta, yellow, and cyan, are the primary subtractive colors.

secondary additive colors

Primary additive-color mixtures make secondary additive colors: yellow from red and green, magenta from red and blue, and cyan from green and blue.

secondary subtractive colors

Primary subtractive-color mixtures make secondary subtractive colors: orange from red and yellow, magenta from red and blue, and green from yellow and blue.

tertiary additive colors

Mixing primary and secondary additive colors makes tertiary additive colors: orange from red and yellow, violet from blue and magenta, and chartreuse from yellow and green.

tertiary subtractive colors

Mixing primary and secondary subtractive colors makes tertiary subtractive colors: cyan from blue and green, violet from blue and magenta, and chartreuse from yellow and green.

complementary color

Two colors {complementary color}| can add to make white. Complementary colors can be primary, secondary, or tertiary colors.

complementary additive colors

Colors with equal amounts of red, green, and blue make white. Red and cyan, yellow and blue, or green and magenta make white.

Equal red, blue, and green contributions make white light.

complementary subtractive colors

Colors that mix to make equal amounts of red, yellow, and blue make black. Orange and blue, yellow and indigo/violet, or green and red make black. Equal magenta, yellow, and cyan contributions make black.

Grassmann laws

Grassmann described color-mixing laws {Grassmann's laws} {Grassmann laws}. Grassmann's laws are vector additions and multiplications in wavelength mixture space.

If two pairs of wavelengths at specific intensities result in the same color, adding the pairs gives the same color, only brighter: if C1 + C2 = x and C3 + C4 = x, then C1 + C2 + C3 + C4 = 2*x. For example, if a blue-and-yellow pair makes green, and another pair makes the same green, adding the pairs makes the same green.

If two mixtures of wavelengths at specific intensities make the same color, adding the same wavelength and intensity to each preserves the match: if C1 + C2 = x and C3 + C4 = x, then C1 + C2 + C5 = C3 + C4 + C5. For example, if a blue-and-yellow pair matches a green pair, adding red to each makes the same new color.

If a pair of wavelengths at specific intensities makes a color, changing both intensities equally makes the same color as changing the pair intensity: if C1 + C2 = x, then n*C1 + n*C2 = n*(C1 + C2) = n*x. For example, if a blue-and-yellow pair makes green, increasing both color intensities by the same factor makes the same green, only brighter.
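The laws can be sketched as vector addition and scalar multiplication in a three-cone response space. A minimal sketch; the (L, M, S) vectors below are illustrative assumptions, not measured values:

```python
# Grassmann's laws as vector operations on illustrative three-cone
# response vectors (L, M, S).

def add(c1, c2):
    """Superpose two lights: cone responses add component-wise."""
    return tuple(a + b for a, b in zip(c1, c2))

def scale(n, c):
    """Change a light's intensity by factor n."""
    return tuple(n * a for a in c)

blue = (0.1, 0.3, 0.9)
yellow = (0.8, 0.6, 0.1)
red = (0.9, 0.2, 0.1)
green = add(blue, yellow)  # a green that the blue-and-yellow pair matches

# Additivity: adding the same light to matching mixtures preserves the match.
match_after_adding = add(add(blue, yellow), red) == add(green, red)

# Proportionality: scaling both components equals scaling the mixture.
match_after_scaling = add(scale(2, blue), scale(2, yellow)) == scale(2, green)
```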

Benham top

Wheel with black and white areas, rotated at five to ten Hz in strong light to give a flicker rate below fusion frequency, can produce intense colors {Benham's top} {Benham top} {Benham disk}, because color results from different color-receptor-system time constants.

1-Consciousness-Sense-Vision-Color Vision-Parameters

chromaticity

Color perception depends on hue, saturation, and brightness. Mostly hue and saturation {chromaticity} make colors. Brightness does not affect chromaticity much [Kandel et al., 1991] [Thompson, 1995].

hue

Spectral colors depend on light wavelength and frequency {hue}. People can distinguish 160 hues, from light of wavelength 400 nm to 700 nm. Therefore, people can distinguish colors differing by approximately 2 nm of wavelength.

color mixtures

Hue can come from light of one wavelength or light mixtures with different wavelengths. Hue takes the weighted average of the wavelengths. Assume colors can have brightness 0 to 100. If red is 100, green is 0, and blue is 0, hue is red at maximum brightness. If red is 50, green is 0, and blue is 0, hue is red at half maximum brightness. If red is 25, green is 0, and blue is 0, hue is red at quarter maximum brightness.

If red is 100, green is 100, and blue is 0, hue is yellow at maximum brightness. If red is 50, green is 50, and blue is 0, hue is yellow at half maximum brightness. If red is 25, green is 25, and blue is 0, hue is yellow at quarter maximum brightness.

If red is 100, green is 50, and blue is 0, hue is orange at maximum brightness. If red is 50, green is 25, and blue is 0, hue is orange at half maximum brightness. If red is 24, green is 12, and blue is 0, hue is orange at quarter maximum brightness.
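The examples above amount to saying that hue depends on the ratio of the components, while brightness depends on their overall level. A minimal sketch using the component values from the orange example:

```python
# Hue as the ratio of red, green, and blue components: scaling all three
# equally changes brightness but leaves hue unchanged.

def hue_ratios(r, g, b):
    """Normalize components so mixtures at different brightnesses compare."""
    total = r + g + b
    return (r / total, g / total, b / total)

full_orange = hue_ratios(100, 50, 0)  # orange at maximum brightness
half_orange = hue_ratios(50, 25, 0)   # the same orange at half brightness
```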

lightness

Lightness {lightness} {luminance factor} is the fraction of incident light transmitted or reflected diffusely. Lightness sums the three primary-color (red, green, and blue) brightnesses. Assume each color can have brightness 0 to 100. For example, if red is 100, green is 100, and blue is 100, lightness is maximum brightness. If red is 100, green is 100, and blue is 50, lightness is 83% maximum brightness. If red is 100, green is 50, and blue is 50, lightness is 67% maximum brightness. If red is 67, green is 17, and blue is 17, lightness is 33% maximum brightness. If red is 17, green is 17, and blue is 17, lightness is 17% maximum brightness.
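The sums above can be written as one function. A minimal sketch, assuming each component runs 0 to 100 so the maximum total is 300:

```python
# Lightness as the sum of the three primary brightnesses, expressed as a
# fraction of the maximum sum (300 when each component runs 0 to 100).

def lightness(r, g, b):
    """Fraction of maximum total brightness."""
    return (r + g + b) / 300

white = lightness(100, 100, 100)   # maximum brightness
example = lightness(100, 100, 50)  # the 83% case from the text
```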

saturation of color

Pure saturated color {saturation, color}| {purity, color} has no white, gray, or black. White, gray, and black have zero purity. Spectral colors can have different white, gray, or black percentages (unsaturation). Saturated pigments mixed with black make dark colors, like ochre. Saturated pigments mixed with white make light pastel colors, like pink.

frequency range

The purest most-saturated color has light with one wavelength. Saturated color pigments reflect light with narrow wavelength range. Unsaturated pigments reflect light with wide wavelength range.

colors and saturation

All spectral colors can mix with white. White is lightest and looks least saturated. Yellow is the lightest color. Monochromatic yellows have largest saturation range (as in Munsell color system), change least as saturation changes, and look least saturated (most white) at all saturation levels. Green is second-lightest color. Monochromatic greens have second-largest saturation range, change second-least as saturation changes, and look second-least saturated (second-most white) at all saturation levels. Red is third-lightest color. Monochromatic reds have average saturation range, change third-least as saturation changes, and look third-least saturated (third-most white) at all saturation levels. Blue is darkest color. Monochromatic blues have smallest saturation range, change most as saturation changes, and look fourth-least saturated (least white) at all saturation levels. Black is darkest and looks most saturated.

calculation

Whiteness, grayness, and blackness have all three primary colors (red, green, and blue) in equal amounts. Whiteness, grayness, or blackness level is brightness of lowest-level primary color times three. Subtracting the lowest level from all three primary colors and summing the two highest calculates hue brightness. Total brightness sums primary-color brightnesses. Saturation is hue brightness divided by brightness. Assume colors can have brightness 0 to 100. If red is 100, green is 100, and blue is 100, whiteness is maximum. If red is 50, green is 50, and blue is 50, grayness is half maximum. If red is 25, green is 25, and blue is 25, grayness is quarter maximum.

Assume maximum brightness is 100%. If red is 33%, green is 33%, and blue is 33%, brightness is 100% = (33% + 33% + 33%), whiteness is 100% = (33% + 33% + 33%), hue is white at 0%, and saturation is 0% = (0% / 100%). If red is 17%, green is 17%, and blue is 17%, brightness is 50% = (17% + 17% + 17%), whiteness is 50% = (17% + 17% + 17%), hue is white at 0%, and saturation is 0% = (0% / 50%). If red is 33%, green is 33%, and blue is 17%, brightness is 83% = (33% + 33% + 17%), whiteness is 50% = (17% + 17% + 17%), hue is yellow at 33% = (33% - 17%) + (33% - 17%), and saturation is 40% = (33% / 83%). If red is 67%, green is 17%, and blue is 17%, brightness is 100% = (67% + 17% + 17%), whiteness is 50% = (17% + 17% + 17%), hue is red at 50% = (67% - 17%), and saturation is 50% = (50% / 100%). If red is 100%, green is 0%, and blue is 0%, brightness is 100% = (100% + 0% + 0%), whiteness is 0% = (0% + 0% + 0%), hue is red at 100% = (100% - 0%), and saturation is 100% = (100% / 100%).

Assume colors can have brightness 0 to 100. If red is 100, green is 50, and blue is 50, red is 50 = 100 - 50, green is 0 = 50 - 50, blue is 0 = 50 - 50, brightness is 200, whiteness is 150 = 50 + 50 + 50, and hue is pink with red saturation of 25% = 50 / 200. If red is 100, green is 100, and blue is 50, red is 50 = 100 - 50, green is 50 = 100 - 50, blue is 0 = 50 - 50, brightness is 250, whiteness is 150 = 50 + 50 + 50, and hue is yellow with saturation of 40% = (50 + 50) / 250 = 100 / 250. If red is 75, green is 50, and blue is 25, red is 50 = 75 - 25, green is 25 = 50 - 25, blue is 0 = 25 - 25, brightness is 150, whiteness is 75 = 25 + 25 + 25, and hue is orange with saturation of 50% = (50 + 25) / 150 = 75 / 150.
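The whiteness, hue-brightness, and saturation steps above can be collected into one function. A minimal sketch, assuming components range 0 to 100:

```python
# Whiteness, hue brightness, and saturation from primary-color levels,
# following the calculation described in the text.

def analyze(r, g, b):
    """Return (brightness, whiteness, hue brightness, saturation)."""
    lowest = min(r, g, b)
    whiteness = 3 * lowest  # equal parts of all three primaries
    remainders = sorted(c - lowest for c in (r, g, b))
    hue_brightness = remainders[1] + remainders[2]  # sum of the two highest
    brightness = r + g + b
    saturation = hue_brightness / brightness if brightness else 0.0
    return brightness, whiteness, hue_brightness, saturation
```

For the pink example, analyze(100, 50, 50) gives brightness 200, whiteness 150, hue brightness 50, and saturation 0.25.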

1-Consciousness-Sense-Vision-Color Vision-Parameters-Effects

Abney effect

Hue depends on saturation {Abney effect}.

Bezold-Brucke phenomenon

If luminance is enough to stimulate cones, hue changes as luminance changes {Bezold-Brücke phenomenon} {Bezold-Brücke effect}.

Helmholtz-Kohlrausch effect

At constant luminance, brightness depends on both saturation and hue {Helmholtz-Kohlrausch effect}. If hue is constant, brightness increases with saturation. If saturation is constant, brightness changes with hue.

Hunt effect

Saturation increases as luminance increases {Hunt effect}.

1-Consciousness-Sense-Vision-Color Vision-Qualia

absent qualia

Systems that can perform same visual functions that people perform can have no qualia {absent qualia}. Perhaps, machines can duplicate neuron and synapse functions, as in the China-body system [Block, 1980], and so do anything that human visual system can do. Presumably, system physical states and mechanisms, no matter how complex, do not have or need qualia. System has inputs, processes, and outputs. Perhaps, such systems can have qualia, but complexity, large scale, or inability to measure prevents people from knowing.

alien color

Perhaps, a hue {alien color} can exist that is no combination of red, blue, green, or yellow.

Inverted Earth

Planets {Inverted Earth} {inverted qualia} can have complementary colors of Earth things [Block, 1990]. For same things, its people experience complementary color compared to Earth-people color experience. However, Inverted-Earth people call what they see complementary color names rather than Earth color names, because their vocabulary is different. When seeing tree leaves, Inverted-Earth people see magenta and say green.

If Earth people go to Inverted Earth and wear inverting-color lenses, they see same colors as on Earth and call colors same names as on Earth. When seeing tree leaves, they see green and call them green, because they use Earth language.

If Earth people go to Inverted Earth and do not wear inverting-color lenses, they see complementary colors rather than Earth colors and call them Earth names for complementary colors. However, if they stay there, they learn to use Inverted-Earth language and call complementary colors Inverted-Earth names, though phenomena remain unchanged. When seeing tree leaves, they see magenta and say green. Intentions change though objects remain the same. Therefore, phenomena are not representations.

problems

Intentions probably do not change, because situation requires no adaptations. The representation is fundamentally the same.

Perhaps, qualia do change.

inverted spectrum

Perhaps, spectrum can invert, so people see short-wavelength light as red and long-wavelength light as blue {inverted spectrum}. Perhaps, phenomena and experiences can be their opposites without affecting moods, emotions, body sensations, perceptions, cognitions, or behaviors. Subject experiences differently, but applies same functions as other people, so subject reactions and initiations are no different than normal. This can start at birth or change through learning and maturation. Perhaps, behavior and perception differences diminish over time by forgetting or adaptation.

representation and phenomena

Seemingly, for inverted spectrum, representations are the same, but inverted phenomena replace phenomena. Functions or physical states remain identical, but qualia differ. If phenomena involve representations, inverted spectra are not metaphysically possible. If phenomena do not involve representations, inverted spectra are metaphysically possible.

inversion can be impossible

Inverted spectra are not necessarily conceptually possible, because they can lead to internal contradictions. Colors do not have exact inversions, because colors mix differently, so no complete and consistent color inversion is possible.

1-Consciousness-Sense-Vision-Pattern Recognition

pattern recognition

Vision processes can recognize patterns {pattern recognition, vision} {shape perception}.

patterns

Patterns have objects, features, and spatial relations. Patterns can have points, lines, angles, waves, histograms, grids, and geometric figures. Objects have brightness, hue, saturation, size, position, and motion.

patterns: context

Pattern surroundings and/or background have brightness, hue, saturation, shape, size, position, and motion.

patterns: movement

Mind recognizes objects with translation-invariant features more easily if they are moving. People can recognize objects that they see moving behind a pinhole.

patterns: music

Mind recognizes music by rhythm or by intonation differences around main note. People can recognize rhythms and rhythmic groups. People can recognize melodies transformed from another melody. People most easily recognize same melody in another key. People easily recognize melodies that exchange high notes for low. People can recognize melodies in reverse. People sometimes recognize melodies with both reverse and exchange.

factors: attention

Pattern recognition depends on alertness and attention.

factors: memory

Recall easiness varies with attention amount, emotion amount, cue availability, and/or previous-occurrence frequency.

animals

Apes recognize objects using fast multisensory processes and slow single-sense processes. Apes do not transfer learning from one sense to another. Frogs can recognize prey and enemy categories [Lettvin et al., 1959]. Bees can recognize colors, except reds, and do circling and wagging dances, which show food-source angle, direction, distance, and amount.

machines

Machines can find, count, and measure picture object areas; classify object shapes; detect colors and textures; and analyze one image, two stereo images, or image sequences. Recognition algorithms have scale invariance.

process levels

Pattern-recognition processing has three levels. Processing depends on effective inputs and useful outputs {computational level, Marr}. Processing uses functions to go from input to output {algorithmic level, Marr}. Processing machinery performs algorithms {physical level, Marr} [Marr, 1982].

neuron pattern recognition

Neuron dendrite and cell-body synapses contribute different potentials to axon initial region. Input distributions represent patterns, such as geometric figures. Different input-potential combinations can trigger neuron impulse. As in statistical mechanics, because synapse number is high, one input-potential distribution has highest probability. Neurons detect that distribution and no other. Learning and memory change cell and affect distribution detected.

mirror recognition

Children and adults immediately recognize their images in mirrors {mirror recognition}. Chimpanzees, orangutans, bonobos, and two-year-old humans, but not gorillas, baboons, and monkeys, can recognize themselves in mirrors after using mirrors for a time [Gallup, 1970].

species member

Animals and human infants recognize that their images in mirrors are species members, but they do not recognize themselves. Perhaps, they have no mirror-reflection concept.

movements

Pigeons, monkeys, and apes can use mirrors to guide movements. Some apes can touch body spots that they see in mirrors. Chimpanzees, orangutans, bonobos, and two-year-old humans, but not gorillas, baboons, and monkeys, can use mirror reflections to perceive body parts and to direct actions [Gallup, 1970].

theory of mind

Autistic children use mirrors normally but appear to have no theory of mind. Animals have no theory of mind.

Molyneux problem

Will a blind person that knows shapes by touch recognize the shapes if able to see {Molyneux problem}? Testing cataract patients after surgery has not yet resolved this question.

1-Consciousness-Sense-Vision-Pattern Recognition-Methods

pattern recognition methods

Brain has mechanisms to recognize patterns {pattern recognition, methods} {pattern recognition, mechanisms}.

mechanism: association

The first and main pattern-recognition mechanism is association (associative learning). Complex recognition uses multiple associations.

mechanism: feature recognition

Object or event classification involves high-level feature recognition, not direct object or event identification. Brain extracts features and feeds forward to make hypotheses and classifications. For example, people can recognize meaningful facial expressions and other complex perceptions in simple drawings that have key features [Carr and England, 1995].

mechanism: symbol recognition

To recognize letters, on all four sides, check for point, line, corner, convex curve, W or M shape, or S or squiggle shape. 6^4 = 1296 combinations are available. Letters, numbers, and symbols add to less than 130, so symbol recognition is robust [Pao and Ernst, 1982].
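
The counting argument above can be checked directly. The six feature names follow the text; the encoding itself is an illustrative sketch.

```python
# Each of a symbol's four sides receives one of six feature codes (the
# feature list follows the text; the encoding itself is illustrative).
FEATURES = ["point", "line", "corner", "convex curve", "W/M shape", "S/squiggle"]

combinations = len(FEATURES) ** 4   # 6^4 = 1296 possible four-side codes
symbols_in_use = 130                # letters, numbers, and symbols (upper bound)
print(combinations, combinations / symbols_in_use)  # roughly 10x spare capacity
```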

mechanism: templates

Templates have non-accidental and signal properties that define object classes. Categories have rules or criteria. Vision uses structural descriptions to recognize patterns. Brains compare input patterns to templates using constraint satisfaction on rules or criteria and then select best-fitting match, by score. If input activates one representation strongly and inhibits others, representation sends feedback to visual buffer, which then augments input image and modifies or completes input image by altering size, location, or orientation. If representation and image then match even better, mind recognizes object. If not, mind inhibits or ranks that representation and activates next representation.

mechanism: viewpoint

Vision can reconstruct how object appears from any viewpoint using a minimum of two, and a maximum of six, different-viewpoint images. Vision calculates object positions and motions from three views of four non-coplanar points. To recognize objects, vision interpolates between stored representations. Mind recognizes symmetric objects better than asymmetric objects from new viewpoints. Recognition fails for unusual viewpoints.

importance: frequency

For recognition, frequency is more important than recency.

importance: orientation

Recognition processing ignores left-right orientation.

importance: parts

For recognition, parts are more important for nearby objects.

importance: recency

For recognition, recency is less important than frequency.

importance: size

Recognition processing ignores size.

importance: spatial organization

For recognition, spatial organization and overall pattern are more important than parts.

method: averaging

Averaging removes noise by emphasizing low frequencies and minimizing high frequencies.
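
A minimal sketch of the averaging idea: a moving average over a short window cancels alternating high-frequency noise while preserving slow trends. Window size and signal values are illustrative.

```python
def smooth(signal, k=3):
    """Moving average over a k-sample window: high-frequency noise cancels
    across the window, while slowly varying (low-frequency) content survives."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

noisy = [10, 30, 10, 30, 10, 30]  # highest-frequency (alternating) component
print(smooth(noisy))              # interior values pull toward 20: noise attenuated
```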

method: basis functions

HBF or RBF basis functions can separate scene into multiple dimensions.

method: cluster analysis

Pattern recognition can place classes or subsets in clusters in abstract space.

method: feature deconvolution

Cerebral cortex can separate feature from feature mixture.

method: differentiation

Differentiation subtracts second derivative from intensity and emphasizes high frequencies.
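
The operation described above, subtracting the second derivative from intensity, can be sketched in one dimension with a discrete Laplacian. The step profile is an illustrative example.

```python
def sharpen(intensity):
    """Subtract the discrete second derivative (Laplacian) from a 1-D
    intensity profile: out[i] = I[i] - (I[i-1] - 2*I[i] + I[i+1]).
    Flat regions pass through unchanged; edges gain overshoot."""
    out = list(intensity)
    for i in range(1, len(intensity) - 1):
        second = intensity[i - 1] - 2 * intensity[i] + intensity[i + 1]
        out[i] = intensity[i] - second
    return out

step = [10, 10, 10, 50, 50, 50]  # a brightness step (an edge)
print(sharpen(step))             # [10, 10, -30, 90, 50, 50]: edge exaggerated
```

The overshoot and undershoot on either side of the step emphasize high frequencies, much like Mach bands at a boundary.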

method: generalization

Vision generalizes patterns by eliminating one dimension, using one subpattern, or including outer domains.

method: index number

Patterns can have algorithm-generated unique, unambiguous, and meaningful index numbers. Running reverse algorithm generates pattern from index number. Similar patterns have similar index numbers. Patterns differing by subpattern have index numbers that differ only by ratio or difference. Index numbers have information about shape, parts, and relations, not about size, distance, orientation, incident brightness, incident light color, and viewing angle.

Index numbers can be power series. Term coefficients are weights. Term sums are typically unique numbers. For patterns with many points, index number is large, because information is high.

Patterns have a unique point, like gravity center. Pattern points have unique distances from unique point. Power-series terms are for pattern points. Term sums are typically unique numbers that depend only on coordinates internal to pattern. Patterns differing by subpattern differ by ratio or difference.
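
A toy version of such an index, assuming the unique point is the centroid and using an arbitrary power-series base; this is an illustrative sketch, not the text's exact algorithm, and it is translation-invariant but not scale-invariant.

```python
import math

def pattern_index(points, base=0.5):
    """Toy index number: take each point's distance from the centroid (a
    unique internal point), sort the distances, and sum them as a power
    series d0 + d1*base + d2*base**2 + ...  Because only internal geometry
    enters, the index ignores position; similar patterns get similar values."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    dists = sorted(math.hypot(x - cx, y - cy) for x, y in points)
    return sum(d * base ** k for k, d in enumerate(dists))

square = [(0, 0), (0, 2), (2, 0), (2, 2)]
shifted = [(x + 5, y - 3) for x, y in square]
print(pattern_index(square) == pattern_index(shifted))  # True: position-independent
```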

method: lines

Pattern recognition uses shortest line, extends line, or links lines.

method: intensity

Pattern recognition uses gray-level changes, not colors. Motion detection uses gray-level and pattern changes.

method: invariance

Features can remain invariant as images deform or move. Holding all variables except one constant gives the partial derivative with respect to the varying variable, so partial differentials can measure changes or differences and find invariants.

method: line orientation

Secondary visual cortex neurons can detect line orientation, have large receptive fields, and have variable topographic mapping.

method: linking

Vision can connect pieces in sequence and fill gaps.

method: optimization

Vision can use dynamic programming to optimize parameters.

method: orientation

Vision accurately knows surface tilt and slant, directly, by tilt angle itself, not by angle function [Bhalla and Proffitt, 1999] [Proffitt et al., 1995].

method: probability

Brain uses statistics to assign probability to patterns recognized.

method: registers

Brain-register network can store pattern information, and brain-register network series can store processes and pattern changes.

method: search

Matching can use heuristic search to find feature or path. Low-resolution search over whole image looks for matches to feature templates.

method: separation into parts

Vision can separate scene into additive parts, by boundaries, rather than using basis functions.

method: sketching

Vision uses contrast for boundary making.

instructionism in recognition

To recognize structure, brain can use information about that structure {instructionism, recognition}.

selectionism recognition

To recognize structure, brain can compare to multiple variations and select best match {selectionism, recognition}, just as cells try many antibodies to bind antigen.

detection threshold

To identify objects, algorithms can test patterns against feature sets. If patterns have features, algorithms add distinctiveness weight to object distinctiveness-weight sum. If object has sum greater than threshold {detection threshold} {threshold of detection}, algorithm identifies pattern as object. Context sets detection threshold.

distinctiveness weight

In recognition algorithms, object features can have weights {distinctiveness weight}, based on how well feature distinguishes object from other objects. Algorithm designers use feature-vs.-weight tables or automatically build tables using experiences.
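
The two entries above can be combined in a small sketch: features carry distinctiveness weights, the weights of observed features are summed, and the sum is compared to a context-set detection threshold. The feature names and weight values are made up for illustration.

```python
# Hypothetical feature-vs-weight table for recognizing a cup: weights score
# how well each feature distinguishes cups (names and values are made up).
CUP_WEIGHTS = {"handle": 6, "concave_top": 5, "cylindrical": 3, "red": 1}

def detect(observed, weights, threshold):
    """Sum distinctiveness weights of features present; identify the object
    if the sum exceeds the context-set detection threshold."""
    score = sum(w for feature, w in weights.items() if feature in observed)
    return score, score > threshold

print(detect({"handle", "cylindrical"}, CUP_WEIGHTS, threshold=8))  # (9, True)
print(detect({"red"}, CUP_WEIGHTS, threshold=8))                    # (1, False)
```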

edge detection

Sharp brightness or hue difference indicates edge or line {edge detection}. Point clustering indicates edges. Vision uses edge information to make object boundaries and adds information about boundary positions, shapes, directions, and noise. Neuron assemblies have different spatial scales to detect different-size edges and lines. Tracking and linking connect detected edges.
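
The core of edge detection, a sharp brightness difference between neighbors, can be sketched on a single scan line; the pixel values and threshold are illustrative.

```python
def edges(row, threshold=30):
    """Mark an edge wherever adjacent pixels differ sharply in brightness."""
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) > threshold]

scanline = [12, 14, 13, 90, 92, 91, 40, 41]  # three brightness regions
print(edges(scanline))  # [3, 6]: the two region boundaries
```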

Gabor transform

Algorithms {Gabor transform} {Gabor filter} can make series whose terms represent independent visual features, have constant amplitude, and have functions; term sums give the series [Palmer et al., 1991]. Visual-cortex complex cells act like Gabor filters with power series, whose terms have variables raised to powers. Complex-cell types are for specific surface orientations and object sizes. Gabor-filter complex cells typically make errors for edge gaps, small textures, blurs, and shadows.

histogram density estimate

Non-parametric algorithms {histogram density estimate} can calculate density. Algorithm tests various cell sizes by nearest-neighbor method or kernel method. Density is average volume per point.
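
A minimal sketch of a one-dimensional histogram density estimate, written in the conventional count-per-volume form so that the estimate integrates to one; the data and cell size are illustrative.

```python
def histogram_density(samples, cell_size, lo, hi):
    """Histogram density estimate: partition [lo, hi) into equal cells,
    count samples per cell, and normalize by n * cell size so the estimate
    integrates to 1. Trying several cell sizes (as with nearest-neighbor or
    kernel methods) trades resolution against noise."""
    n_cells = round((hi - lo) / cell_size)
    counts = [0] * n_cells
    for s in samples:
        counts[min(int((s - lo) / cell_size), n_cells - 1)] += 1
    n = len(samples)
    return [c / (n * cell_size) for c in counts]

data = [0.1, 0.2, 0.25, 0.3, 0.8]
print(histogram_density(data, cell_size=0.5, lo=0.0, hi=1.0))  # [1.6, 0.4]
```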

image segmentation

Using Bayesian theory, algorithms {image segmentation} can extend edges to segment image and surround scene regions.

kernel method

Algorithms {kernel method} can test various cell sizes, to see how small volume must be to have only one point.

linear discriminant function

Algorithms {linear discriminant function} (Fisher) can find abstract-space hypersurface boundary between space regions (classes), using region averages and covariances.
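
A pure-Python sketch of Fisher's discriminant in two dimensions, using the class means and pooled within-class scatter the entry mentions; the sample points are illustrative.

```python
def fisher_direction(class_a, class_b):
    """Fisher's linear discriminant in 2-D: the projection direction is
    w = S_w^{-1} (mean_a - mean_b), where S_w is the pooled within-class
    scatter matrix (built from each class's covariance structure)."""
    def mean(pts):
        return [sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts)]
    def scatter(pts, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in (0, 1):
                for j in (0, 1):
                    s[i][j] += d[i] * d[j]
        return s
    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in (0, 1)] for i in (0, 1)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [(sw[1][1] * dm[0] - sw[0][1] * dm[1]) / det,
            (-sw[1][0] * dm[0] + sw[0][0] * dm[1]) / det]

a = [(1.0, 1.0), (2.0, 1.5), (1.5, 2.0)]
b = [(5.0, 5.0), (6.0, 5.5), (5.5, 6.0)]
w = fisher_direction(a, b)
# Projections w[0]*x + w[1]*y separate the two classes completely here.
```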

memory-based model

Algorithms {memory-based models} (MBM) can match input-pattern components to template-pattern components, using weighted sums, to find highest scoring template. Scores are proportional to similarity. Memory-based models uniquely label component differences. Memory-based recognition, sparse-population coding, generalized radial-basis-function (RBF) networks, and hyper-basis-function (HBF) networks are similar algorithms.

mental rotation

Vision can manipulate images to see if two shapes correspond. Vision can zoom, rotate, stretch, color, and split images {mental rotation} [Shepard and Metzler, 1971] [Shepard and Cooper, 1982].

high level

Images transform by high-level perceptual and motor processing, not sense-level processing. Image movements follow abstract-space trajectories or proposition sequence.

motor cortex

Motor processes transform visual mental images, because spatial representations are under motor control [Shiekh, 1983].

time

People require more time to perform mental rotations that are physically awkward. Vision compares aligned images faster than translated, rotated, or inverted images.

nearest neighbor method

Algorithms {nearest neighbor method} can test various cell sizes to see how many points (nearest neighbor) are in cells.

pattern matching

Algorithms {pattern matching} can try to match two network representations by two parallel searches, starting from each representation. Searches look for similar features, components, or relations. When both searches meet, they excite the intermediate point (not necessarily simultaneously), whose signals indicate matching.

pattern theory

Algorithms {pattern theory} can use feedforward and feedback processes and relaxation methods to move from input pattern toward memory pattern. Algorithm uses probabilities, fuzzy sets, and population coding, not formal logic.

receiver operating characteristics

For algorithms or observers, graphs {receiver operating characteristics} (ROC) can show true identification-hit rate versus false-hit rate. If correlation line is 45-degree-angle straight line, observer has as many false hits as true hits. If correlation line has steep slope, observer has mostly true hits and few false hits. If correlation line has maximum slope, observer has zero false hits and all true hits.
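
One point on such a graph can be computed directly as a hit rate and a false-hit rate at a given criterion; the detector scores below are made up for illustration.

```python
def roc_point(signal_scores, noise_scores, criterion):
    """One ROC point: true-hit rate (signal trials above criterion) versus
    false-hit rate (noise trials above criterion)."""
    hits = sum(s > criterion for s in signal_scores) / len(signal_scores)
    false_hits = sum(s > criterion for s in noise_scores) / len(noise_scores)
    return hits, false_hits

signal = [0.9, 0.8, 0.7, 0.6]  # detector output on signal-present trials
noise = [0.5, 0.4, 0.3, 0.2]   # detector output on noise-only trials
print(roc_point(signal, noise, 0.55))  # (1.0, 0.0): all true hits, no false hits
print(roc_point(signal, signal, 0.7))  # equal rates: the 45-degree chance line
```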

region analysis

Vision finds, separates, and labels visual areas by enlarging spatial features or partitioning scenes {region analysis}.

expanding

Progressive entrainment of larger and larger cell populations builds regions using synchronized firing. Regions form by clustering features, smoothing differences, relaxing/optimizing, and extending lines using edge information.

splitting

Regions can form by splitting spatial features or scenes. Parallel circuits break large domains into similar-texture subdomains for texture analysis. Parallel circuits find edge ends by edge interruptions.

relational matching

For feature detection, brain can use classifying context or constrain classification {relational matching}.

response bias

Algorithms {response bias} can use recognition criteria iteratively set by receiver operability curve.

segmentation problem

Vision separates scene features into belonging to object and not belonging {segmentation problem}|. Large-scale analysis comes first, then local constraints apply. Context hierarchically divides image into non-interacting parts.

shading for shape

If brain knows reflectance and illumination, shading {shading}| can reveal shape. Line and edge detectors can find shape from shading.

shape from motion

Motion change and retinal disparity are equivalent perceptual problems, so finding distance from retinal disparity and finding shape from motion changes {shape from motion} use equivalent techniques.

signal detection theory

Algorithms {signal detection theory} can find patterns in noisy backgrounds. Patterns have stronger signal strength than noise. Detectors have sensitivity and response criteria.

vertex perception

Vision can label vertices as three-intersecting-line combinations {vertex perception}. Intersections can be convex or concave, to right or to left.

1-Consciousness-Sense-Vision-Pattern Recognition-Methods-Systems

production system

Classification algorithms {production system} can use IF/THEN rules on input to conditionally branch to one feature or object. Production systems have three parts: fact database, production rule, and rule-choosing control algorithm.

database

Fact-database entries code for one state {local representation, database}, allowing memory.

rules

Production rules have form "IF State A, THEN Process N". Rules with same IF clause have one precedence order.

controller

Controller checks all rules, performing steps in sequence {serial processing}. For example, if system is in State A and rule starts "IF State A", then controller performs Process N, which uses fact-database data.

states

Discrete systems have state spaces whose axes represent parameters, with possible values. System starts with initial-state parameter settings and moves from state to state, along a trajectory, as controller applies rules.
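
The fact database, production rules, and controller described above can be sketched in a few lines; the states, rule contents, and processes are made-up examples.

```python
# Minimal production system: a fact database, IF/THEN production rules, and
# a controller that checks rules in precedence order and fires the first
# match (serial processing). States and processes are illustrative.
facts = {"state": "A", "count": 0}

def process_n(db):   # "IF State A, THEN Process N"
    db["count"] += 1
    db["state"] = "B"

def process_m(db):   # "IF State B, THEN Process M"
    db["state"] = "halt"

rules = [("A", process_n), ("B", process_m)]  # list order = precedence

while facts["state"] != "halt":
    for condition, action in rules:       # controller checks all rules
        if facts["state"] == condition:   # IF clause matches current state
            action(facts)                 # THEN run the process on the database
            break

print(facts)  # {'state': 'halt', 'count': 1}
```

The system traces a trajectory through its state space (A, then B, then halt) as the controller applies rules to the fact database.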

production rule

Production systems have rules {production rule} for moving from one state to the next. Production rules have form "IF State A, THEN Process N". Rules with same IF clause have one precedence order.

ACT production system

Parallel pattern-recognition mechanisms can fire whenever they detect patterns {ACT production system}. Firing puts new data elements in working memory.

Data Refractoriness

Same production can match same data only once {Data Refractoriness production system}.

Degree of Match

Production with best-matched IF-clause can have priority {Degree of Match production system}.

Goal Dominance

Goals are productions put into working memory. Only one goal can be active at a time {Goal Dominance}, so productions whose output matches active goal have priority.

Production Strength

Recently successful productions can have higher strength {Production Strength production system}.

Soar production system

Parallel pattern-recognition mechanisms can fire whenever they detect particular patterns {Soar production system}. Firing puts new data elements in working memory.

Specificity production system

If two productions match same data, production with more-specific IF-clause wins {Specificity production system}.

1-Consciousness-Sense-Vision-Pattern Recognition-Representation

explicit representation

Neuron assemblies can hold essential knowledge about patterns {explicit representation}, using information not in implicit representation. Mind calculates explicit representation from implicit representation, using feature extraction or neural networks [Kobatake et al., 1998] [Logothetis and Pauls, 1995] [Logothetis et al., 1994] [Sheinberg and Logothetis, 2001].

implicit representation

Neuron or pixel sets can hold object image {implicit representation}, with no higher-level knowledge. Implicit representation samples intensities at positions at times, like bitmaps [Kobatake et al., 1998] [Logothetis and Pauls, 1995] [Logothetis et al., 1994] [Sheinberg and Logothetis, 2001].

generalized cone

Algorithms {generalized cone} can describe three-dimensional objects as conical shapes, with axis length/orientation and circle radius/orientation. Main and subsidiary cones can be solid, hollow, inverted, asymmetric, or symmetric. Cone surfaces have patterns and textures [Marr, 1982]. Cone descriptions can use three-dimensional Fourier spherical harmonics, which have volumes, centroids, inertia moments, and inertia products.

generalized cylinder

Algorithms {generalized cylinder} can describe three-dimensional objects as cylindrical shapes, with axis length/orientation and circle radius/orientation. Main and subsidiary cylinders can be solid, hollow, inverted, asymmetric, or symmetric. Cylindrical surfaces have patterns and textures. Cylinder descriptions can use three-dimensional Fourier spherical harmonics, which have volumes, centroids, inertia moments, and inertia products.

structural description

Representations can describe object parts and spatial relations {structural description}. Structure units can be three-dimensional generalized cylinders (Marr), three-dimensional geons (Biederman), or three-dimensional curved solids {superquadratics} (Pentland). Structural descriptions are only good for simple recognition {entry level recognition}, not for superstructures or substructures. Vision uses viewpoint-dependent recognition, not structural descriptions.

template

Shape representations {template} can hold information for mechanisms to use to replicate or recognize {template theory} {naive template theory}. Template is like memory, and mechanism is like recall. Template can be coded units, shape, image, model, prototype, or pattern. Artificial templates include clay or wax molds. Natural templates are DNA/RNA. Templates can be abstract-space vectors. Using templates requires templates for all viewpoints, and so many templates.

vector coding

Representations {vector coding} can be sense-receptor intensity patterns and/or brain-structure neuron outputs, which make feature vectors. Vector coding can identify rigid objects in Euclidean space. Vision uses non-metric projective geometry to find invariances by vector analysis [Staudt, 1847] [Veblen and Young, 1918]. Motor-representation middle and lower levels use code that indicates direction and amount.

1-Consciousness-Sense-Vision-Pattern Recognition-Scene

scene of vision

The feeling of seeing whole scene {scene, vision} {vision, scene} results from maintaining general scene sense in semantic memory, attending repeatedly to scene objects, and forming object patterns. Vision experiences whole scene (perceptual field), not just isolated points, features, surfaces, or objects. Perceptual field provides background and context, which can identify objects and events.

scale

Scenes have different spatial frequencies in different directions and distances. Scenes can have low spatial frequency and seem open. Low-spatial-frequency scenes have more depth, less expansiveness, and less roughness, and are more typical of natural settings. Scenes can have high spatial frequency and seem closed. High-spatial-frequency scenes have less depth, more expansiveness, and more roughness, and are more typical of towns.

set size

Scenes have numbers of objects {set size, scene}.

spatial layout

Scenes have patterns or structures of object and object-property placeholders {spatial layout}, such as smooth texture, rough texture, enclosed space, and open space. In spatial layouts, object and property meanings do not matter, only placeholder pattern. Objects and properties can fill object and object property placeholders to supply meaning. Objects have spatial positions, and relations to other objects, that depend on spacing and order. Spatial relations include object and part separations, feature and part conjunctions, movement and orientation directions, and object resolution.

visual unit

Scenes have homogeneous color and texture regions {visual unit}.

1-Consciousness-Sense-Vision-Pattern Recognition-Shape

shape

Vision can recognize geometric features {shape, pattern} {pattern, features}.

lines

Shapes have lines, line orientations, and edges. Contour outlines indicate objects and enhance brightness and contrast. Irregular contours and hatching indicate movement. Contrast enhances contours, for example with Mach bands. Contrast differences divide large surfaces into parts.

axes

Shapes have natural position axes, such as vertical and horizontal, and natural shape axes, such as long axis and short axis. Vision uses horizontal, vertical, and radial axes for structure and composition.

relations

Objects are wholes and have parts. Wholes are part integrations or configurations and are about gist. Parts are standard features and are about details.

surfaces

Shape has surfaces, with surface curvatures, orientations, and vertices. Visual system can label lines and surfaces as convex, concave, or overlapping [Grunewald et al., 2002]. Shapes have shape-density functions, with projections onto axes or chords [Grunewald et al., 2002]. Shapes have distances and natural metrics, such as lines between points.

illuminance

Shapes have illuminance and reflectance.

area eccentricity

Shapes have axis and chord ratios {area eccentricity} [Grunewald et al., 2002].

compactness of shape

Shapes have perimeter squared divided by area {compactness, shape} [Grunewald et al., 2002].
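
The definition above is easy to compute; the circle gives the minimum value, 4π, and less compact shapes score higher.

```python
import math

def compactness(perimeter, area):
    """Compactness as defined in the text: perimeter squared over area.
    The circle minimizes it at 4*pi; less compact shapes score higher."""
    return perimeter ** 2 / area

print(compactness(2 * math.pi, math.pi))  # unit circle: 4*pi, about 12.566
print(compactness(4.0, 1.0))              # unit square: 16.0
```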

concavity tree

Shapes have minimum chain-code sequences that make shape classes {concavity tree}, which have maximum and minimum concavity-shape numbers [Grunewald et al., 2002].

Euler number for shape

Shapes have connectedness {Euler number, shape} [Grunewald et al., 2002].

1-Consciousness-Sense-Vision-Pattern Recognition-Type

explicit recognition

Pattern recognition can use conscious memory {explicit recognition} [McDougall, 1911] [McDougall, 1923].

implicit recognition

Pattern recognition can be automatic {implicit recognition} [McDougall, 1911] [McDougall, 1923], like reflexes.

1-Consciousness-Sense-Vision-Gestalt

gestalt laws

Figures have three-dimensional representations or forms {gestalt}| built innately by vision, by analyzing stimulus interactions. Gestalt needs no learning.

Gestalt law

Finding stimulus relations or applying organizational laws {insight, Gestalt} allows recognizing figures, solving problems, and performing similar mental tasks. Related gestalt laws can conflict, and they have different relative strengths at different times. Grouping laws depend on figure-ground relationship, proximity, similarity, continuity, closure, connectedness, and context [Ehrenfels, 1891]. Laws {gestalt law} {grouping rule} {Gestalt grouping rule} can replace less-organized patterns with emphasized, complete, or adequate patterns. Gestalt laws are minimizations. Gestalt laws are assumptions about which visual-field parts are most likely to belong to which object.

figure in Gestalt

Perception must separate object {figure, Gestalt} from background, using Gestalt laws [Ehrenfels, 1891]. Regions with one color are figures. Many-colored regions are ground. Smaller region is figure, and nearby larger region is ground.

Edges separate figure and ground. Lateral inhibition distinguishes and sharpens boundaries.

Both figure and ground are homogeneous regions. Surfaces recruit neighboring similar surfaces to expand homogeneous regions by wave entrainment.

Vision separates figure and ground by detecting edges and increasing homogeneous regions, using constraint satisfaction [Crane, 1992].

ground in Gestalt

Perception must separate object figure from background {ground, Gestalt}, using Gestalt laws [Ehrenfels, 1891].

Prägnanz

Vision finds simplest possible percept, which has internal consistency and regularity {Prägnanz} [Ehrenfels, 1891].

closure law

Vision tends to perceive incomplete or occluded figures as wholes {closure law} {law of closure}. Closed contour indicates figure [Ehrenfels, 1891].

common fate

Vision groups features doing same thing {common fate}, such as moving in same direction or moving away from point [Ehrenfels, 1891].

connectedness law

Vision groups two features that touch or that happen at same time {connectedness, Gestalt} {law of connectedness} [Ehrenfels, 1891].

enclosedness law

Vision tends to perceive enclosed region as figure {enclosedness} {law of enclosedness} {surroundedness, Gestalt}. Surrounded region is figure, and surrounding region is ground [Ehrenfels, 1891].

good continuation

Vision perceives organization that interrupts fewest lines or that lies on one contour {good continuation} {law of good continuation}. Smooth lines, with no sharp angles, are figure parts. Regions with fewer continuous lines, fewer angles, and fewer angle differences are figures [Ehrenfels, 1891]. For example, the good-continuation law reflects probability that aligned edges belong to same object.

parallelism law

Vision groups two parallel contours {parallelism, Gestalt}. Region parallel contours are figure parts, and non-parallel contours are ground parts [Ehrenfels, 1891]. Parallel contours often reflect surfaces' periodic structure.

proximity law

Adjacent features are figure parts {proximity, Gestalt} {law of proximity} [Ehrenfels, 1891].

segregation law

Vision finds image boundaries, to make perceptual regions, by angles, lines, and distances {segregation, Gestalt} {law of segregation, Gestalt} {differentiation, Gestalt} {law of differentiation} [Ehrenfels, 1891].

similarity law

Similar shape, color, and size parts go together {similarity, Gestalt} {law of similarity} [Ehrenfels, 1891].

symmetry law

Vision groups symmetrical contours {symmetry, Gestalt}. Symmetrical region is figure, and asymmetrical region is ground. Symmetrical closed region is figure [Ehrenfels, 1891].

synchrony law

Vision groups features that change simultaneously {synchrony, Gestalt}, even if features move in different directions and/or at different speeds [Ehrenfels, 1891].

1-Consciousness-Sense-Vision-Illusions

illusion

Illusions {illusion} are perceptions that differ from actual metric measurements. Brain uses rules to interpret sense signals, but rules can have contradictions or ambiguities. Vision sees bent lines, shifted lines, different lengths, or different areas, rather than line or area physical properties. Visual illusions are typically depth-perception errors [Frisby, 1979] [Gregory, 1972] [Heydt et al., 1984] [Kanizsa, 1979] [Peterhans and Heydt, 1991].

perception

Illusion, hallucination, and perception sense qualities do not differ. Mind typically does not notice illusions.

neural channels

Illusory edges and surfaces appear, because neural channels differ for movement and position. See Figure 1 and Figure 2.

contrast illusions

Contrast can cause illusions. Adelson illusion has grid of lighter and darker squares, making same-gray squares look different. Craik-O'Brien-Cornsweet illusion has lighter rectangle beside darker rectangle, making contrast enhancement at boundary. Mach bands have boundaries with enhanced contrast. Simultaneous brightness contrast illusions have same-gray squares in white or black backgrounds, looking like different grays. White's illusion has black vertical bars with same-gray rectangles behind bars and adjacently and translucently in front of bars, looking like different grays.

color illusions

Color can cause color-contrast illusions and color and brightness illusions. Assimilation illusions have background effects that group same color points differently. Fading dot illusion has a green disk with blue dot in center, which fades with continued looking. Munker illusion has blue vertical bars with same-color rectangle behind bars or adjacently and translucently in front of bars, looking like different colors. Neon disk has an asterisk with half-white and half-red bars, which spins. Stroop effect has the word green printed in red and the word red printed in green.

geometric illusions

Geometry causes Ebbinghaus illusion, Müller-Lyer illusion, Ponzo illusion, and Zöllner illusion. Café-wall illusion has a vertically irregularly spaced black squares and white squares grid, making horizontal lines appear tilted. Distorted squares illusion has squares in concentric circles, making tilted lines. Ehrenstein illusion has radial lines with circle below center and square above center, making circle and square lines change alignment. Fraser spiral has concentric circles that look like a spiral in a spiraling background. Men with sunglasses illusion (Akiyoshi Kitaoka) has alternating color-square grid with two alternating vertical or horizontal dots at corners, making vertical and horizontal lines tilted. Midorigame or green turtle (Akiyoshi Kitaoka) has a grid with slightly tilted squares in one direction and a center grid with squares slightly tilted in other direction, making vertical and horizontal lines tilted. Poggendorff illusion has two vertical lines with a diagonal line that passes behind the space between them, making the diagonal's two segments look misaligned, even when a dotted guide line on one side shows that they align.

size and depth illusions

Size and depth illusions are Ames room (Adelbert Ames), corridor illusion, impossible staircase (Lionel Penrose and Roger Penrose), impossible triangle (Roger Penrose), impossible waterfall (Maurits C. Escher), Necker cube, size distortion illusion, and trapezoidal window (Adelbert Ames).

figure illusions

Imagined lines can cause illusions. Illusory circle has a small space between horizontal and vertical lines that do not meet, making a small circle. Illusory triangle has solid figures with cutouts that make angles in needed directions, which appear as corners of triangles with complete sides. Illusory square has solid figures with cutouts that make angles in needed directions, which appear as corners of squares with complete sides.

ambiguous figures

Ambiguous figures are Eskimo-little girl seen from back, father-son, rabbit-duck, skull-two dancers, young woman and hag, and vase-goblet.

unstable figures

Figures can have features that randomly appear and disappear. Hermann's grid has horizontal and vertical lines with gaps at intersections, where dark disks appear and disappear. Rotating spiral snakes (Akiyoshi Kitaoka) have spirals, which make faint opposite spirals appear to rotate. Thatcher illusion (Peter Thompson) has eyes and mouth flipped upside down within a face, looking grotesque upright but nearly normal when the whole face is inverted.

alternating illusions

Illusions with two forms show perceptual dominance or are bistable illusions. Vase-and-face illusion switches between alternatives.

Hering illusion

Radial rays, with two horizontal lines, make illusions. See Figure 4.

music

Music can cause illusions.

Necker cube

Wire cube at angle makes illusions. See Figure 3.

Ponzo illusion

If railroad tracks and ties lead into distance, and two horizontal bars, even with different colors, are at different distances, farther bar appears longer (Mario Ponzo) [1913]. See Figure 7. See Figure 8 for modified Ponzo illusions. See Figure 9 for split Ponzo illusions. Perhaps, line tilt, rather than depth perception, causes Ponzo illusion.

Rubin vase

Central vase has profiles that are symmetrical faces (Edgar Rubin). See Figure 5.

Zollner illusion

Vertical lines have equally spaced parallel line segments at 45-degree angles. See Figure 6.

aftereffect

After concentrating on object and then looking at another object, sense qualities opposite to, or shifted away from, original appear {aftereffect}| (CAE) [Blake, 1998] [Blake and Fox, 1974] [Dragoi et al., 2000] [He et al., 1996] [He et al., 1998] [He and MacLeod, 2001] [Koch and Tootell, 1996] [Montaser-Kouhsari et al., 2004].

afterimage

After observing bright light or image with steady gaze, image can persist {afterimage} [Hofstötter et al., 2003]. For one second, afterimage is the same as positive image. Then afterimage has opposite color or brightness {negative afterimage}. Against white ceiling, afterimage appears black. Colored images have complementary-color afterimages. Intensity is the same as image {positive afterimage} if eyes close or if gaze shifts to black background. Afterimage size, shape, brightness, and location can change {figural aftereffect}.

brain

Perhaps, CAEs reflect brain self-calibration. Orientation-specific adaptation is in area V1 or V2.

curves

Aftereffects also appear after prolonged stimulation by curved lines. Distortions associated with converging lines do not change with different brightness or line thickness.

gratings

Horizontal and vertical gratings cause opposite aftereffect {orientation-dependent aftereffect}, even if not perceived.

movement

Background can seem to move after observer stops moving {motion aftereffect, vision}.

stripes

Alternating patterns and prolonged sense stimulation can cause distortions that depend on adapting-field and test-field stripe orientations {contingent perceptual aftereffect}.

theory

Aftereffects appear because sense channels for processing color and orientation overlap {built-in theory} or because separate mechanisms for processing color and orientation overlap during adaptation period {built-up theory}.

tilt

After observing a pattern at an orientation, mind sees vertical lines tilt in opposite direction {tilt aftereffect}.

time

CAEs do not necessarily decay during sleep and can last for days.

bistable illusion

Illusions can have two forms. Illusions {bistable illusion} like Necker cube have two forms almost equal in perceptual dominance.

cafe wall illusion

Size, length, and curvature line or edge distortions can make illusions {cafe wall illusion}.

camera lucida

Illusions {Pepper's ghost} {stage ghost} {camera lucida} can depend on brightness differences. Part-reflecting mirrors can superimpose images on objects that people see through glass. Brightening one image while dimming the other makes one appear as the other disappears. If equally illuminated, both images superimpose and are transparent.

color contrast effect

Gray patches surrounded by blue are slightly yellow {color contrast effect}. Black is not as black near blue or violet.

color scission

Mind can perceive transparency when observing different-color split surfaces {color scission}.

color stereo effect

Blue and green appear closer {color stereo effect}. Red appears farther away.

conjunction error

When two objects have interchangeable features, and time or attention is short, mind can switch features to wrong object {conjunction error}.

cutaneous rabbit

Experimenter taps sharp pencil five times on wrist, three times on elbow, and two times on upper arm, while subject is not looking {cutaneous rabbit}. It feels like equal steps up arm [Geldard and Sherrick, 1972].

empty suitcase effect

Minds perceive darker objects as heavier than lighter-colored objects of the same weight {empty suitcase effect}.

flying squirrel illusion

Light and dark checkerboards can have light-color dots at central dark-square corners, making curved square sides and curved lines along square edges {flying squirrel illusion}, though lines are really straight (Kitaoka).

ghost

Illusory people perceptions {ghost} can be partially transparent and speak.

Hering illusion

Radial rays with two horizontal lines can make illusions {Hering illusion}.

Hermann grid

Black squares in an array with rows and columns of spaces {Hermann grid} can appear to have gray circles in white spaces where four corners meet.

irradiation illusion

Lighter areas have apparently greater size than same-size darker areas {irradiation, perception}.

McCollough effect

Orientation-specific color aftereffects can appear without perception {McCollough effect}. McCollough effect does not transfer from one eye to the other.

Moon illusion

Moon or Sun apparent size varies directly with nearness to horizon {Moon illusion}, until sufficiently above horizon. On horizon, Moon is redder, hazier, lower contrast, and fuzzier edged and has different texture. All these factors affect perceived distance.

elevation

Horizon Moon dominates and elevates scene, but scene seems lower when Moon is higher in sky.

distance

Horizon Moon, blue or black sky, and horizon are apparently at same place. Risen Moon appears in front of black night sky or blue day sky, because it covers blue or black and there is no apparent horizon.

topographic map

Moon illusion and other perspective illusions cause visual-brain topographic image to enlarge or shrink, whereas retinal image is the same.

perceptual dominance

Illusions can have two forms, and people see mostly one {perceptual dominance}, then other.

Purkinje shift

In dark, blues seem brighter than reds {Purkinje shift}. In day, reds seem brighter than blues.

radial lines illusion

Line segments radiating from central imaginary circle {radial lines illusion} make center circle appear brighter. If center circle is black, it looks like background. If center circle has color, it appears brighter and raised {anomalous brightness}. If center circle is gray disk, it appears gray but shimmers {scintillating luster}. If center circle has color and background is black, center circle appears blacker {anomalous darkness}. If center circle has color and gray disk, center circle shimmers gray with complementary color {flashing anomalous color contrast}.

rod and frame illusion

A vertical line segment in a tilted square frame appears to tilt oppositely {rod and frame illusion}, a late-visual-processing pictorial illusion.

Roelofs effect illusion

If a rectangle is left of midline, with one edge at midline, rectangle appears horizontally shorter, and midline line segment appears to be right of midline {Roelofs effect} {Roelof effect}. If a rectangle is left of midline, with edge nearer midline left of midline, rectangle appears horizontally shorter, and rectangle appears closer to midline.

simultaneous tilt illusion

Central circle with vertical stripes surrounded by annulus with stripes angled to left appears to have stripes tilted to right {simultaneous tilt illusion}, an early visual processing illusion.

size-weight illusion

If small and large object both have same weight, small object feels heavier in hand than large object {size-weight illusion}. People feel surprise, because the larger object is lighter than expected.

watercolor effect

Lighter color contours inside darker color contours spread through interiors {watercolor effect}.

zero-gravity illusion

In zero-gravity environments, because eyes shift upward, objects appear to be lower than they actually are {zero-gravity illusion}.

1-Consciousness-Sense-Vision-Illusions-Ambiguous Figures

ambiguous figure

Figures {ambiguous figure}| can have two ways that non-vertical and non-horizontal lines can orient or have two ways to choose background and foreground regions. In constant light, observed ambiguous-figure surface-brightness changes as perception oscillates between figures [Gregory, 1966] [Gregory, 1986] [Gregory, 1987] [Gregory, 1990] [Gregory, 1997] [Seckel, 2000] [Seckel, 2002].

duck-rabbit illusion

Figures (Jastrow) with duck beaks and rabbit ears make illusions {duck-rabbit illusion}.

Rubin vase

Vases with profiles of symmetrical faces (Edgar Rubin) can make illusions {vase and two faces illusion} {Rubin vase}.

Salem witch girl illusion

Old crone with black hair facing young girl can make illusions {Salem witch and girl illusion}.

1-Consciousness-Sense-Vision-Illusions-Contrast Illusions

Craik-Cornsweet illusion

Illusions can depend on finding brightness differences, sound-intensity differences, or line-length and line-spacing differences {Craik-Cornsweet illusion}. Difference detection explains Weber's law: just noticeable difference increases directly with stimulus magnitude.
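Weber's law can be written as delta-I = k * I, where k is the Weber fraction for a sense modality. A minimal sketch of the proportionality (the fraction 0.08 below is an illustrative assumption, not a measured value):

```python
# Weber's law sketch: the just noticeable difference (JND) is
# proportional to stimulus magnitude, delta_I = k * I.

def jnd(intensity, weber_fraction=0.08):
    """Smallest detectable change for a stimulus of given intensity.

    weber_fraction is an illustrative constant, not a measured value."""
    return weber_fraction * intensity

# A tenfold brighter stimulus needs a tenfold larger change
# before the difference is noticed.
for base in (10.0, 100.0, 1000.0):
    print(base, jnd(base))
```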

1-Consciousness-Sense-Vision-Illusions-Depth Illusions

impossible triangle

People can see logically paradoxical objects {impossible triangle} {impossible staircase}. People can experience paradox perceptually while knowing its solution conceptually. Pictures are essentially paradoxical.

Kanizsa triangle

Solid discs with wedge-shaped cutouts, arranged as corners, can make illusory triangles with complete contours {Kanizsa illusion} {Kanizsa triangle}.

Necker cube

Wire cubes at angles can make illusions {Necker cube}.

Schroder stairs

Reversible staircase drawings (Heinrich Schröder) can make illusions {Schroder stairs}, appearing either right-side up or upside down.

1-Consciousness-Sense-Vision-Illusions-Figure Illusions

illusory conjunction

Minds can combine two features, for example, color and shape, and report perceiving objects that are not in scenes {illusory conjunction} {conjunction, illusory}.

illusory contour

Mind can extend contours to places with no reflectance difference {illusory contour} {contour, illusory}.

1-Consciousness-Sense-Vision-Illusions-Geometric Illusions

Ebbinghaus illusion

Medium-size circle surrounded by smaller circles appears larger than same-size circle surrounded by larger circles {Ebbinghaus illusion} {Titchener circles illusion}, a late-visual-processing pictorial illusion.

Muller-Lyer illusion

Lines with inward-pointing arrowheads and adjacent lines with outward-pointing arrowheads appear to have different lengths {Müller-Lyer illusion}.

Ponzo illusion

If railroad tracks and ties lead into distance, and two horizontal bars, even with different colors, are at different distances, farther bar appears longer (Mario Ponzo) [1913] {Ponzo illusion}. Perhaps, line tilt, rather than depth perception, causes Ponzo illusion.

Zollner illusion

Vertical lines with equally spaced parallel line segments at 45-degree angles can make illusions {Zollner illusion}.

1-Consciousness-Sense-Vision-Illusions-Motion Illusions

autokinetic effect

In homogeneous backgrounds, a single stationary light or object appears to move around {autokinetic effect} {keyhole illusion} [Zeki et al., 1993].

flash-lag effect

If line or spot is moving, and another line or spot flashes at same place, the flashed one seems to lag behind the moving one {flash-lag effect} [Eagleman and Sejnowski, 2000] [Krekelberg and Lappe, 2001] [Nijhawan, 1994] [Nijhawan, 1997] [Schlag and Schlag-Rey, 2002] [Sheth et al., 2000]. Flashed object seems slower than moving object.

kinetic depth effect

Rotating two-dimensional objects makes them appear three-dimensional {kinetic depth effect} [Zeki et al., 1993].

Korte's law

Alternating visual-stimulus pairs show apparent movement at special times and separations {Korte's law} [Zeki et al., 1993].

motion aftereffect

After continuously observing moving objects, when movement stops, stationary objects appear to move {motion aftereffect, illusion}.

motion-induced blindness

If screen has stationary color spots and has randomly moving complementary-color spots behind them, mind sees stationary spots first, then does not see them, then sees them again, and so on {motion-induced blindness} [Bonneh et al., 2001].

wagon-wheel illusion

Spokes in turning wheels seem to turn in direction opposite from real motion {wagon-wheel illusion} [Gho and Varela, 1988] [Wertheimer, 1912] [Zeki et al., 1993].

waterfall illusion

If people view scenes with flow, such as waterfalls, when they then look at stationary scenes, they see flow in the opposite direction {waterfall illusion}. Waterfall illusion can arise even from a series of still pictures [Cornsweet, 1970].

1-Consciousness-Sense-Vision-Problems

vision problems

Multiple sclerosis, neglect, and prosopagnosia can cause vision problems {vision, problems}. Partial or complete color-vision loss makes everything light or dark gray, and even dreams lose color.

astigmatism

Cornea can have different curvature radii at different orientations around visual axis and so be non-spherical {astigmatism}|. Unequal lens curvature can also cause astigmatism.

cinematographic vision

Vision can turn on and off ten times each second {cinematographic vision} [Sacks, 1970] [Sacks, 1973] [Sacks, 1984] [Sacks, 1995].

diplopia

Failure to combine or fuse images from both eyes results in double vision {diplopia}.

phosphene

People can see subjective sparks or light patterns {phosphene}| after deprivation, blows, eyeball pressure, or cortex stimulation.

retinitis pigmentosa

Genetic condition causes retina degeneration {retinitis pigmentosa}| and affects night vision and peripheral vision.

stereoblindness

People with vision in both eyes can lose ability to determine depth by binocular disparity {stereoblindness}.

strabismus

Extraocular muscles, six for each eye, can fail to synchronize, so one eye converges too much or too little, or one eye turns away from the other {strabismus}|. This can reduce acuity {strabismic amblyopia} {amblyopia}, because image is not on fovea.

1-Consciousness-Sense-Vision-Problems-Color

achromatopsia

Left inferior occipitotemporal (fusiform gyrus) damage causes scene to have no color and be light and dark gray {achromatopsia} [Hess et al., 1990] [Nordby, 1990].

anomalous trichromacy

Cone pigments can differ in frequency range or maximum-sensitivity wavelength {anomalous trichromacy}. Moderately colorblind people can have three photopigments, but two are same type: two different long-wavelength cones {deuteranomalous trichromacy}, which is more common, or two different middle-wavelength cones {protanomalous trichromacy} [Asenjo et al., 1994] [Jameson et al., 2001] [Jordan and Mollon, 1993] [Nathans, 1999].

color blindness

8% of men cannot distinguish between red and green {color blindness} {colorblind} {red-green colorblindness}, but can see blue. They also cannot see colors that are light or have low saturation. Dichromats have only two cone types. Cone monochromats can lack two cone types and cannot distinguish colors well. Rod monochromats can have no cones, have complete color blindness, see only grays, and have low daylight acuity.

color-anomalous

People can have all three cones but have one photopigment that differs from normal {color-anomalous}, so two photopigments are similar to each other. They typically have similar medium-wavelength cones and long-wavelength cones and cannot distinguish reds, oranges, yellows, and greens.

deuteranope

People can lack medium-wavelength cones, but have long-wavelength cones and short-wavelength cones {deuteranope}, and cannot distinguish greens, yellows, oranges, and reds.

protanope

People can lack long-wavelength cones, but have medium-wavelength cones and short-wavelength cones {protanope}, and cannot distinguish reds, oranges, yellows, and greens.

tritanope

People can lack short-wavelength cones, but have medium-wavelength cones and long-wavelength cones {tritanope}, and cannot distinguish blue-greens, blues, and violets.

1-Consciousness-Sense-Vision-Problems-Lesion

lesion in brain

Brain can have wounded or infected areas {lesion, brain}. If lesion is in right hemisphere, loss is on left visual-field side {contralesional field}, and right visual-field side {ipsilesional field} is spared.

akinetopsia

Middle-temporal-area (MT) damage causes inability to detect motion {akinetopsia}.

double dissociation

Two brain lesions in different places can each impair a different function while sparing the other {double dissociation}, showing that the functions are independent.

hemianopia

Lateral-geniculate-nucleus damage causes blindness in half visual field {hemianopia} [Celesia et al., 1991].

Kluver-Bucy syndrome

Removing both temporal lobes makes monkeys fail to recognize objects {Klüver-Bucy syndrome, lesion}.

scotoma

Visual-cortex region can have damage {scotoma}|. People do not see a black or dark area; they simply have no sight there [Teuber et al., 1960] [Teuber, 1960].

visual-field defect

Visual-nerve damage can cause no or reduced vision in scene regions {visual-field defect}.

1-Consciousness-Sense-Vision-Problems-Lesion-Blindsight

blindsight

People with visual-cortex scotoma can point to and differentiate between fast movements or simple objects but say they cannot see them {blindsight}|. They can perceive shapes, orientations, faces, facial expressions, motions, colors, and event onsets and offsets [Baron-Cohen, 1995] [Cowey and Stoerig, 1991] [Cowey and Stoerig, 1995] [Ffytche et al., 1996] [Holt, 1999] [Kentridge et al., 1997] [Marcel, 1986] [Marcel and Bisiach, 1988] [Marzi, 1999] [Perenin and Rossetti, 1996] [Pöppel et al., 1973] [Rossetti, 1998] [Stoerig and Barth, 2001] [Stoerig et al., 2002] [Weiskrantz, 1986] [Weiskrantz, 1996] [Weiskrantz, 1997] [Wessinger et al., 1997] [Zeki, 1995].

properties: acuity

Visual acuity decreases by two spatial-frequency octaves.

properties: amnesia

Amnesiacs with medial temporal lobe damage can use non-conscious memory.

properties: attention

Events in blind region can alter attention.

properties: color

Color sensitivity is better for red than green.

properties: contrast

Contrast discrimination is less.

properties: dark adaptation

Dark adaptation remains.

properties: face perception

People who cannot see faces can distinguish familiar and unfamiliar faces.

properties: hemianopia

Cortical-hemisphere-damage blindness affects only half visual field.

properties: motion

Complex motion detection is lost. Fast motions, onsets, and offsets can give vague awareness {blindsight type 2}.

People with blindsight can detect movement but not recognize object that moved [Morland, 1999].

properties: perception

Blindsight is not just poor vision sensitivity but has no experience [Weiskrantz, 1997].

properties: reflexes

Vision reflexes still operate.

properties: threshold

Blindsight patients do not have altered thresholds or different criteria about what it means to see [Stoerig and Cowey, 1995].

brain

Blindsight does not require functioning area V1. Vision in intact V1 fields does not cause blindsight [Weiskrantz, 1986]. Brain compensates for visual-cortex damage using midbrain, including superior colliculus, and thalamus visual maps, allowing minimal visual perception but no seeing experience. Right prefrontal cortex has more blood flow. Blindsight uses dorsal pathway and seems different for different visuomotor systems [Milner and Goodale, 1995]. Animals with area V1 damage react differently to same light or no-light stimuli in normal and blindsight regions, with reactions similar to humans, indicating that they have conscious seeing.

senses

People can perceive smells when visual cortex has damage [Weiskrantz, 1997]. People can perceive sounds when visual cortex has damage [Weiskrantz, 1997]. People with parietal lobe damage can use tactile information, though they do not feel touch {numbsense} {blind touch}.

Riddoch phenomenon

Blindsight patients can be conscious of fast, high-contrast object movements {Riddoch phenomenon}. Retinal output for motion can go to area V5 [Barbur et al., 1993].

1-Consciousness-Sense-Vision-Techniques

anorthoscopic perception

If an object moves behind a slit, people can faintly glimpse whole object {anorthoscopic perception}. Object foreshortens along motion direction. People can also recognize an object that they see moving behind a pinhole, because memory and perception work together.

distortion of vision

People wearing glasses that make everything appear inverted or rotated {visual distortion} {distortion, vision} soon learn to move around and perform tasks while seeing world upside down. Visual distortion adaptation involves central-nervous-system sense and motor neuron coding changes, not sense-organ or muscle changes. Eye, head, and arm position-sensations change, but retinal-image-position sensations do not change. People do not need to move to adapt to visual distortion.

ganzfeld

To try to induce ESP, illumination can be all white or pink with no features, and sound can be white noise {ganzfeld} {autoganzfeld}.

grating

Gratings {grating, vision} have alternating dark bars and light bars. Bar pairs per visual-angle degree {cycles per degree} measure grating spatial frequency {spatial frequency, grating}. Gratings can have relative visual-image positions {phase, grating}. Gratings can have luminance variation like sine waves, rather than sharp edges {sine wave grating}.
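A sine-wave grating of a given spatial frequency, phase, and orientation can be sketched numerically; cycles here are per image rather than per visual-angle degree, and the image size and parameter defaults are illustrative assumptions:

```python
import numpy as np

def sine_grating(size=256, cycles=8, phase=0.0, orientation_deg=0.0):
    """Luminance image of a sine-wave grating, values in [0, 1].

    cycles: bar pairs across the image (spatial frequency in
    cycles per image width, standing in for cycles per degree)."""
    y, x = np.mgrid[0:size, 0:size] / size
    theta = np.deg2rad(orientation_deg)
    # Coordinate along the grating's modulation direction.
    u = x * np.cos(theta) + y * np.sin(theta)
    return 0.5 + 0.5 * np.sin(2 * np.pi * cycles * u + phase)

g = sine_grating()
print(g.shape, float(g.min()), float(g.max()))
```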

Mooney figures

Figure sets {Mooney figures}, to display at different orientations or inversions, can show ambiguous faces (C. M. Mooney) [1957]. Faces have analytic face features and different configurations, so people typically perceive only half as faces.

ophthalmoscope

Instruments {ophthalmoscope}| can allow viewing retina and optic nerve.

rapid serial visual presentation

At one location, many different stimuli can quickly appear and disappear {rapid serial visual presentation} (RSVP), typically eight images per second.

spectrogram

Graphs {spectrogram} can show three dimensions: time on horizontal axis, frequency on vertical axis, and intensity as blue-to-red color or lighter-to-darker gray.
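The time-frequency-intensity layout can be sketched with a minimal short-time Fourier transform; the window length, hop size, and sample rate below are illustrative assumptions:

```python
import numpy as np

def spectrogram(signal, window=256, hop=128, rate=8000):
    """Return times, frequencies, and a magnitude array (freq x time).

    Each column is the FFT magnitude of one Hann-windowed frame."""
    win = np.hanning(window)
    frames = [signal[i:i + window] * win
              for i in range(0, len(signal) - window + 1, hop)]
    mags = np.abs(np.fft.rfft(frames, axis=1)).T   # rows: freq, cols: time
    times = np.arange(mags.shape[1]) * hop / rate
    freqs = np.fft.rfftfreq(window, d=1.0 / rate)
    return times, freqs, mags

# A steady 1 kHz tone should concentrate energy near the 1 kHz row.
rate = 8000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 1000 * t)
times, freqs, mags = spectrogram(tone, rate=rate)
peak = freqs[np.argmax(mags[:, 0])]
print(peak)
```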

stereogram

Picture pairs {stereogram}| can have right-eye and left-eye images, for use in stereoscopes. Without stereoscopes, people can use convergence or divergence {free fusion} to resolve stereograms and fuse images.

Troxler test

If people stare at circle center, circle fades {Troxler test} (Ignaz Troxler) [1804].

1-Consciousness-Sense-Vision-Techniques-Binocular Rivalry

binocular rivalry

If eyes see different images, people see first one image and then the other {binocular rivalry}| [Andrews and Purves, 1997] [Andrews et al., 1997] [Blake, 1989] [Blake, 1998] [Blake and Fox, 1974] [Blake and Logothetis, 2002] [Dacey et al., 2003] [de Lima et al., 1990] [Engel and Singer, 2001] [Engel et al., 1999] [Epstein and Kanwisher, 1998] [Fries et al., 1997] [Fries et al., 2001] [Gail et al., 2004] [Gold and Shadlen, 2002] [Kleinschmidt et al., 1998] [Lee and Blake, 1999] [Lehky and Maunsell, 1996] [Lehky and Sejnowski, 1988] [Leopold and Logothetis, 1996] [Leopold and Logothetis, 1999] [Leopold et al., 2002] [Levelt, 1965] [Logothetis, 1998] [Logothetis, 2003] [Logothetis and Schall, 1989] [Logothetis et al., 1996] [Lumer and Rees, 1999] [Lumer et al., 1998] [Macknik and Martinez-Conde, 2004] [Meenan and Miller, 1994] [Murayama et al., 2000] [Myerson et al., 1981] [Parker and Krug, 2003] [Pettigrew and Miller, 1998] [Polonsky et al., 2000] [Ricci and Blundo, 1990] [Sheinberg and Logothetis, 1997] [Tong and Engel, 2001] [Tong et al., 1998] [Wilkins et al., 1987] [Yang et al., 1992]. Vision has disparity detectors [Blakemore and Greenfield, 1987].

dominant image

In binocular rivalry, vision sees one image {dominant image} with more contrast, higher spatial frequency, and/or more familiarity for more time.

flash suppression

If eyes see different images and briefly presented stimulus follows one image, that image is less intense and people see other image more {flash suppression} [Kreiman et al., 2002] [Sheinberg and Logothetis, 1997] [Wolfe, 1984] [Wolfe, 1999].

1-Consciousness-Sense-Vision-Theories

modes of presentation

Perhaps, physical and phenomenological are different visual-appearance types {modes of presentation} {presentation modes}, with different principles and properties. However, it is unclear how people can know that both vision modes are about the same feature or object, or how the modes relate.

motor theory

Perhaps, motor behavior determines visual perception {motor theory of perception}. However, eye movements do not affect simple visual sense qualities.

phenomenal concept

Perhaps, visual phenomena require concepts {phenomenal concept}. Phenomenal concepts are sensation types, property types, quality relations, memory indexes, or recognition principles. Phenomenal concepts refer to objects, directly or indirectly, by triggering thought or memory. However, if physical concepts are independent of phenomenal concepts, physical knowledge cannot lead to phenomenal concepts.

sense-datum as image

Perhaps, in response to stimuli, people have non-physical inner images {sense-datum, image}. Physical objects cause sense data. Sense data are representations. Mind introspects sense data to perceive colors, shapes, and spatial relations. For example, perceived colors are relations between perceivers and sense data and so are mental objects. However, sense data are mental objects, but brain, objects, and neural events are physical, and non-physical inner images cannot reduce to physical events.

sensorimotor theory of vision

Perhaps, coordination among sense and motor systems builds visual information structures {sensorimotor theory of vision}. Sense input and motor output have relations {sensorimotor contingency laws}. Body, head, and eye movements position sensors to gather visual information and remember semantic scene descriptions. Objects have no internal representations, only structural descriptions. Vision is activity, and visual perception depends on coordination between behavior and sensation {enactive perception, Noë} [Noë, 2002] [Noë, 2004] [O'Regan, 1992] [O'Regan and Noë, 2001]. However, perception does not require motor behavior.

1-Consciousness-Sense-Vision-Theories-Color

adaptivism

Perhaps, different species classify colors differently, because they inhabit different niches {adaptivism}. Perhaps, perceived colors are adaptive relations between objects and color experiences, rather than just categories about physical surfaces. However, experiences are mostly about physical quantities.

adverbialist theories

Perhaps, perceived color states are relations between perceivers and physical objects {adverbialist theories} {adverbialism} and are neural states, not non-physical mental states. However, experiences do not seem to be relations.

dispositionalism

Perhaps, colors are dispositions of physical-object properties to produce visual color states {dispositionalism}. Physical properties dispose perceiver to discriminate and generalize among colors. Colors have no mental qualities. Alternatively, physical-object properties dispose perceivers to experience what it is like to experience color physical properties. Mental qualities allow knowing qualitative similarities among colors. However, experienced colors do not look like dispositions.

intentionalism and vision

Perhaps, perceived colors are representations {intentionalist theories} {intentionalism and vision}, with no qualitative properties. However, afterimages have colors but do not represent physical objects.

mental color

Perhaps, colors have mental qualitative properties {mental color}. Mental colors are what it is like for perceivers to have color consciousness. However, mental colors can have no outside physical basis, whereas experienced colors correlate with physical quantities.

physicalism and vision

Perhaps, colors are objective non-relational physical-object properties and are describable in physical terms {physicalism, color}. For example, physical colors are surface-reflectance ratios. Object surface color remains almost constant during brightness and spectrum changes, because surface reflectances stay constant. Because objects with different surface reflectances can cause same color, physical colors are disjunctions of surface reflectances. However, experience does not provide information about surface reflectances or other physical properties.

projectivism

Perhaps, perceived colors are physical-object properties or brain states experienced in space {projectivist theories} {projectivism, vision}. However, mental locations are not physical locations. Mental properties cannot be physical properties, because mental states differ from objects.

retinex theory

Perhaps, vision can compare blue, red, and green surface-reflectance ratios between image segments to determine color {retinex theory}. Background brightness is ratio average. Surface neutral colors depend on blue, red, and green reflectance ratios [Land, 1977]. However, vision does not use local or global brightness or reflectance averages.
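The ratio comparison that the retinex entry describes can be sketched in code. This is an illustrative toy only, not Land's full algorithm: the one-dimensional "image" and its luminance values are assumptions, and real retinex processing chains ratios along many paths in each of the three color bands.

```python
def edge_ratios(luminances):
    """Ratios of luminance across adjacent image segments. Under
    retinex-style reasoning, each ratio reflects a surface-reflectance
    change, not the shared illumination, so illumination cancels out."""
    return [luminances[i + 1] / luminances[i] for i in range(len(luminances) - 1)]

def relative_reflectances(luminances):
    """Chain the edge ratios from the first segment to recover each
    segment's reflectance relative to the first, independent of
    overall brightness."""
    result = [1.0]
    for ratio in edge_ratios(luminances):
        result.append(result[-1] * ratio)
    return result

# Same surfaces under dim vs. bright illumination (illustrative values):
# luminances differ tenfold, but recovered relative reflectances agree.
dim = [2.0, 4.0, 1.0]
bright = [20.0, 40.0, 10.0]
# relative_reflectances(dim) and relative_reflectances(bright)
# both give [1.0, 2.0, 0.5]
```

The sketch shows why surface color can stay constant under brightness changes: only ratios between segments enter the computation.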

trichromatic vision

Three coordinates can define all colors that humans can perceive {trichromacy} {trichromatic vision} {trichromatic theory of color vision} {Young-Helmholtz theory}. Humans have three photopigments in three different cone cells that provide the three coordinates. Among mammals, trichromatic vision is only in Old World monkeys, apes, and humans.
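Trichromacy's claim that three numbers determine perceived color can be sketched as follows. This is a minimal illustration: the Gaussian sensitivity curves, their peak wavelengths, and the shared width are simplifying assumptions, not measured photopigment data.

```python
import math

# Rough peak sensitivities of the L, M, S cones, in nanometers; the
# Gaussian shape and 40 nm width are illustrative assumptions.
CONE_PEAKS_NM = {"L": 565.0, "M": 535.0, "S": 440.0}
CONE_WIDTH_NM = 40.0

def cone_response(cone, spectrum):
    """Integrate a light spectrum {wavelength_nm: power} against one
    cone's sensitivity curve, yielding one color coordinate."""
    peak = CONE_PEAKS_NM[cone]
    return sum(power * math.exp(-((wl - peak) / CONE_WIDTH_NM) ** 2)
               for wl, power in spectrum.items())

def color_coordinates(spectrum):
    """Reduce any light spectrum to the three numbers (L, M, S) that,
    per trichromacy, determine its perceived color."""
    return tuple(cone_response(c, spectrum) for c in ("L", "M", "S"))

# Example: a narrow-band 580 nm light reduces to three numbers.
lms = color_coordinates({580.0: 1.0})
```

Any two spectra that yield equal coordinates are indistinguishable to the three-cone system (metamerism), which is why three coordinates suffice.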

1-Consciousness-Sense-Experience

phenomenal experience

Sense qualities {conscious experience} {phenomenal character} {phenomenal experience} {phenomenal property} {phenomenally conscious mental state} {phenomenological property} {qualitative character} {qualitative state} {raw feel} {sense quality} {sensory quality} {subjective quality} can be what something is like to observer, rather than what it physically is. Qualia are ways things seem when awake, dreaming, or hallucinating.

comparisons

Experience differs from awareness because it has meaning. Sensations of reality, illusions, and hallucinations are similar. Experience differs from perception because it requires awareness. People can know that they are having experience and can know its type. However, phenomena then are about perception rather than object.

types

Sensations are colors, sounds, touches, temperatures, smells, and tastes. Sensations track feature and object positions, momenta, energies, and times. Sensations correspond to physical intensities, frequencies, materials, and other properties. Tastes are liquid-like. Smells are gaseous-like. Touches are surface contours and motions. Sounds are vibratory. Sights are surfaces.

People hear sounds, which have loudness intensity and tone frequency. People can hear thousands of tones. Sounds have harmonics, with fundamentals and overtones.

People smell air molecules, based on molecule shapes, sizes, rotations, and vibrations, at different intensities. People can smell thousands of smells. Olfaction sense qualities are acrid or vinegary, floral, foul or sulfurous, fruity, minty, ethereal like pear, musky, resinous or camphorous, smoky, and sweet.

People taste molecules dissolved in water, based on molecule polarities and acidities, at different intensities. Gustation sense qualities are saltiness, sourness, sweetness, bitterness, and savoriness.

People feel compression, tension, and torsion pressure, at different intensities. People feel temperature by random molecule motions. People can feel gentle touch, motion, shape, sliding, texture, tickle, vibration, warmth, and coolness.

People can see visible light. People can see millions of hues, including blacks, whites, grays, and browns, with different brightness and saturation.

field

People can be conscious of many events and objects simultaneously. Subject experience has one moving viewpoint, which differs from others' viewpoints. Observation is having sensations. Observed and observer are an observing system. Processing and memory registers are observations, and reader and writer are observers. High-level perception builds scene, perceptual space, or phenomenal world, which is like ovoid including eye, face, periphery, front, and focal point. Unusual body motions can break sense-field coherence [Bayne and Chalmers, 2003] [Cleeremans, 2003].

Perception uses a self-centered egocentric reference frame, which has forward point during motion, receiving point for incoming stimuli, and vestibular-system gravity-aligned vertical axis. Consciousness has world-centered or object-centered allocentric reference frame, which has two horizontal axes and vertical axis.

variation

Verbal reports indicate that most people have similar sensations. Gene alleles, culture, and age can vary experiences. Sense qualities of yellow can change with age. Sensitivity, acuity, precision, accuracy, discrimination, and generalization can vary. Conscious activities change often.

properties

Sensations always have location, size, duration, time, intensity, and phenomenal sense qualities. Phenomena can shift, compress, stretch, twist, rotate, or flip.

Sensations are continuous, with no discontinuities, no gaps, and no units [VanRullen and Koch, 2003]. Inputs from small and large regions, and short and long times, integrate to make continuity [Dainton, 2000].

Sensations are immediate, and so not affected by activity, reasoning, or will [Botero, 1999].

Sensations are incorrigible, and so not correctable or improvable by activity, reasoning, or will.

Sensations are ineffable, with no description except their own existence.

Sensations are intrinsic, with no dependence on external processes [Harman, 1990].

Sensations are private, and so not available for others' observation or measurement.

Sensations are privileged, and so not possible to observe except from first-person viewpoint [Alston, 1971].

Sensations are subjective, and so intrinsic, private, privileged, and not objective [Kriegel, 2005] [Nagel, 1979] [Tye, 1986]. Subject experience belongs only to subject. No one else can have that experience or know it. Physical objects, such as stars, have no owner or have other owners, such as cars.

Perhaps, phenomena belong to mental state rather than to subject.

Sensations are transparent, with no intermediates [Kind, 2003].

Sensations can be analytic, and so, like sounds, independent with no mixing.

Sensations can be synthetic, and so, like colors, dependent with mixing.

Sensations are not physical.

Sensations have no mass but have a type of density.

Subjective experiences seem not to be ignorable and have self-intimation.

Sensations always feel indubitable.

Sensations seem unerring and infallible.

Sensations always feel irrevocable.

Sensations are not about microscopic things but about macroscopic regions.

Sensations are not relational and not comparable.

Sensations are the only things that have meaning, because brain uses them for reference. However, sensations do not always have meaning.

Subject experience is not observable by others and so is personal and not directly communicable, because it has no units with which to measure.

non-locality

Physical events happen locally and instantaneously. Mental relations characteristically relate two or more physically separated points, within one psychologically simultaneous time interval, and so are non-local. Mentality requires time to gather information from separated locations to integrate them. Mentality requires space to gather information from separated times, memories and current perceptions, to integrate them. Perceptions unify local sense processing about features, objects, and events. Mentality unifies separate things into structures or processes.

surface property

Sensations are about surfaces from which information began, not about information carrier to sense organ. Intensity energies carry surface information to sense organs but have no sense qualities. Information channels cannot have sense qualities. For example, electromagnetic radiation has no color. Sound waves have no sound.

Only surfaces can have qualities. Color is not about waves traveling through space but is about surface from which waves emanated. Sound is not about waves traveling through medium but is about surface from which waves emanated.

Visual sense qualities are about surface sizes and reflectances. Aural sense qualities are about surface vibration intensities and frequencies. Touch sense qualities are about surface torsion, compression, hardness, and texture. Taste and smell sense qualities are about surface molecular configurations.

Experience is of objects and events, which people can invent or extend. Cognition, category making, distinction finding, and memory are consciousness foundations [Seager, 1999]. Sensations are about objects, events, and features, which cognition later interprets.

brain

Perhaps, sensations are brain events. However, experiences do not seem to be in brain or be like brain. Brain produces perceptions internally but perceives sensations externally, at spatial positions on surfaces. Consciousness itself does not provide knowledge of things external to mind, only of internal mental things [Seager, 1999].

Perhaps, external references are to object and event concepts or properties, rather than to external objects and events.

Sensations can come from inside and outside body. When thinking, people talk to themselves and hear same sounds as if really talking.

Perhaps, sensations are judgments or dispositions to do something about perceptions.

Animal behaviors make it appear that only humans have experiences.

nature

Perhaps, sense phenomena are physical-object qualities. Identical objects then have same phenomena. However, same person can have different phenomena about same object, and different people can have different phenomena about same object.

Perhaps, sense phenomena are experience or object physical properties. However, experience does not provide access to surface-reflectance relations, other physical properties, or experience relations.

Mind and mental states use thoughts, perceptions, emotions, and moods {propositional attitude, phenomena}, which associate phenomenon with representation or intentional content.

Perhaps, sense phenomena are relations to external or internal objects. However, experience seems to be about object features, not about relations.

Sensations are meaningful because they represent something outside mind [Cummins, 1989] [Cummins, 1996] [Darling, 1993] [Papineau, 1987] [Perner, 1993]. Sensations represent physical data only to level useful for acting quickly and correctly in most situations. However, sensations can be different phenomena, such as inverted spectrum, though intentional content does not vary. Sensations can be the same, by automatic sense processing, though high-level representations differ, such as Inverted Earth. Experiences, such as feeling depressed, can have no representations.

Perhaps, when representation becomes explicit, it is conscious. Implicit representations are not conscious, though implicit activity can become explicit [Adolphs et al., 1999] [Zeki, 2001].

categorization

Sense processes categorize sensations, breaking continuous values into ranges, such as different colors with different brightnesses. Among senses, ability to categorize depends on pairwise comparisons between multisensory neurons. Within sense, ability to categorize depends on pairwise comparisons between sense neurons [Donald, 1991].

media

Like television, brain receives coded information and translates code into visual array. However, sensations have no substrate or medium to carry them. They are not physical and do not need substrates. They are their own medium.

Having experience is not like looking at holograms, printed pages, or television displays. Those displays have boundaries, whereas sensations have no definite boundary. Those displays cover only some visual field, whereas sensations cover all space. Those displays have controls for adjusting display color, brightness, and contrast, but people cannot will sense-quality changes. Those displays often have distortions or false colors, but sensations are consistent and complete. However, they can distort if people take drugs. Size, flatness, and errors can distinguish displays from real world, but sensations are not distinguishable from real world, because people's memories depend on same abilities. Observers can look away from television displays but cannot look away from sensations. However, observers can look at different sense-quality parts, just as people watching television can look around.

memory

Sensations summarize and categorize whole-field and full-spectrum processing results to compress information for storage and recall. Previous experiences affect later experiences, automatically. Repeating similar experiences changes experience.

labeled lines

Sense organs make same sensations no matter which physical energy strikes them. For example, tapping eye causes light flashes. Receptor stimulation and brain-region stimulation cause same response type.

time

Consciousness requires time to integrate. Time is short enough to be psychologically simultaneous and long enough to integrate locations and parts. Psychologically simultaneous events are within 20-millisecond to 50-millisecond intervals. Features, objects, events, and scenes integrated during this interval automatically associate in space and time.
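The integration window above can be sketched as a grouping rule: events falling within one window count as psychologically simultaneous and associate. The 40 ms window is an illustrative midpoint of the 20-50 ms range the text gives, and the grouping-by-window-start rule is an assumption, not a model of actual brain timing.

```python
WINDOW_MS = 40.0  # illustrative midpoint of the 20-50 ms range

def simultaneity_groups(event_times_ms):
    """Group event times so that events within one window of the
    group's first event count as psychologically simultaneous."""
    groups = []
    for t in sorted(event_times_ms):
        if groups and t - groups[-1][0] <= WINDOW_MS:
            groups[-1].append(t)  # same experienced moment
        else:
            groups.append([t])    # new experienced moment
    return groups

# Events at 0, 30, 100, and 120 ms fuse into two experienced moments.
# simultaneity_groups([0, 30, 100, 120]) → [[0, 30], [100, 120]]
```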

information

All senses require large information amounts.

passivity

Sense qualities require awake or dreaming brain processing but seem not to need conscious effort or will.

emotions

Colors, sounds, touches, smells, and tastes can convey emotion, such as anger, disgust, fear, happiness, sadness, surprise, and remembrance. Most sensations have no associated emotion. Sensations can attract or repel, so people like or dislike them. People can feel doubt or confidence in statements. Feeling level goes from pleasure to pain. Success level goes from reward to punishment.

autocerebroscope

Imagine one can look inside brain while it is working {autocerebroscope}, to see all physical activity. How can this activity make many different phenomena?

qualia

People can attend to intrinsic, non-intentional experience features, sensations, or sense qualities {qualia}| {quale}.

sentience

People have mind and experiences {sentience}|. Sentience requires sensation, perception, awareness, mind, and experience. Sentience is state, not process, and requires no thoughts. Perhaps, only humans are sentient.

aspect nature

Consciousness can experience objects without knowing what they are, only that they are something {aspect nature}.

1-Consciousness-Sense-Experience-Features

perceptual intensity

Conscious or dreaming people having above-threshold stimuli are aware of stimulus energy flow, density, pressure, flux, or amplitude {perceptual intensity}. For example, vision has brightness, and hearing has loudness. Conscious or dreaming people having below-threshold stimuli do not experience intensity. Unconscious people have no intensity awareness. For vision, intensity ranges are specular reflection, brilliant white, white, light gray, gray, dark gray, and black. For sound, ranges are whisper, normal, and intense. For touch, ranges are tickle, light pressure, touch, push, and pain. For taste, ranges are hint, full, and intense. For smell, ranges are whiff, signal, light, definite, strong, and pain.

properties

Intensities come from surfaces. Intensity is about energy flow, not space or time, but has space and time locations. Sense-receptor membrane depolarization measures intensity, and neuron axon-impulse rate measures intensity. Perceptual intensity depends on stimulus intensity, nearby intensities, memories, and expectations, so intensity is relative. Perceptions do not have actual energy. Intensity has just-noticeable, dull, average, acute, and painful levels. Smallest intensity results from several energy quanta. Intensity is continuous, not continual or discrete. Intensity typically changes, flickers, or fades. Intensity has contrasts.

quality

People do not experience pure intensity. For perceived surface points, perceptual processing integrates remembered and current information about physical-stimulus intensity level and energy type, such as light, into non-physical quality, such as phenomenal bright red, pale yellow, or dark brown. Perceptual intensity and quality unite.

perceptual quality

Conscious or dreaming people, having above-threshold stimuli, perceive intensity types {perceptual quality}. Conscious or dreaming people having below-threshold stimuli are not aware of qualities. Unconscious people have no awareness of quality. Perhaps, only mammals experience sense qualities.

types

Hearing can detect formant sound frequency bands. Vision can detect color bands: black, gray, white, red, green, blue, yellow, pink, brown, purple/violet, orange, and indigo/ultramarine. Smell can detect air molecule types: esters, ketones, aldehydes, sulfur compounds, aromatics, and alcohols. Taste can detect water-dissolved molecule types: salts, acids, bases, glutamate, and sugars. Touch can detect pressure types: tickle, tingle, pain, and pleasure. Touch can detect temperature types: warmth and coolness.

categorization

Sense qualities have quality spectra and overlapping categories. Sense categories form continuous ranges, with categories similar to and opposite from other categories.

properties

Qualities are like coded and compressed intensity-frequency spectra. Qualities are on space surfaces. Qualities are continuous, not discrete. Qualities are not about space, time, or energy, but have space and time locations. Whole image determines sense qualities.

intensity

People do not experience pure quality. Only quality has intensity. Quality categories have intensity.

meaning

Sense qualities are the only things that have meaning.

perceptual space

Conscious or dreaming people are aware of seemingly stationary infinite three-dimensional space {perceptual space} {theater of the mind} {subjective space} {sensory field} {visual field} in and around body, bounded by surfaces near and far. Conscious or dreaming people having below-threshold stimuli are still aware of space. Unconscious people have no awareness of space. Smallest space interval is one second of arc.

properties

Sensations always are at three-dimensional-space locations, with directions and distances. Three planes define space outside head: horizontal at ground, vertical pointing straight-ahead, and vertical and parallel to face one meter away. People are aware only of three-dimensional space, not zero-dimensional, one-dimensional, two-dimensional, four-dimensional, or higher-dimensional space. Space is about distance intervals appropriate to body actions, microns to centimeters, not about electrochemical and physical processes taking place at molecular distances. Space does not seem to stretch evenly but can compact and expand. Objects can seem to have longer or shorter extensions depending on nearby-object sizes and orientations. Space does not change, flicker, or fade. Space seems continuous, not discrete. Space has no intensity, density, energy, or mass.

field

People experience sense qualities at different distances. People feel that scenes extend to regions with no sense qualities, such as behind head.

meaning

Space is necessary for meaning, because it provides reference locations.

processing

To construct space, brain processing first constructs body-centered two-dimensional space, then body-centered two-and-a-half dimensional space, which transform during body motions and do not have symbol grounding or sensations.

Three-dimensional space is stationary. Body, head, and eye movements change observer perspective, making different viewpoints, and transform egocentric space coordinates, using mostly translational and vibrational transformations. Sense processing transforms egocentric space coordinates, using mostly rotation transformations, to maintain stationary allocentric space and preserve spatial relations during eye, head, or body movements. Sense-processing tensors compensate for body movements that change egocentric space, and coordinate transformations create and maintain allocentric stationary space [Olson et al., 1999] [Pouget and Sejnowski, 1997].

Space uses absolute or relative body-centric and environment-centric coordinates, which are transformed during body movements.
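The compensation described above can be sketched as a coordinate rotation. This is a minimal two-dimensional stand-in, assuming only a head rotation about one axis; real sensorimotor transformations involve translations and three-dimensional rotations across several reference frames.

```python
import math

def rotate2d(point, angle_rad):
    """Rotate a 2-D point about the origin."""
    x, y = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y)

def ego_to_allo(ego_point, head_angle_rad):
    """Undo the observer's head rotation: map a point from egocentric
    (head-fixed) coordinates to allocentric (world-fixed) coordinates
    by rotating through the current head angle."""
    return rotate2d(ego_point, head_angle_rad)

# A stationary world point seen while the head turns: its egocentric
# coordinates change with each head angle, but the compensated
# allocentric result stays constant, so perceived space stays stationary.
world_point = (1.0, 0.0)
for head_angle in (0.0, 0.3, 0.6):
    ego = rotate2d(world_point, -head_angle)  # what the turned head "sees"
    allo = ego_to_allo(ego, head_angle)       # compensation restores it
```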

multisensory

All senses seem to share same perceptual space. Cortical vision processing makes three-dimensional perceptual space. Temporal-and-parietal-lobe sound processing makes three-dimensional perceptual space. Hippocampus memory processing makes three-dimensional memory space. Cerebellum sensory-motor processing makes three-dimensional sensory-motor space. Frontal lobe and association cortex merge sensory, memory, and motor spaces to make unified perceptual space.

observer

People feel that they are behind sensory apparatus, observing outward. Observer or self seems to be at three-dimensional-space center.

1-Consciousness-Sense-Experience-Features-Time

perceptual time

Conscious or dreaming people are aware of seemingly infinite one-dimensional time {perceptual time}. Conscious or dreaming people having below-threshold stimuli are still aware of time. Unconscious people have no awareness of time. Shortest sensations last one millisecond.

properties

People are aware of one-dimensional time, not zero-dimensional time, two-dimensional time, or higher-dimensional time. Time information must be in real time, so brain does not lose information because processing is too slow, and brain does not need to add information because processing is too fast. Time does not change, flicker, or fade. Time seems continuous, not discrete. Time has past and future, before and after. Time has no intensity or space location.

People experience time flow, which seems faster with more events each second and slower with fewer events each second. Felt time-flow rate differs from brain-processing time-flow rate [Dennett and Kinsbourne, 1992] [Held et al., 1978] [Flaherty, 1999] [Pastor and Artieda, 1996] [Pöppel, 1978] [Pöppel, 1997]. Sense qualities are about time intervals appropriate to body actions, time scale of 20 milliseconds to hours. Sense qualities are not about electrochemical and physical processes at millisecond time intervals nor instantaneous events [Clifford et al., 2003] [Elman, 1990] [Price, 1996].

meaning

Time is necessary for meaning, because it provides references to past, present, and future.

delays

Time consciousness requires time delay. Time delay can use extra loop, temporary store, shuttle, stretch or shrink mechanism, or chemical delays. Circuits can have bypass circuits to adjust time. Main circuit can have inhibition while processing in bypass. Bypass can remove inhibition or overcome it.

multisensory

All senses seem to share same time.

observer

Observer or self seems to be at one-dimensional-time center. Self seems to be observing events in the present, looking backward to memories, and looking forward in imagination. Events circumscribe observer in time, forming envelope around observation point [Sellars, 1963].

minimal perceptual moment

Sensations last at least minimum time {minimal perceptual moment}. Perhaps, activation builds until it reaches threshold. Perhaps, positive feedback causes response spiking.

protracted duration

In dangerous situations, people experience shorter moments and decreased time flow {protracted duration}, because they experience more moments per second.

specious present

Conscious time seems to cover interval of 1 to 3 seconds {specious present}. Brain processes inputs from many sources, taking time intervals to integrate. Information overlaps over time.

backwards referral

During neurosurgery, memory time markers can move backward in time {backwards referral in time} {subjective referral} {subjective antedating} [Libet, 1993] [Libet et al., 1999].

Libet's delay

Consciousness requires minimum stimulation time {Libet's delay} {time-on theory} of 0.5 seconds, no matter what the intensity, to reach neuronal adequacy [Eccles, 1965] [Iggo, 1973] [Koch, 1999] [Libet, 1966] [Libet, 1973] [Libet, 1993] [Libet et al., 1999] [Meador et al., 2000] [Ray et al., 1999].

neuronal adequacy

Consciousness requires minimum stimulation time of 0.5 seconds {neuronal adequacy}, no matter what the intensity [Eccles, 1965] [Iggo, 1973] [Koch, 1999] [Libet, 1966] [Libet, 1973] [Libet, 1993] [Libet et al., 1999] [Meador et al., 2000] [Ray et al., 1999].

1-Consciousness-Sense-Experience-Processing

common sense

Mental faculty {common faculty} {common sense, sensation} compares and associates shapes, sizes, and motions from all senses [Bayne and Chalmers, 2003] [Cleeremans, 2003].

observation

Subject observers can have sensations {observation} of objects observed. Sensations are like reports in parallel. People feel that they are behind sensory apparatus, observing outward. Observations are in three-dimensional space and one-dimensional time. Self seems to be observing events in the present, looking backward to memories, and looking forward in imagination. Events circumscribe observer in time, forming envelope around observation point [Sellars, 1963].

preconscious processing

Stimuli can have intensity too low or duration too short for conscious awareness, but information affects behavior {preconscious processing}. EEG and brain blood flows indicate that sense regions, motor regions, association areas, emotion areas, and memory areas are active during unconscious processing.

If attentional load is high, people can be unaware of non-attended stimuli, but information affects behavior. Anesthetized patients can remember and process information, so unconscious processing can affect conscious perceptions. Brain-damaged patients can remember and process information, so unconscious processing can affect conscious perceptions.

reality monitoring

Self knows about past, present, and future and can distinguish imagination, memory, and reality {reality monitoring} {reality discrimination} [Sellars, 1963]. People typically can discriminate between what they imagine and what they receive from environment or body [Johnson and Raye, 1981].

self-presentation

Consciousness involves presentation to self {self-presentation} of quality type {cognitive quality}.

subjective threshold

Stimuli have three intensity levels that affect same brain regions differently.

objective threshold

Intensity below threshold level {objective threshold, experience} is too low for perception.

perception

Intensity above objective threshold causes non-conscious perception. If stimulus intensity level is above objective threshold but below subjective threshold, stimulus does not become conscious but can influence preferences for same or associated stimuli [Kunst-Wilson and Zajonc, 1980] [Murphy and Zajonc, 1993].

subjective threshold

At higher intensity level {subjective threshold, experience}, people begin to detect sensations. For all senses, consciousness requires intensity level higher than intensity level needed for brain to detect and use stimuli [Dehaene et al., 1998] [Morris et al., 1998] [Morris et al., 1999] [Whalen et al., 1998].
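The three intensity levels described in this subsection can be summarized as a simple classification. The numeric threshold values here are hypothetical placeholders for illustration; only the ordering (objective below subjective) follows the text.

```python
# Hypothetical threshold values on an arbitrary 0-1 intensity scale.
OBJECTIVE_THRESHOLD = 0.2   # below this: no perception at all
SUBJECTIVE_THRESHOLD = 0.6  # below this (but above objective):
                            # non-conscious perception

def perceptual_status(intensity):
    """Classify stimulus intensity into the three levels the text
    describes: no perception, non-conscious perception (which can
    still bias preferences), and conscious sensation."""
    if intensity < OBJECTIVE_THRESHOLD:
        return "no perception"
    if intensity < SUBJECTIVE_THRESHOLD:
        return "non-conscious perception"
    return "conscious sensation"
```

The design point is that consciousness requires a higher intensity than detection: the same stimulus can be processed and influence behavior without ever becoming a sensation.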

accumulation

Perhaps, activation must build to pass subjective threshold. Building counters dissipative and inhibitory processes and has positive feedback and signal recursion.

feedback

Perhaps, positive feedback must cause response spiking to pass subjective threshold. After spiking, activity falls, but sensations can linger [Cheesman and Merikle, 1984] [Kihlstrom, 1996].

symbol grounding

Subjective experiences require relation {symbol grounding, experience} between internal thing or event and external thing or event. External things or events are physical memories or environmental effects. Internal things or events are sensations. Symbol grounding includes both perceptions and mental experiences.

symbol

Symbols are perceptions that label, index, or refer to perceptions or concepts. Both symbol and reference perceptions are mental representations. Perceptions have relations and form reference system. Nothing is intrinsically symbol, because only relations make symbols. As perceptions, symbols have space, time, intensity, and quality. Most symbols are non-conscious, but symbols, such as colors, can be conscious.

symbol system

Most perceptions are objects that are not in systems. Symbols have added meaning, because they have relations in coding systems. Coding systems use symbol sets and have processing mechanisms that have symbol reading, processing, and writing rules. When symbol appears, typically in a symbol series, coding-system processing mechanism follows rules to use symbol. Results or outputs are symbol meaning. Meaning occurs only in symbol systems.

environment

Perhaps, isolated systems cannot have subjective experiences. Perhaps, systems must learn, have memory, or interact with environment. Learning can supply outside information. Memory can supply secondary information sources. Environment can provide intention references [Harnad, 1990] [McGinn, 1987] [McGinn, 1989] [McGinn, 1991] [McGinn, 1999] [Velmans, 1996] [Velmans, 2000]. For example, computer programs on installation CDs do not interact with other information {isolation, system}. They cannot run, receive input, or produce output. Installing programs on computers allows programs to receive environment input, so they can establish references to real things [Chalmers, 1996] [Chalmers, 2000].

1-Consciousness-Sense-Experience-Properties

alertness

People can have different awake-consciousness levels {alertness}|. Alertness can be high, normal, or low. Physiological factors, such as hormones, stimulation level, novelty, nutrient levels, sleepiness, diseases, and moods, set alertness level. All mammals have alertness levels.

immediacy

Experience seems to happen immediately or in one step {immediacy}. Activity, reasoning, or will does not affect phenomenal-experience generation [Botero, 1999]. People cannot be aware of brain processing. Sensations are after processing. Sensations appear and do not change. Processing does not continue, and quality does not become more refined. (Quality can change with new information.) Perhaps, quality reaches optimum, then equilibrium holds. Perhaps, brain modifies processing to trick consciousness.

incorrigibility

Activity, reasoning, or will cannot correct or improve sensations {incorrigibility}| [Seager, 1999]. People cannot be aware of brain processing. Sensations are after processing. Sensations appear and do not change. Processing does not continue, and quality does not become more refined. (Quality can change with new information.) Sensations can misidentify. Sensations can misremember.

ineffability

Sensations are complex and can have no description except their own existence {ineffability}|. Nothing can substitute for experience. Knowledge about experience requires having the experience [Harman, 1990]. However, language can describe sense-quality properties.

intrinsicness

Subjects are integrated sets of sensations, which depend only on internal processes. Experience is a property, state, process, or essence of subjects {intrinsicness}. Experience depends on subject structures and functions. Alternatively, subject can have experiences [Seager, 1999]. Experience does not need screen or external aid. Experience does not depend on external things or events.

minimal properties of consciousness

Perhaps, consciousness requires ineffability, intrinsicness, immediacy, and privateness {3I-P} {minimal properties of consciousness}.

object unity

Object properties seem to belong to object and thus associate {object unity}. For example, object can be red and spherical. Subject perceives red spherical object, not red object and spherical object with a relation. There is only one object, not two objects. Phenomena link in objects [Seager, 1999].

phenomenal unity

All experiences, including thoughts, moods, and emotions, at one time associate {phenomenal unity}. For example, sight and sound perceived at nearby locations associate. Brain processing adds links that unify them [Seager, 1999].

privateness

Sensations are only available to subject, and direct observation from outside cannot measure them {privateness}. No one else can have the same experience [Seager, 1999].

privileged access

Only subjects, with first-person viewpoints, can have sensations {privileged access} [Alston, 1971] [Gertler, 2003]. Comparing reactions to experiences and subjective-knowledge reports can result in objective knowledge.

space-filling

Sensations depend on whole scene or image and fill all of space and time {space-filling}, leaving no gaps or overlaps.

subjective character

Consciousness happens in people for that person only {subjective character}. Experience is phenomenon to subject, and no other subject can have that experience [Davidson, 2001] [Georgalis, 2005] [Kriegel, 2005] [Nagel, 1979] [Shoemaker, 1996] [Tye, 1986].

transparency of consciousness

People do not perceive experience qualities but only object properties and qualities {transparency, consciousness}. Experiences do not themselves have knowable phenomena or properties [Kind, 2003] [Loar, 2002]. After experiencing object properties, people are only aware that they are having experiences of phenomenal character. Subject can perceive no intermediate to experiences, which are immediately available. Hallucinations are only about object properties.

1-Consciousness-Sense-Experience-Properties-Sense Types

analytic sense

Hearing does not mix tones {analytic sense}. Analytic senses analyze signals from source into independent elements. Touch, smell, and taste are both synthetic and analytic.

synthetic sense

Vision mixes colors {synthetic sense}. Synthetic senses mix signals from source to synthesize resultant sensation. Touch, smell, and taste are both synthetic and analytic.

1-Consciousness-Mental Theories

mental theories

Mind may be process, property, state, structure, or substance {mental theories}. However, phenomena are insubstantial, cannot change state, have no structure, do not belong to objects or events, and are results not processes.

property theories of mind

Perhaps, consciousness is a property {property theories of mind}. Physical-substance properties are features or variables that have values. Properties are stimulus type, frequency range, intensity or concentration range, comparison method, surface location, surface size, surface orientation, sensor and processing location, detection type, and comparison with other sense systems.

state theories of mind

Perhaps, consciousness is a state {state theories of mind}. Physical-substance states are part and energy configurations, with positions and momenta or energies and times.

structure theories of mind

Perhaps, consciousness is a structure {structure theories of mind}. Physical-substance structures are part arrangements with patterns and relations. Brain regions are structures.

Mental structure has non-physical parts and relations. If structure is non-physical, it must still have physical means to move physical things. Physical means must follow physical laws.

Perhaps, hidden natural non-logical structure {hidden structure, consciousness} mediates between mental and physical. Hidden structure allows physical and mental to interact.

substance theories of mind

Perhaps, consciousness is a substance {substance theories of mind}. Mind does not "look at" sense qualities but "is" sense qualities.

non-physical

Perhaps, consciousness is special non-physical substance, such as soul, Ideal, or Form. New substances can explain complex or mysterious phenomena by having needed properties. For example, people noticed that the main difference between living and non-living things was that living things move parts and bodies, so they imagined a new substance, élan vital, that animates otherwise non-living matter. People experienced sense qualities and noticed that only humans can reason, have moral feelings, use language, and/or have subjective experiences. Perhaps, a new substance, soul or conscience, inhabits body and provides it with consciousness. However, non-physical substance must work using non-biological, non-chemical, and non-physical processes. People can learn nothing more about such substances and cannot test them, so they do not provide satisfactory explanations.

physical

Perhaps, consciousness is a new physical substance type. New substances can explain complex or mysterious phenomena by having needed properties. New physical substance types can work through either ordinary biological, chemical, and physical processes or new ones. For example, people felt warmth and noticed that it flows from warmer object to cooler object, so they imagined a new substance, caloric, that can flow. People noticed that wakefulness makes people sleepy, so they imagined a new chemical, the dormant principle, that induces sleep when released in brain. Posited physical substances include electrons, quarks, and Higgs particles. For physical substances, experiments can reveal physical, chemical, and biological properties.

1-Consciousness-Mental Theories-Process

process theories of mind

Perhaps, consciousness is a process {process theories of mind}. Physical processes are events that transform input to output. Processes involve energy or information and flows or transformations. Processes can be top-down or bottom-up.

algorithm

Processes can be algorithms, such as finding square roots, which take input, transform it, and make output. Iterative algorithms can be slow.
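
The square-root example can be sketched as Heron's iterative method, a minimal illustration of an algorithm that takes input, transforms it repeatedly, and makes output (the function name and tolerance are invented for this sketch):

```python
# Heron's (Newton's) iterative square-root algorithm: repeatedly average
# the guess with x/guess until the error falls below a tolerance. The
# fixed-tolerance loop also illustrates why naive iterative algorithms
# can be slow when many passes are needed.

def heron_sqrt(x: float, tolerance: float = 1e-12) -> float:
    """Approximate the square root of a non-negative number by iteration."""
    if x < 0:
        raise ValueError("square root of a negative number is not real")
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0  # any positive starting guess converges
    while abs(guess * guess - x) > tolerance * x:
        guess = (guess + x / guess) / 2  # average guess with x/guess
    return guess

print(heron_sqrt(2.0))  # approximately 1.41421356
```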

non-algorithmic processes

Brain can use non-algorithmic procedures. Vectors and matrices are containers. Point graphs and edge graphs are container adapters. Matrices and graphs represent tables, polynomials, and object properties. Comparing numbers or strings, sorting, and generating random numbers are direct functions. File manipulations include renaming, moving, copying, and making directories.

circuit

Brains with enough mass and complexity have neural circuits and energy to run them. Circuits are loops that allow continuous flow, which brain can modulate and augment. Circuits can have input and output branches. Circuits can interact. Circuits have different lengths, axon numbers, and synapse numbers.

pathway and flow

Brains with enough mass and complexity have pathways and energy to make flows. Pathways carry different intensities, which brain can modulate and augment. Flows have different speeds. Flows can accumulate or dissipate energy or information, over time and space, in registers or other containers. Pathways have different lengths, axon numbers, and synapse numbers. Pathways can branch.

actuality

Perhaps, sense qualities are real properties that cause perception {occurrence, phenomena} {actuality, phenomena}.

reflexive thinking

Perhaps, experience happens if people think about perception reflexively {thinking, reflexive} {reflexive thinking}.

unconscious belief

Perhaps, events observed with sense organs create thoughts {unconscious belief} about perceptual contents, which cause unconscious higher-order recognition or action.

1-Consciousness-Mental Theories-Phenomenal Unity

higher-order sense

Perhaps, phenomenal states unify by higher-order sense {higher-order sense} (HOS) or awareness, which is conscious. However, then higher sense is another conscious object {just more contents}.

higher-order thought

Perhaps, experiences unify by higher-order thought {higher-order thought} (HOT), which is not conscious.

subsumption

Perhaps, phenomenal states are only whole {subsumption}, with no separable parts.

1-Consciousness-Prerequisites

prerequisite functions

Consciousness requires specific functions {prerequisite functions for consciousness} {consciousness, prerequisite functions}.

coding

In a one-millisecond interval, one neuron axon can carry one spike or no spike. Time sequences can make serial binary codes. In a one-millisecond interval, one neuron synapse can carry one neurotransmitter packet or no packet. Time sequences can make serial binary codes. Neuron sets can make parallel binary codes.
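
The serial and parallel codes just described can be sketched minimally, assuming one-millisecond bins (function names are invented, and this is an illustration, not a neural model):

```python
# In each one-millisecond bin an axon either carries a spike (1) or does
# not (0). A time sequence on one axon gives a serial binary code; a set
# of neurons sampled at one instant gives a parallel binary code.

def serial_code(spike_times_ms, window_ms):
    """Read one axon's spikes across time bins as a binary string."""
    spikes = set(spike_times_ms)
    return "".join("1" if t in spikes else "0" for t in range(window_ms))

def parallel_code(neuron_states):
    """Read a set of neurons at one instant as a binary string."""
    return "".join("1" if firing else "0" for firing in neuron_states)

# One axon spiking at 0, 2, and 3 ms in a 5 ms window: code "10110".
print(serial_code([0, 2, 3], 5))
# Four neurons sampled at the same instant: code "0110".
print(parallel_code([False, True, True, False]))
```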

Body analog signals can be chemical concentrations, ion concentrations, pressures, electric currents, or voltages. Neurons can change analog wave inputs into digital spike outputs, using comb filters. Neurons can change axon digital spike inputs into analog wave outputs using parallel oscillators or resistances for frequency modulation.

frequency

Neuron axons can carry frequencies up to 800 Hz. Neuron assemblies can detect mechanical frequencies up to 20,000 Hz, using phase differences. Neurons can detect frequency bands.

Neuron assemblies can modulate neuron-code carrier-wave frequency and amplitude to transfer signals.

intensity

Neurons can detect energy flow using consecutive starting and ending flow points. Neurons can detect intensity ranges. Neurons can detect mass or weight using consecutive starting and ending lifting points.

Brain can use Wheatstone-bridge-like electrochemical mechanisms to null sense quantity, or to tune to its value, to measure it precisely.

space

Neuron pairs can detect starting and ending space points for locations, directions, and distances. Hearing uses amplitude and timing differences to locate sounds. Vision uses angle and distance differences, plus timing and amplitude differences, to locate features. Touch, taste, and smell use relative distance and amplitude differences to locate surfaces. Objects and events can be near/far, right/left, and up/down. Near/far determination uses projection. Right/left determination uses midline. Up/down determination uses horizon.
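
The timing-difference method for locating sounds can be sketched as follows, under stated assumptions: speed of sound 343 m/s, ear separation 0.20 m, and a distant source, so the extra path to the far ear is roughly d·sin(a) and the interaural time difference is ITD = d·sin(a)/c:

```python
import math

# Recovering source direction from the interaural time difference (ITD):
# a = asin(ITD * c / d). Constants are illustrative assumptions.

SPEED_OF_SOUND = 343.0   # m/s, assumed
EAR_SEPARATION = 0.20    # m, assumed

def azimuth_from_itd(itd_seconds: float) -> float:
    """Recover source azimuth (degrees, 0 = straight ahead) from an ITD."""
    ratio = itd_seconds * SPEED_OF_SOUND / EAR_SEPARATION
    ratio = max(-1.0, min(1.0, ratio))  # clamp rounding error
    return math.degrees(math.asin(ratio))

# A delay of about 0.29 ms between ears corresponds to a source roughly
# 30 degrees off the midline; microsecond-scale ITDs resolve to
# fractions of a degree.
print(azimuth_from_itd(0.000292))
```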

ON-center cells can detect local spatial relations. For example, cell can have horizontal band at center to detect space between two objects, band above to detect object above, or band below to detect object below. Cells with horizontal and/or vertical bands can detect all local spatial relations.

motion

Neuron pairs can detect motion using consecutive starting and ending points. Detecting body motions uses correlated sensations, such as front-to-back airflows and kinesthetic sensors. Trajectory perception allows extrapolation and/or interpolation.

vibration

Touch receptors can detect mechanical vibrations up to about 30 hertz, near the lowest frequencies that hearing receptors detect. Below 20 Hz, people feel pressure changes as vibration rather than hearing them as sound. Images flashed at 20-Hz rate begin to blend. 20 Hz is also maximum breathing, muscle-flex, and harmonic-body-movement rate. Muscles can contract at 20-Hz maximum rate, if muscles have no reflex or rebound response, unlike bird or insect wings.

transformation

Neuron assemblies can perform translation, inversion, rotation, reflection, and rotation/reflection by coordinate transformations using tensors.

reference frame

When eyes, head, or body moves intentionally, brain can make perceptual world remain stationary. Brain transforms whole scene, inverting linear transformations that caused movement, to cancel body movements and positions. Brain can transform through all dimensions and all angles for all muscle and body-part motions.
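
The cancellation idea can be sketched in two dimensions: apply the inverse of the rotation the head movement caused, and the scene point returns to its original coordinates (a real system would compose transforms in three dimensions; names are invented):

```python
import math

# Reference-frame cancellation sketch: when the head rotates by angle a,
# the image rotates too; applying the inverse rotation to the whole
# scene restores a stationary perceptual world.

def rotate(point, angle_rad):
    """Rotate a 2-D point about the origin."""
    x, y = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x - s * y, s * x + c * y)

scene_point = (1.0, 0.0)
head_turn = math.radians(25)

seen = rotate(scene_point, head_turn)   # image moves with the head...
restored = rotate(seen, -head_turn)     # ...inverse transform cancels it

print(restored)  # back to (1.0, 0.0) up to rounding
```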

pattern manipulation

Brain can manipulate patterns, such as subtracting string from string or dividing two-dimensional arrays. For example, for strings "1122334455" and "4455", brain can subtract second from first to get "112233" in one operation: $1 - $2 = $3. For matrix, brain can find submatrix in one operation: M1/M2 = M3. In the same way, brain can find remainder in one operation. Brain has directory, pattern, file, and operating-system commands, such as "cmp" or "diff" Unix commands. Using these, brain can find greatest common pattern between two patterns and combine patterns to make larger patterns.
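
The string operations above can be sketched directly (the "$1 - $2 = $3" notation is the author's own; this sketch uses Python's difflib rather than the Unix "cmp"/"diff" tools the text mentions, and the function names are invented):

```python
from difflib import SequenceMatcher

def subtract(pattern: str, part: str) -> str:
    """Remove the first occurrence of `part` from `pattern`."""
    return pattern.replace(part, "", 1)

def greatest_common_pattern(a: str, b: str) -> str:
    """Longest contiguous block shared by the two strings."""
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return a[m.a : m.a + m.size]

print(subtract("1122334455", "4455"))                 # -> 112233
print(greatest_common_pattern("1122334455", "4455"))  # -> 4455
```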

time

Neurons can synchronize signals using timing neurons and chemicals. ON-center cells can detect local temporal relations using spatial-relation changes. For example, cell can have horizontal band at center to detect time between two events, band above to detect event above, or band below to detect event below. Cells with horizontal and/or vertical bands can detect all local temporal relations. Neuron pairs can detect starting and ending time points, using phase differences to measure microsecond intervals.

mathematics

Neurons can add by summing excitatory inputs. Neurons can subtract by adding positive excitatory input and negative inhibitory input. Neurons can multiply using synapses that act like transistors, priming cell bodies to enhance dendritic inputs, or gating axons near synapses. Neurons can divide by multiplying reciprocal quantities. Neurons can find ratios by division, which is inverse multiplication. Neuron assemblies can integrate differentials. Neuron assemblies can differentiate integrals. Neuron assemblies can determine functions by correlating function domain and range values. Neural circuits allow iteration.
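
A toy rate-model sketch of the arithmetic described above; the numbers and the gating rule are illustrative assumptions, not neuroscience:

```python
# Each function mirrors one claim in the paragraph: addition as summed
# excitation, subtraction as excitation plus negative inhibition,
# multiplication as transistor-like gating, division as multiplication
# by a reciprocal quantity.

def add(excitatory_rates):
    """Addition: sum excitatory inputs."""
    return sum(excitatory_rates)

def subtract(excitatory, inhibitory):
    """Subtraction: excitatory input plus negative inhibitory input."""
    return excitatory - inhibitory

def multiply(input_rate, gate):
    """Multiplication: a gating synapse scales the input."""
    return input_rate * gate

def divide(numerator, denominator):
    """Division as multiplication by a reciprocal quantity."""
    return multiply(numerator, 1.0 / denominator)

print(add([3.0, 4.0]))      # -> 7.0
print(subtract(10.0, 4.0))  # -> 6.0
print(multiply(6.0, 0.5))   # -> 3.0
print(divide(6.0, 2.0))     # -> 3.0
```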

indexing

Neuron assemblies index all sensed objects and events, for memory and recall, including body parts and positions. Indexes have cross-references and links. Reference types can differ. For example, text documents can reference document path, page, paragraph, heading, bookmark, footnote, table, figure, slide, equation, or other named or numbered location. Document references can be to numbered pages, numbered paragraphs, text, something above reference, or something below reference.
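
A minimal sketch of an index with cross-references and links, as the paragraph describes for sensed objects and events (the entry names are invented):

```python
# Registering an entry that links to another automatically adds the
# reverse link, so every reference is a cross-reference.

index = {}

def register(name, kind, links=()):
    index[name] = {"kind": kind, "links": list(links)}
    for other in links:
        index.setdefault(other, {"kind": "unknown", "links": []})
        if name not in index[other]["links"]:
            index[other]["links"].append(name)  # cross-reference back

register("left-hand", "body part")
register("red-ball", "object", links=["left-hand"])  # hand holds ball

print(index["left-hand"]["links"])  # -> ['red-ball']
```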

language

Perhaps, consciousness requires language. However, sensations vary more than language [Ramachandran, 2004]. Perhaps, only humans are self-conscious and have feelings, because only they have language.

prerequisite structures

Consciousness requires specific structures {prerequisite structures for consciousness} {consciousness, prerequisite structures}.

body

Body, separate from brain, carries sensors and performs movements. Body surfaces interface with environment and define body-part positions. Body surfaces encounter obstacles and interact with objects.

Body and brain pair structures, for redundancy and cooperation. Redundancy allows one structure to perform required function, while other structure can start new function.

Voluntary muscles allow repositioning sense organs, moving toward and away from perceived objects, and obtaining different viewpoints.

Gravity establishes vertical pressure gradient from toe to head.

Chemical gradients across body, organs, and organ modules establish growth, repair, and development axes.

brain

Brain, separate from body, optimizes distances between processing centers, for fastest speed and most interaction. Brain regions have locations that optimize function. Frontal lobe is in front, so its pathways can loop through thalamus and cortex with correct timing and spatial layout. Cerebellum is at rear, to connect to brainstem and to have proper timing with motor and touch sense systems. Cortex overlaps lower brain regions, to time multisensory and motor processes correctly. Brainstem is central, to activate and interact efficiently.

Brain regions simulate body and environment spatial relations, using topographic neuron layers.

Brain has three-dimensional registers to store information and to make behavior and memory lookup tables.

Neural pathways go out and return to make ring or loop that allows feedback, feedforward, iteration, memory read, memory write, and reverberation.

Brain regions can have circuits that cross up-down, right-left, and front-back. Switchboards allow read and write operations.

Neurons interact with neighboring and distant neurons (neuron assemblies) to make overall behavior, mental function, and mood. Neurons adhere to other neurons and glia to form groups and layers that determine processing.

Neurons have different shapes and chemicals to allow different processing types.

Neurons have excitatory and inhibitory chemical synapses, with synapse plasticity.

Neurons have support cells (glia) that regulate chemical environment.

senses

Sense systems have receptor and neuron-type spatial and temporal patterns, with interconnections to make distinctive signals and codes [Ackerman, 1990].

Multiple sense systems, especially vision, kinesthetic, and touch systems, allow comparisons among sense spaces and intensities to make consensus space and time.

Body needs pain, temperature, kinesthesia, and vestibular systems (inside senses) to know body-part locations.

Body needs vision, hearing, and touch (outside senses) to know outside object locations.

Sense receptors absorb stimulus energy to polarize membranes and measure stimulus intensities. Sense systems have many receptor types.

ground of Earth

Main object outside body is the mostly horizontal ground. Ground establishes horizontal, with fundamental directions straight ahead and right/left. Body is perpendicular to ground, so ground establishes vertical direction. Specialized body surfaces contact ground, and feet feel substantial pressure.

1-Consciousness-Purposes

consciousness purposes

Perhaps, consciousness has purposes {consciousness, purposes}.

categories

Perhaps, without sense qualities, brain can detect only major categories. With sense qualities, brain can detect subtleties needed to recognize complex objects, such as people.

evolution

Perhaps, consciousness performs functions that help organisms get food, defend against predators, or reproduce. If consciousness has survival functions or structures, evolution can adapt it. Evolution, history, random effects, and physical laws affect adaptation, and adaptive traits are not best or perfect [Baars, 1988] [Crick and Koch, 1995] [Johnson-Laird, 1983] [Johnson-Laird, 1988] [Mandler, 2002] [Minsky, 1968] [Minsky, 1985] [Velmans, 1991] [Velmans, 1996]. If awareness and consciousness have no function, evolutionary processes cannot affect them [Cosmides et al., 1992].

needs

Perhaps, consciousness manages drives, desires, moods, and emotions.

self-knowledge

Perhaps, consciousness allows one to know what one will do next.

synapse strength

Perhaps, consciousness strengthens or modifies synapses.

no purpose

Perhaps, consciousness does not cause anything and does not have purposes.

behavior and consciousness purposes

Perhaps, consciousness affects behavior, somatic functions, habits, skills, and reflexes {behavior, consciousness purposes} {consciousness purposes, behavior}.

complex movement

Perhaps, consciousness performs delicate and intricate acts and controls complex and subtle physical movements and forces.

correlation

Perhaps, consciousness correlates body movements and perceptions.

preventing fixed action

Perhaps, only animals with consciousness and will can stop or change fixed action patterns before they begin. Perhaps, consciousness is earlier in the pathway than action and fixed-action triggers.

reflex blocking

Perhaps, consciousness can suppress reflexes and automatic behaviors [Anderson and Green, 2001] [Mitchell et al., 2002].

voluntary movement

Perhaps, consciousness provides frameworks for voluntary movement control.

control

Perhaps, consciousness sends control signals to initiate feedback, feedforward, comparison, and reinforcement. Perhaps, consciousness improves behavior control and complexity, by improving selection among alternative or ambiguous patterns, meanings, or behaviors.

Perhaps, consciousness decides among multiple sense and action modes. Perhaps, consciousness affects decisions, such as whether to fight or flee. Perhaps, consciousness participates in choosing among alternatives or creating new options.

Perhaps, consciousness relates to punishment. People can realize which previous error, event, or choice caused pain, hunger, or frustration.

Perhaps, consciousness relates to reward. People can realize which previous event or choice caused pleasure or success.

Perhaps, consciousness motivates toward goals.

1-Consciousness-Purposes-Cognition

cognition and consciousness purposes

Perhaps, consciousness manages brain information flow and processing {cognition, consciousness purposes} {consciousness purposes, cognition}.

attention

Attention selects input. Perhaps, consciousness marks objects or places. Attending requires muscle and perception coordination. Animals with consciousness can attend to something only if they are already aware of it. Perhaps, consciousness is selective attention. Sense consciousness uses attention, shape, planning, and goal brain regions [Chalmers, 2000] [Ffytche, 2000] [Kanwisher, 2001] [Lumer, 2000] [Lumer et al., 1998]. Attention can be faster than consciousness. Attention to something else can distract from something before awareness of it reaches consciousness.

communication

Perhaps, consciousness allows better communication, by resolving alternative meanings, making analogies, setting expectations, using recursions and nesting, commenting on truth, and marking situations as good or bad. Perhaps, consciousness improves self-communication, using grammar and syntax.

Perhaps, consciousness is necessary for thinking in words and sentences and using symbols in grammatical combinations. Perhaps, consciousness is necessary for grammar.

Single symbols do not require language.

Communication allows lying.

People talk to themselves {talking to oneself} without speaking aloud and hear themselves without using ears.

emotion

Perhaps, consciousness provides positive, negative, or neutral feelings about external objects, internal body parts, and external and internal body events. Sensations can relate to emotions. For example, colors can seem happy or sad, light or dark.

Affect includes lust, caring, panic, play, fear, anger, and search. Affect concerns arousal, instinct, and drives and does not seem to involve cognition. Perhaps, consciousness realizes affect.

Fundamental emotions are propensities to move toward, or away from, something and so are like instincts. Emotions cause attractions or repulsions. Emotions cause behavior, such as attention, "freezing", flight, fighting, and/or embracing.

imagination

Perhaps, consciousness forms and evaluates concepts, images, and actions, without actually using or performing them.

learning

Perhaps, consciousness connects features over space to make patterns or over time to make sequences.

memory

Perhaps, consciousness organizes memory and recall. Perhaps, consciousness is a compression and decompression mechanism, allowing efficient memory storage and retrieval in systems with multisensory data. Perhaps, consciousness organizes memory sequences, to assign rewards and punishments, and manages registers.

perception

Perhaps, consciousness affects perception to generate spatial-field sensations. Sensors can move and change processing to gather useful data and detect complex patterns. Sense-cell arrangements can move or change in anticipation of, and/or in reaction to, perceptions. Mammals use voluntary muscles to explore object to learn more and to locate objects relative to other objects. For example, animals turn body or head to gather more information from ambient signals. Tasting and smelling can actively use tongue and nose.

Perhaps, complex movement and energy detection require consciousness.

Consciousness makes general, specific, and autobiographical images. Vision can remember and scan images. People use viewer-centered coordinates in imagery. Brain cannot readily manipulate images. Images can be in sequence. Eye position is cue to access next image in sequence. Subjects do not image themselves at three-dimensional-scene center.

Perhaps, consciousness fills gaps and integrates regions, to make images consistent and complete.

Perhaps, consciousness combines information from different senses and manages sense interactions. Multisensory convergence can have excitation-excitation (amplification), excitation-inhibition (inhibition and disinhibition patterns), inhibition-excitation (rare), or inhibition-inhibition (rare).

Perhaps, detecting novelty requires consciousness.

Perhaps, consciousness allows image to stay in mind longer, so all scene parts can interact. Attention, serial inspection, planning, and decision-making processes can complete.

Perception can find only scalar intensity values. Perhaps, consciousness allows vector and tensor values, using space or time over which to differentiate.

planning

Perhaps, consciousness participates in setting goals and making plans. Perhaps, though self seems to have goals, brain has competing homeostatic processes, drives, desires, and immediate dangers whose totality results in goals.

will

Perhaps, consciousness controls will, and voluntary actions require sensory space. Though self seems to will actions, people often use voluntary muscles and exert force without being aware, as when people move tables at séances or move Ouija boards. People can feel that they control events, but conscious thoughts actually follow actions. Because thought and action happen at similar times, and thought relates to action, people assume that they control action. However, brain actually initiated event earlier, so that action coordinates with all other actions. If people introspect, they realize that they do not know real causes.

meaning

Perhaps, consciousness aids meaning, because it interprets information using symbol grounding. Meaning is abstract declarative knowledge about relations, and so implies sets and boundaries. Meanings have references, cross-correlations, and associations. Meaning can package things into groups, as when sounds become language. Meaning can compress, categorize, generalize, and discriminate.

Complex syntax can add properties and features that carry meaning. Properties add at higher level, because higher level can include new items and still keep lower level consistent and complete. For example, language keywords can have categories or lengths.

categorization

Perhaps, consciousness aids marking and grouping. Things inside marked boundary form group, and group has name or index. Features, objects, events, patterns, scenes, sequences, frames, and schemas have indexes.

Perhaps, consciousness aids marking and splitting. Markers divide space or time into two regions, with different property values, and so discriminate between them. For example, marks distinguish figure and ground, remember and do not remember, or important and not important.

Perhaps, consciousness compares present with past situations to find differences, errors, or successes. For example, people can realize that later perception shows that previous perception was incorrect or that motion did not have expected effect. People can realize that action was correct or motion was effective.

Perhaps, consciousness relates features. For example, consciousness finds feature ratios or products.

Perhaps, consciousness can recognize individuals, such as face perception and voice tone.

Perhaps, consciousness verifies perceptions, by adding information about interactions to unconscious perception. Added information contributes marginally to better performance, but enough to be adaptive.

social

Perhaps, people require consciousness to distract attention and deceive. For example, primates practice deception by distracting attention, allowing them to steal food or mate [Byrne and Whiten, 1988] [Whiten and Byrne, 1997].

Perhaps, consciousness allows imitation.

Perhaps, consciousness can solve unfamiliar, non-routine problems or solve them more rapidly.

Reporting requires consciousness.

Perhaps, consciousness is for socialization.

1-Consciousness-Self-Consciousness

self-consciousness

Consciousness can refer to itself as whole that exists or perceives {self-consciousness, recursion}| {self-awareness}. Consciousness can know that it has experiences. Consciousness can be aware of its abilities, actions, feelings, memories, perceptions, plans, thoughts, and will. Perhaps, only higher primates have self-awareness.

self

Self can be physical, psychological, or non-physical substance {substantivalism, self}, be property, or have no substance or property per se. Perhaps, self is biological body, mind, ego, or soul.

self: non-self

Living systems can distinguish self from non-self, so they do not eat themselves, fight with themselves, or try to reproduce with themselves. Perceptions from inside senses and outside senses affect other body sensors and make feedback and feedforward signals that help define self and not-self. Algorithms can distinguish inside-body stimuli, as self, and outside-body stimuli, as non-self. Tightening muscles actively compresses, to affect proprioception receptors that define body points. During movements or under pressure, body surfaces passively extend, to affect touch receptors that define external-space points.

nature

Perhaps, self-consciousness is beliefs about body, mind, or world.

Self-consciousness involves thinking about what one has just done, is doing, and/or will do. Self-consciousness is cognition of mental concepts as acting, attending, learning, remembering, perceiving, or reporting, such as perceiving oneself deciding problems or using language.

Perhaps, self-consciousness is knowledge about body, mind, or world.

Perhaps, self-consciousness requires symbol use, language, memory, and/or society.

Self-consciousness involves thinking about what one did in recent and/or distant past.

Perhaps, self-consciousness results from proprioception or other unconscious body perception. Perhaps, self-consciousness results from sense perception.

Self-consciousness requires higher-order thought and is extended consciousness [Damasio, 1999], autonoetic consciousness [Tulving, 1985], higher-order consciousness [Edelman and Tononi, 2000] [Tononi and Edelman, 1998], or reflective consciousness [Block, 1995].

Perhaps, self-consciousness requires interactions with other people, who act as mirrors for oneself, allowing self-observation. People can learn to be self-aware, using verbal reports. As they communicate with more people at higher levels, self-consciousness develops.

Creativity, self, and reason arise from social life, which uses language reflexively. Language and symbolic interaction allow humans to be self-conscious.

hypnosis

Hypnosis can reduce self-consciousness and critical appraisal.

apperception and consciousness

Mental content {apperception, self-conscious}| can be self-conscious.

imageless thought

Self-consciousness does not require having sensation, body image, imagination, or proprioception, because people can have thoughts with no images {imageless thought} [Külpe, 1893].

life story

Thinking about what they have just done, are doing, and/or will do allows people to create stories {narrative, consciousness} {life story}, in which earlier events cause later events, from past to present to future.

prereflexive self-intimacy

Selves are subjects of experiences and have self-consciousness {präreflexive Selbstvertrautheit} {prereflexive self-intimacy}. Self-consciousness is not a concept but a feeling, in which person or self is a subject of experiences, such as seeing someone touch one's hand.

self-monitoring

Consciousness allows monitoring {self-monitoring} behavior and thoughts.

1-Consciousness-Self-Consciousness-Theories

interdependence thesis

Perhaps, self-consciousness depends on representation and comparison. Subject distinguishes self and not-self, positions self in space and time, realizes difference between perceiving act and perceived object, looks at itself objectively in third person, and compares itself to other selves using theory of mind {interdependence thesis}.

interpreter theory

At age 18 months, language areas become active. Perhaps, as they develop, self-consciousness develops [Gazzaniga and LeDoux, 1978] [Gazzaniga, 1980] [Gazzaniga, 1992]. Left-brain regions label, categorize, and describe experiences using language. Language underlies self-consciousness {interpreter theory}.

1-Consciousness-Subjectivity

subjectivity

Consciousness has general subjective feel {subjectivity} {experiencer} {subject, consciousness}, associated with sensations. Consciousness seems to center on self. Subjects seem to stay the same. Subjects seem continuous and indivisible into units. Subjects seem insubstantial and immaterial, with neither mass nor energy. Subjects seem not to extend in space. Subjects seem not to extend in time. Subjects seem to have personality. Subjects seem to be just one observer, with only one viewpoint.

processing

Subjects have beliefs and thoughts. Subjects have external and internal three-dimensional sensory fields. Subjects can be simultaneously conscious of more than one object or event, because one visual fixation can identify multiple objects.

Subjects seem to observe sense qualities in space. Observers seem to be behind sensory apparatus, looking outward at consciousness contents. Observers seem to be at experience center and in scene. Subjects take viewpoints on objects. People are aware they can take different viewpoints and can imagine such viewpoints. Viewpoints and objects are in scenes.

alterity

Otherness, object, and representation {alterity} are opposite of ipseity.

ipseity

I-ness, selfhood, minimal subjective sense, primitive self, minimal self, pre-reflective self, and background consciousness {ipseity} are opposite of alterity.

stream of consciousness

People seem to have personal continuous experience {stream of consciousness}|. Subjective selves think, know, and are thoughts that continue into next thoughts. Stream of consciousness includes near past and near future. Stream of consciousness is continuous. Scene changes every 100 to 150 milliseconds. Like waves shifting from one wavelength to another, stream of consciousness has no abrupt transitions between states. Affect, mood, and aesthetic, dramatic, and religious memories affect stream of consciousness.

Perhaps, subjects have no stream of consciousness, only current-content observations.

unity

Self unifies faculties that act, attend, decide, have conscience, have goals, judge, reason, remember, select, sense, and will [Hodgson, 1870] [James, 1890]. Perhaps, stream of consciousness is consciousness, because "...thought itself is the thinker..." [James, 1890].

1-Consciousness-Studies

consciousness studies

Consciousness studies use first-person, second-person, and third-person methods {consciousness, studies}. First-person methods are purely subjective. Second-person methods are both subjective and objective. Third-person methods are purely objective.

1-Consciousness-Studies-First-Person Methods

first-person methods

People can analyze their thoughts, feelings, perceptions, and subjective experiences. First-person methods {first-person methods} involve existence, infallibility, introspection, phenomenology, privileged access, and subjective knowledge.

Perhaps, sense qualities are different existence-or-being kinds than physical things. Both existence kinds can connect using thinking mind [Searle, 1983] [Searle, 1992] [Searle, 1997].

Perhaps, people can have special subjective knowledge about their sense qualities, knowledge that differs from objective knowledge. Both knowledge kinds can connect using thinking mind [Metzinger, 2003].

introspection

In early first-person methods {introspection}|, people trained themselves to attend to, and think about, their subjective experiences, then report their observations.

People can introspect only mental states that are subjective or have subjective, phenomenological characteristics. Introspection does not reveal body processes [James, 1890] [Titchener, 1904] [Wundt, 1873].

Individuals have large phenomenal-experience differences, and introspective reports are not reproducible. Observing always requires hypotheses about what is happening [Lyons, 1986]. Introspection does not always understand, predict, or control [Barlow, 1987] [Barlow, 1995].

phenomenology method

In early first-person methods {phenomenology method}, people trained themselves to try to suspend all judgments and hypotheses while they attended to their subjective experiences [Heidegger, 1996] [Husserl, 1905] [Husserl, 1907] [Husserl, 1913] [Merleau-Ponty, 1945] [Richardson and Velmans, 1997] [Stevens, 1997] [Stevens, 2000]. Consciousness study today uses some phenomenology methods [Depraz, 1999] [Hut, 1999] [Stevens, 2000] [Varela and Shear, 1999].

1-Consciousness-Studies-Second-Person Methods

second-person methods

Methods {second-person methods} can involve consensus, intersubjective analysis, intersubjectivity, and neurophenomenology. Comparing experience reactions and subjective-knowledge reports can result in objective knowledge. By observing from both their and other viewpoints and comparing responses, people can reach consensus about subjective experiences. Operational procedures that reproducibly and reliably report sensations allow objective study of subjective experience [Velmans, 1996] [Velmans, 1999] [Velmans, 2000].

intersubjective analysis of consciousness

Rather than only considering first-person viewpoint, consciousness study can include a second person. Two people can try to understand consciousness by each asking what other is experiencing and then exchanging asker and asked roles {intersubjective analysis of consciousness} [Thompson, 2001].

intersubjectivity

Interchanging experiencer and researcher roles can help understand experience. People can exchange reports about same stimulus {intersubjectivity} and so agree on subjective-experience aspects. Combining first-person and third-person provides intermediate second-person intersubjective viewpoints. Consciousness includes phenomena experienced in world, plus body feelings and thoughts [Velmans, 2000].

neurophenomenology

Human-experience reports constrain cognitive-science objective knowledge, and vice versa {neurophenomenology}. Neural assemblies built over time represent recent past, now, and immediate future and correlate with specific sense qualities [Varela, 1997] [Varela, 1999] [Varela et al., 2001].

reflexive model

Mental models or experiences are in surrounding three-dimensional space, where they seem to be. Consciousness includes phenomena experienced in world, plus body feelings and thoughts {reflexive model of consciousness} [Velmans, 2000].

1-Consciousness-Studies-Third-Person Methods

third-person methods

Third-person methods {third-person method} involve experimental phenomenology, heterophenomenology, and reports. People can gather objective knowledge about subjective experience. Animals, robots, and software can model consciousness [Dennett, 1997] [Dennett, 2001].

apprehension span report

Time interval between sensations and verbal report is 100 milliseconds {span of apprehension, report} {apprehension span, report}.

experimental phenomenology

Verbal reports describe stimuli, which are independent variables, and subjective impressions or responses, which are dependent variables. If stimulus spatiotemporal organization changes, responses and response categories change quantitatively and qualitatively. The term phenomenological can equate with phenomenal. Subjective phenomena relate to stimuli and their objects {experimental phenomenology}, allowing theories of contents {eidology} and relations between contents {logology} [Stumpf, 1890].

heterophenomenology

Fundamentally, experiences report brain-activity output or results. Researchers can ask people to report their experiences, observe their behaviors, and scan their brains. Researchers can build stories about subject experiences {heterophenomenology}. Stories can be as close to experience truth as time and effort allow [Dennett, 1991] [Dennett, 2001].

higher-order thought method

A conscious report to oneself {higher-order thought method} can accompany conscious mental states, so brain can monitor itself. Such control system allows recursion, through self-representations [Hofstadter, 1979] [Hofstadter and Dennett, 1981].

phrastic meaning

Feeling has content {phrastic meaning}, which differs from mood and force {neustic meaning} [Hare, 1952] [Hare, 1981].

reporting

People can attend to, learn, perceive, or remember {reporting}| organisms, objects, features, times, and locations. People can make verbal or non-verbal objective reports about subjective experience, during or after experience. Reports are about perceptions, memories, imaginings, beliefs, cognitive states, higher-order thoughts, mental events, mental states, and phrastic meanings. Reporting requires prior sensation, perception, memory, and awareness. People cannot report unconscious thoughts or perceptions. Reports about sensations indicate that most people's experiences are similar. Only humans can report using complete language, but all mammals can communicate.

types

People can report their feelings, and judge emotion reports, objectively [Hare, 1952] [Hare, 1981].

People have private stimuli and responses, only inside themselves and not observable by others. Private stimuli and responses are like reports to oneself {verbal report}. People learn to be self-aware by verbal reports [Skinner, 1938] [Skinner, 1953] [Skinner, 1957].

properties

People can report only some conscious thoughts and perceptions.

People do not express thoughts {unexpressed thought} that they have no intention to report.

Reports are objective and verifiable, allowing scientific analysis and theories.

criticism

Methods similar to literary criticism can analyze reports about consciousness. Criticism can use only actual words, only actual work, emotional reactions, feelings, history, meaning, objective standards, other works, personal viewpoint, principles, relativity, true wording, and theory.

testimony method

People can report their perceptions {testimony method}, which depend on object recognition.

1-Consciousness-Tests

consciousness tests

Consciousness is private, so direct observation from outside cannot sense consciousness. Perhaps, objective tests can prove that systems have sensations and subjective feeling {consciousness, tests}. Such tests require that subjective and objective interact. Are there tests, such as EEG wave or pattern, to prove that human, animal, or machine has consciousness? [Drummond, 2000] [Kulli and Koch, 1991] [Madler and Pöppel, 1987].

animals and human tests

Perhaps, indirect tests, using human-perception methods, can indicate whether animals have consciousness. Perhaps, animals that perceive same visual illusions as humans have same sense qualities. Perhaps, animals that respond to color that only human-like opponent processes can detect have same sense qualities. Probably, anatomy and physiology differences cause animals and humans to have different consciousness levels and types.

behaviors

Sensations do not have to cause overt body movement or glandular activity. Overt behavior appears to need no mental component or factor, so movements or behaviors cannot prove consciousness. Vocalizations, voluntary movements, involuntary movements, learning, anticipation, fear, emotions, preferences, choices, and adaptability to new situations show intelligence but not consciousness.

blushing

Perhaps, blushing reveals that organisms are aware, because only humans blush.

communication

Animals can communicate without consciousness, so communication does not show consciousness.

complexity

Computers can use high-level processing, have complex functioning, use reasoning, use language, and have pseudo-emotional responses, but they do not show consciousness. Turing test is about intelligence, not consciousness.

confidence

Perhaps, perception confidence shows consciousness amount [Arnold et al., 2001] [Dennett and Kinsbourne, 1992] [Kolb and Braun, 1995] [Kunimoto et al., 2001] [Moutoussis and Zeki, 1997] [Nishida and Johnston, 2002] [Zeki, 1998] [Zeki and Bartels, 1999] [Zeki and Moutoussis, 1997].

crying

Perhaps, crying reveals that organism is conscious, because only humans cry.

delay test

Perhaps, animal, baby, or patient ability to delay response after sense stimulus {delay test} indicates consciousness.

emotion complexity

Perhaps, having complex emotions, like confidence and regret, proves consciousness. Emotional feeling typically involves consciousness.

eye focusing

Unconscious people cannot focus eyes. Eye focusing can show that people are conscious.

imitation

Perhaps, animals can feel subjective experiences only if they can imitate. Imitation requires that organism be able to use perception and memory to form behavior representation that can initiate movement. Human infants can imitate sounds, gestures, and body positions. Birds can imitate bird songs. Parrots can imitate sounds. However, animal imitation happens in same situation that caused original behavior and so can just happen by chance and then reinforce. Whales can imitate whale songs [Reiss, 1998].

mirror imitation

Pigeons, monkeys, and apes can use mirrors to guide movements. Children, including autistic children, use mirrors as human adults do.

indirect test

Perhaps, indirect tests, comparing system to human conscious perception, can indicate consciousness. For example, animals can perceive visual illusions or features that depend on consciousness. Perhaps, voluntary muscle or gland action follows stimulus even though no direct pathway connects stimulus and action, as when face muscles react to toe pain.

intelligence

Vocalizations, voluntary movements, involuntary movements, learning, anticipation, fear, emotions, preferences, choices, and adaptability to new situations show intelligence but not consciousness.

machine ability

Machines without consciousness have high-level processing abilities, complex functioning, reasoning ability, language usage, and pseudo-emotional responses. Perhaps, indirect tests, using human-perception methods, can indicate whether machines have consciousness. Perhaps, machines that can perceive the same visual illusions as humans have similar sense qualities. Perhaps, machines that respond to color that only human-like opponent processes can detect have same sense qualities.

movement

Sensations do not have to cause overt body movement or glandular activity, so no specific movements indicate consciousness. Overt behavior appears to need no mental component or factor, so movements or behaviors cannot prove consciousness. Vocalizations, voluntary movements, involuntary movements, learning, anticipation, fear, emotions, preferences, choices, and adaptability to new situations show intelligence but not consciousness.

problem-solving

Animals can solve problems without consciousness, so problem-solving does not show consciousness.

self-control

Machines without consciousness can control themselves, so self-control does not show consciousness.

high-level function

Perhaps, animals can have lower-level subjective experiences but cannot perform higher functions, such as being kind, knowing beauty, being friendly, laughing, being moral, having goals, having motivation, being in love, using high-level language ability, using metaphor, being truly creative, thinking about itself, solving new problem types, and having sensation. People can judge that animals are conscious, because they have high-level processing abilities [Feinberg, 1969].

Turing test

Computers and brain parts receive and analyze information unconsciously. Machines are intelligent if they can successfully imitate humans in a special test {Turing test}| {Turing's test} [Turing, 1950].

In test first part, human questioner must determine which of man and woman in separate room is which, by writing questions to both and analyzing written responses {imitation game}. In test second part, machine replaces man or woman, and questioner must determine which is machine and which is person. Intelligent machines can equal human performance.

For arithmetic, humans perform much worse than computers, allowing human questioner to detect machine by its speed and accuracy.

Turing test is about intelligence, not consciousness [Millican and Clark, 1999].

Prediction Algorithm Test of Understanding

Four-rule algorithm with one-bit input and one-bit output can test machine and human understanding {Prediction Algorithm Test of Understanding}.

test

For random input and output trials, check the following.

If input is ON and output is ON, input stays ON and output stays ON.

If input is ON and output is OFF, input stays ON and output becomes ON but then goes OFF.

If input is OFF and output is ON, input becomes ON and output becomes OFF but then goes ON.

If input is OFF and output is OFF, input stays OFF and output stays OFF.

Machines or people observe trials until they learn the four rules. After that, can people or machines make switches stay OFF or ON by setting input and output?
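The four rules above can be sketched as a state-transition function. This is an assumed reading that collapses each transient ("becomes ON but then goes OFF") to its final value; function names are illustrative, not from the source.

```python
def step(inp, out):
    """Map current (input, output) bits to next settled (input, output) bits."""
    if inp and out:          # rule 1: both stay ON
        return (True, True)
    if inp and not out:      # rule 2: input stays ON; output pulses ON, ends OFF
        return (True, False)
    if not inp and out:      # rule 3: input becomes ON; output dips OFF, ends ON
        return (True, True)
    return (False, False)    # rule 4: both stay OFF

def settles_to(inp, out):
    """Iterate the rules until the (input, output) state is a fixed point."""
    state = (inp, out)
    for _ in range(4):
        nxt = step(*state)
        if nxt == state:
            break
        state = nxt
    return state
```

Under this reading, only the (OFF, OFF) setting keeps both switches OFF, and setting output ON while input is OFF drives both toward ON, so a learner who has induced the rules can answer the test question.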

test alternate

Alternatively, input characters in medium become one switch, with OFF and ON positions. Output characters in medium become one switch, with OFF and ON positions. System has first circuit with solar cell, input switch (run by second solenoid), and first solenoid. System has second circuit with solar cell, output switch (run by first solenoid), and second solenoid.

When first-circuit switch is ON and second switch is ON, first solenoid switches second switch ON, and second solenoid turns first switch ON, so both switches stay ON.

When first-circuit switch is ON and second switch is OFF, first solenoid switches second switch ON, and second solenoid turns first switch ON, so both switches stay ON.

When first-circuit switch is OFF and second switch is ON, first solenoid switches second switch OFF, and second solenoid turns first switch OFF, so both switches stay OFF.

When first-circuit switch is OFF and second switch is OFF, first solenoid switches second switch OFF, and second solenoid turns first switch OFF, so both switches stay OFF.

In sun, switches can be OFF or ON; but in shade, both switches can only be OFF. Outside force switches first switch ON or OFF, with no other cause or effect on otherwise-isolated system.

Outside detector uses reflected light from sun to measure switch position, with no cause or effect on otherwise-isolated system.

Time interval from first-switch setting to second-switch final position is less than one unit. Outside force sets first switch at time 0, and detector then measures second-switch position at time 1 unit.

properties

Seemingly, experiment conditions are complete as to hardware, mechanism, and possible events. Inputs and outputs are precise in physical form. Process algorithm is in system and has precise rules. Hard-wired system program transforms input to output.

1-Consciousness-Speculations

consciousness speculations

Biological sciences, computer sciences, mathematics, physical sciences, and psychology can contribute ideas about sensations, conscious observers, and mental space {consciousness, speculations} {speculations, consciousness}.

dimensionless neuron outputs

Neuron inputs and outputs do not use units, and so they are dimensionless, allowing them to add {dimensionless neuron outputs}. Perceptual input and output are about information, which has no physical units [Shannon, 1948].
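Shannon's point can be illustrated directly: information in bits is a pure number with no physical units, so amounts from independent events add. This is a minimal sketch; the function name is illustrative.

```python
import math

def information_bits(probability):
    """Self-information of an event with the given probability, in bits."""
    return -math.log2(probability)

h1 = information_bits(0.5)            # 1 bit
h2 = information_bits(0.25)           # 2 bits
joint = information_bits(0.5 * 0.25)  # 3 bits: unit-free amounts add
```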

factoring and neurons

Factoring can separate wholes into separate parts with no remainders {factoring and neurons}. Arrays of neuron series can check factors to make regions consistent and complete.

factorable

Dividing factorable numbers by correct factors leaves no remainder and results in one quotient, with no addition. For example, 15 is factorable, because 3 * 5 = 15, and 3 and 5 are unique and prime. Dividing factorable polynomials by correct factors leaves no remainder.

remainder

Dividing factorable numbers by incorrect factors leaves fractions (remainders) that add to the whole-number quotient. For example, 10/3 = 3 + 1/3. Dividing factorable polynomials by incorrect factors leaves remainders.

logarithms

Dividing factorable numbers by all correct prime factors equals one. For example, 8 has three prime factors 2, 2, and 2, and 8/(2*2*2) = 2^3 / 2^3 = 1. Logarithm of 1 is 0. Dividing numbers corresponds to subtracting logarithms: log(2^3) - log(2) = log(2^2), log(2^2) - log(2) = log(2^1), log(2^1) - log(2) = log(1) = 0. Alternatively, in base 2, log(2^3) = 3 and log(2) = 1, so 3 - 1 - 1 - 1 = 0.

Dividing factorable numbers by incorrect factors does not equal one. For example, dividing 8 by 3 makes 2 + 2/3. Logarithm is not zero. For example, log(8) - log(3) is not zero.
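The factoring, remainder, and logarithm claims above can be checked with elementary arithmetic:

```python
import math

# A correct factor leaves no remainder; an incorrect factor leaves a
# fraction; dividing out all prime factors corresponds to subtracting
# one logarithm per factor, down to log(1) = 0.

correct = divmod(15, 3)       # (5, 0): no remainder, quotient 5
incorrect = divmod(10, 3)     # (3, 1): 10/3 = 3 + 1/3

# 8 = 2 * 2 * 2. In base-2 logarithms, log2(8) = 3 and log2(2) = 1.
residual = math.log2(8) - 3 * math.log2(2)    # 0.0: all factors correct
wrong_residual = math.log2(8) - math.log2(3)  # not 0: 3 is not a factor
```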

harmonic ratios and neurons

Ratios are more harmonic if numerator and denominator have smaller integers {harmonic ratios and neurons}. The most-harmonic ratios are 1:1 and 2:1. The second-most harmonic ratios are 3/2, 4/3, 5/3, 5/4, and 6/5.

Adding 1 to 1, for duplication, makes 2/1 ratio. Dividing 1 by 2, for splitting, makes 1/2 ratio. Repeated doubling and splitting can make all whole-number ratios. Because they can add and divide, neuron assemblies can build harmonic ratios.

Opponent-process output is a ratio with range from 1 to 2. Therefore, high end to low end has ratio 2/1. Middle to low end has ratio 1.5 = 3/2. Middle, of low end to middle, to low end has ratio 1.25 = 5/4. Middle, of middle to high end, to low end has ratio 1.75 = 7/4. Continuing makes ratios 6/5, 7/6, 8/7, 9/8, and so on, including 15/8, 7/4, 5/3, and 7/5. Opponent-process harmonic ratios can indicate categories.
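The repeated-halving idea can be sketched with exact fractions; the function name is illustrative. Bisecting the opponent-process range [1, 2] twice yields small-integer points, and ratios of adjacent points give the 5/4, 6/5, 7/6, 8/7 series named above.

```python
from fractions import Fraction

def bisection_points(levels):
    """Points 1 + k/2^levels across the range [1, 2], as exact fractions."""
    n = 2 ** levels
    return [Fraction(n + k, n) for k in range(n + 1)]

points = bisection_points(2)    # 1, 5/4, 3/2, 7/4, 2
adjacent = [b / a for a, b in zip(points, points[1:])]   # 5/4, 6/5, 7/6, 8/7
```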

line length

Line lengths have ratios. Geometric figures with sides in harmonic ratios have symmetries and high information. Geometric figures with sides in non-harmonic ratios have few symmetries and low information.

1-Consciousness-Speculations-Observer

semi-observer

The first observer {pre-observer} {semi-observer} had no true space, time, intensity, or quality. Instead, semi-observer had semi-space and semi-qualities. Semi-observer had semi-feelings of self-awareness. It was like objectless meditation, blindsight, and attention without awareness.

vector space and observer

Vector space can model observer and observations as integrated system {vector space and observer}. Coordinate origin is observer. Vector termini are observations.
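The vector-space model can be sketched minimally, with observer at coordinate origin and observations as vector termini; class and method names are illustrative, not from the source.

```python
class ObserverSpace:
    """Observer at the coordinate origin; observations as vectors from it."""

    def __init__(self, observer_position):
        self.origin = observer_position

    def to_observer_vector(self, observation):
        """World-coordinate observation -> observer-centered vector terminus."""
        return tuple(o - c for o, c in zip(observation, self.origin))

space = ObserverSpace((2.0, 0.0, 1.0))
v = space.to_observer_vector((5.0, 4.0, 1.0))   # terminus relative to observer
```

Placing the observer at the origin integrates observer and observations into one system: every observation is defined by its vector from the same point.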

1-Consciousness-Speculations-Observer-Biology

body motions and observer

Head and body rotate around centers. Vestibular, kinesthetic, and visual feedback makes motor centers into perceptual centers, which define observation point {body motions and observer}.

circuit flows and observer

Brain is large and complex and can have internal circuit flows, one of which represents observer {circuit flows and observer}. Loops allow reverberations, feedback, and feedforward, to maintain processes. Observer and observed circuit flows interact.

origins of self

Sensory and central neurons have electrochemical processes, have associative memories, and control motor neurons. Ganglia use neuromodulators, have procedural memories, and use statistical and vector processes to control motor-neuron sets. Brains are ganglia sets that use statistical and tensor processes to coordinate body, head, and limb motions. Vertebrate brains have perceptions and declarative memories and use nested processes [Hofstadter, 2007]. Self began with a central perception and behavior process {origins of self} that nests and controls other brain processes.

Algorithms can distinguish inside-body stimuli, as self, and outside-body stimuli, as non-self. Tightening muscles actively compresses, to affect proprioception receptors that define body points. During movements or under pressure, body surfaces passively extend, to affect touch receptors that define external-space points.

resonating wave and observer

Brain can have resonating waves, one of which can represent observer {resonating wave and observer} {self-wave}.

1-Consciousness-Speculations-Observer-Psychology

frames and observer

Sensations occur in contexts (frames), which indicate spatial relations. One context is observer {frames and observer}.

knowing and having meaning

Knowing {knowing, meaning} can be recognition or association.

property

Properties are about something measurable, such as location, time, intensity, or sense quality. Objects and events have properties. Properties have relative values. Perception measures property values while keeping property natures or types abstract and separate. Sensation measures values but also assigns property types and natures, so colors, sounds, and so on, have meaning.

category

Categories are object or event groups. Category objects share at least one property value. People can group objects and events to make categories (association). People can put objects or events into (previously memorized) categories (recognition). Categories have subcategories and supercategories.

meaning

Meaning {meaning, knowing} requires knowing something about property, not just property values. For example, meaning requires knowing something about red, not just intensity value.

meaning: value relations

Perception builds property-value series from repeated situations. Property-value sequences can reveal functions, such as the successor rule x -> x + 1, and other relations. Value changes (gradients or flows) and value-change changes (accelerations or forces) can have relations and reveal property-value functions. By remembering and comparing property values, brain can find property meaning by transforming to new properties that can be parameters, by associating properties to make categories, and by recognizing category members.
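Finding a relation in a property-value series can be sketched as taking differences: constant first differences reveal a successor-style rule x -> x + c, and second differences (value-change changes) reveal acceleration. Names here are illustrative.

```python
def differences(values):
    """First differences of a value series: the gradient or flow."""
    return [b - a for a, b in zip(values, values[1:])]

series = [3, 4, 5, 6, 7]        # generated by the rule x -> x + 1
first = differences(series)     # constant gradient
second = differences(first)     # zero acceleration
is_successor_rule = all(d == first[0] for d in first) and first[0] == 1
```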

symbol grounding

Symbol systems give meaning to symbols. Property systems give meaning to property types. Symbols have grounding when they associate with spatial or temporal patterns. Property types have grounding when they have spatial or temporal patterns. Property types depend on symbol systems with grounding.

self and first sensation

Perhaps, sense qualities locate the point that receives incoming stimuli, to define observing self {self and first sensation}. However, behavior is not sense qualities, and self seems complex.

1-Consciousness-Speculations-Sensation

intensity and space and sensations

Sense qualities require intensity, space, and time {intensity and space and sensations}.

intensity fluctuation type and sensations

Light-wave amplitude has frequency. Sound-wave amplitude has frequency. Touch involves vibrations with frequencies. Smell and taste involve molecule collisions that cause molecule vibrations. For all senses, stimulus intensity fluctuates. Different senses have different vibration types. Perhaps, sense qualities are intensity-fluctuation types {intensity fluctuation type and sensations}. Intensity fluctuation involves frequency modulation and/or amplitude modulation.
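The two fluctuation types named above can be illustrated with two signals: an amplitude-modulated (AM) signal whose envelope varies, and a frequency-modulated (FM) signal whose envelope stays constant. Parameter values are arbitrary.

```python
import math

RATE = 10_000                        # samples per second
t = [i / RATE for i in range(RATE)]  # one second of time points

carrier, mod, depth = 100.0, 5.0, 0.5
# AM: the modulator scales the carrier's amplitude.
am = [(1 + depth * math.sin(2 * math.pi * mod * x))
      * math.sin(2 * math.pi * carrier * x) for x in t]
# FM: the modulator shifts the carrier's phase/frequency.
fm = [math.sin(2 * math.pi * carrier * x
               + depth * math.sin(2 * math.pi * mod * x)) for x in t]

am_peak = max(abs(v) for v in am)  # exceeds 1: envelope fluctuates
fm_peak = max(abs(v) for v in fm)  # at most 1: constant envelope
```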

self-observable sensations

People can observe sensations but not physical stimuli. Sensations have observers. Sensations are self-observable {self-observable sensations}.

sensation parameters

Sense functions have three parameters {sensation parameters}, making a function family. Different senses have different subfamilies. Within a sense, sense qualities have the same function with different parameter values.

sensation parameters: intensity

Intensity goes from zero to pain. Vision and other senses add receptor inputs to find intensity, and then compare adjacent intensities to find relative intensity. Intensity involves time, distance, momentum, and energy, which are never negative.

sensation parameters: non-opposing quality

Non-opposing quality goes from zero/low through middle to high maximum. Examples are frequency, concentration, polarity, shape, density, and absolute temperature, which are never negative. Light, sound, and vibrations have frequency. Molecules have shape, concentration, and polarity. Materials have temperature and density.

sensation parameters: opposing quality

Opposing quality can go from negative to zero to positive. Examples are charge and spin. Opposing quality can go from below to neutral to above. Examples are acidity and cool to neutral to warm temperatures, as well as compression to equilibrium to tension. Opposing quality can go from left to symmetric to right. Examples are handedness and parity. For opposing qualities, range ends are opposites.

sensations

Sensations always have intensity and have at least one opposing quality and at least one non-opposing quality. Senses combine sensation parameters to make sensations.

stimulus matching and sensations

Sensations evolve to become best matches to physical information {stimulus matching and sensations}. Brain analyzes inputs to find categories and relations and then synthesizes abstract variables to replace physical variables to make output model the input stream. Output and input integrate to make a unified whole.

time and sensations

Physical processes use gravitational and/or electromagnetic forces and so take very short times. Mental processes are about information flows and have arbitrary times. Low-level mental processes occur over 20-millisecond to 200-millisecond intervals {time and sensations}. High-level mental processes occur over hours. 20-millisecond and longer times allow neuron-assembly activity-pattern integration and expression as sensations.

tingle

Neuron-assembly activity patterns evolved to become tingles {tingle} {pre-qualities} {semi-qualities}, which later differentiated to become sense qualities. All sensations share an underlying vibrational state.

predecessor

Physical quantities have real-number amounts and have units, such as volts or frequencies, which may have directions. Physical quantities occur at one time and place. Physical intensities use only energy, area, direction, and time. Nerve impulses and neuron-assembly activity patterns involve physical quantities. Sense systems use neuron-assembly activity patterns.

tingles

At first, all senses had only intensities. Evolution then began to work on neuron-assembly activity patterns to make kinetic effects and overall oscillations. These abstract vibrations are tingles. Tingles are non-local. Tingles have intensities. Tingles are the beginnings of sense qualities. Tingles are vibrations of new abstract physiological variables that combine physical quantities. Different senses have different vibration types. Within a sense, different sense qualities have the same vibration type but different frequencies and harmonic ratios.

Tingles derive from neuron transient vibrations after stimuli. Neuron assemblies can combine inputs to make microphonic neuron signals [Saul and Davis, 1932], with frequencies up to 800 Hz.

Semi-qualities are undifferentiated sense qualities and have semi-feeling and semi-meaning. Tingles are between neuron physiology and sense qualities. Like neuron signals, tingles can vary in wavelength, frequency, frequency range, frequency distribution, amplitude, amplitude change rate, amplitude acceleration, intensity, phase, persistence after stimulus, direction, rotation, orientation, pitch, roll, yaw, and wobble. Waves can have different forms, such as longitudinal, transverse, polarized, spherical, and ellipsoidal. Like sense qualities, tingle semi-qualities have semi-time, semi-space, and semi-intensity.

antecessor

As tingles evolved, they differentiated into sense qualities with intensities. Sensations have no real-number amounts and have no units. Rather, sensations have relative amounts and sense qualities. Sensations combine intensity amount and unit into a post-tingle. Tingle frequency, spatial extent, and amplitude differentiated to make different sense types. Within a sense type, post-tingles vary and make sense qualities, such as red.

1-Consciousness-Speculations-Sensation-Biology

animals and sensations

Animals are Pre-Cambrian invertebrates, Cambrian invertebrates, chordates, vertebrates, fish, lobe-finned fish, freshwater lobe-finned fish, amphibians, reptiles, mammals, primates, Old World monkeys, apes, Homo erectus, and Homo sapiens, in order of evolution. Many people believe that mammals have consciousness and sense qualities {animals and sensations}. However, mammal brain parts and functions are similar to other-vertebrate brain parts and functions, so mammals seem to have nothing fundamentally new in brain.

behavior and perception

Perception is like behavior in that input triggers output {behavior and perception}. Coordinated switches trigger muscle movements, gland outputs, and perceptions. Perception and behavior have feedback, looping, and exchanging. Muscles, glands, and nerves work together, as do sense receptors and brain. Behavior and perception use whole brain and body.

energy flow and sense intensity

Sense receptors measure kinetic-energy flow onto their receptive-field area {energy flow and sense intensity}. For example, taste receptors measure salt concentration as salt-to-receptor binding per second, which transfers kinetic energy per second. Kinetic energy flow transforms to potential-energy change. Membrane-potential changes and molecule-potential-energy changes continually transfer stimulus energy to new neurons. Neuron coding represents neuron potential-energy and kinetic-energy transfers. Electrochemical flows are like kinetic energy, and electrochemical states are like potential energy {electrochemical potential and kinetic energy}. Electric circuits have resistances, capacitances, and inductances, and pipes have constrictions and standpipes.

1-Consciousness-Speculations-Sensation-Biology-Brain

brain evolution and first sensation

Perhaps, sense qualities arose in humans or mammals from new brain regions or functions {brain evolution and first sensation}. However, human and mammal brain regions and functions are similar to other-vertebrate brain regions and functions, so humans and mammals seem to have nothing fundamentally new in brain.

brain region duplication and multisense qualities

After sense-region duplication, the original region performs the original function, so the duplicated region can evolve to perform new functions, such as receiving from another sense and integrating two senses {brain region duplication and multisense qualities}.

color processing

Vision processing {color processing} represents color brightness, hue, and saturation.

photoreceptors

Rods have photopigment with maximum sensitivity at bluish-green 498 nm, to measure light intensity. Cone types have maximum sensitivity at one wavelength and lower sensitivities at other wavelengths.

Non-primate mammals have cones with photopigments with maximum sensitivity at indigo 424 nm to 437 nm (short-wavelength receptor) and yellow-green 555 nm to 564 nm (long-wavelength receptor). Non-primate mammals can distinguish colors over the same light-frequency range as primates. Because they have only one color dimension, they may or may not see subjective colors.

Primates have cones with photopigments with maximum sensitivity at indigo 437 nm (short-wavelength receptor), green 534 nm (middle-wavelength receptor), and yellow-green 564 nm (long-wavelength receptor). Because they have two color dimensions, they may see subjective colors.

neurons

ON-center and OFF-center neurons calculate cone-input sum, which represents intensity, or ratio, which represents light frequency. The first opponent-process ratio was for yellowness and blueness. The second opponent-process ratio was for redness and greenness.

Later processing categorizes colors. Perhaps, whiteness can change to light yellowness, and blackness can change into dark blueness. Perhaps, yellowness split into darker orangeness and lighter greenness, which mixes blueness and yellowness. Perhaps, orangeness becomes redness.

labeled lines and topographic maps

Visual-tract axons carry color-blob opponent-process information from retina to lateral-geniculate-nucleus and primary-visual-cortex topographic maps. Senses have labeled lines because their neurons follow sense-specific pathways and have physiological specializations.

color lightness

The lightness color parameter relates directly to the difference between brightness and short-wavelength-receptor output: M + L - S. In order of increasing color lightness, black causes no response. Blue has small M-receptor and L-receptor outputs and large S-receptor output. Red has middle M-receptor and L-receptor outputs and small S-receptor output. Green has large M-receptor and L-receptor outputs and medium S-receptor output. Yellow has large M-receptor and L-receptor outputs and medium-small S-receptor output. White has very large M-receptor and L-receptor outputs and medium S-receptor output. Therefore, subjective color lightness relates directly to the blue-yellow opponent process.

color temperature

The temperature (warmth and coolness) color parameter relates directly to difference of long-wavelength-receptor and middle-wavelength-receptor outputs: L - M [Hardin, 1988]. In order of increasing color temperature, blue has small L-receptor and medium-small M-receptor outputs. Green has medium L-receptor and large M-receptor outputs. Black causes no response. White has very large L-receptor and M-receptor outputs. Yellow has large L-receptor and large M-receptor outputs. Red has large L-receptor and medium M-receptor outputs. Therefore, subjective color temperature relates directly to the red-green opponent process.

brightness, lightness, temperature

If black has brightness 0, and if blue, red, and green have maximum brightness 1, then brightness ranges from 0 to 3. Magenta adds blue and red to make 2. Cyan adds blue and green to make 2. Yellow adds red and green to make 2. White adds blue, red, and green to make 3.

If blue, red, and green have lightness 1, 2, and 3, respectively, lightness ranges from 0 to 6. Magenta adds blue and red to make 3. Violet adds blue and half red to make 3. Orange adds red and half green to make 3.5. Cyan adds blue and green to make 4. Chartreuse adds half red and green to make 4. Yellow adds red and green to make 5. White adds blue, red, and green to make 6. Blue and yellow, red and cyan, and green and magenta add blue, green, and red to make white 6.

If blue, green, and red have temperature -2, 0, and 2, respectively, temperature ranges from -2 to +2. Cyan averages blue and green to make -1. Magenta averages blue and red to make 0. White averages blue, red, and green to make 0. Blue and yellow, red and cyan, and green and magenta average blue, green, and red to make white 0. Chartreuse averages half red and green to make 0.5. Yellow averages red and green to make 1. Violet averages blue and half red to make 1. Orange averages red and half green to make 1.5.

If brightness is first coordinate, lightness is second coordinate, and temperature is third coordinate, blue is (1,1,-2), red is (1,2,2), and green is (1,3,0). Magenta is (2,3,0). Cyan is (2,4,-1). Yellow is (2,5,1). White is (3,6,0). Black is (0,0,0). Darkest gray is (0.5,1.0,0.0). Dark gray is (1,2,0). Gray is (1.5,3.0,0.0). Light gray is (2,4,0). Lightest gray is (2.5,5.0,0.0).
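The coordinate scheme above can be written out as a short check; a minimal sketch, assuming the weights stated earlier (brightness 1 per full primary; lightness 1, 2, 3 and temperature -2, +2, 0 for blue, red, green), with illustrative function and variable names:

```python
# Sketch of the brightness/lightness/temperature coordinates above.
# A color is a mix of primary amounts (0 to 1 each). Assumed per the
# text: brightness sums the amounts; lightness weights are blue=1,
# red=2, green=3; temperatures blue=-2, red=+2, green=0 are averaged
# over the primaries present in the mix.

LIGHTNESS = {"blue": 1, "red": 2, "green": 3}
TEMPERATURE = {"blue": -2, "red": 2, "green": 0}

def coordinates(mix):
    """mix maps primary name -> amount; returns (brightness, lightness, temperature)."""
    brightness = sum(mix.values())
    lightness = sum(amount * LIGHTNESS[p] for p, amount in mix.items())
    present = [TEMPERATURE[p] for p, amount in mix.items() if amount > 0]
    temperature = sum(present) / len(present) if present else 0.0
    return (brightness, lightness, temperature)

blue = {"blue": 1}
magenta = {"blue": 1, "red": 1}
white = {"blue": 1, "red": 1, "green": 1}
gray = {"blue": 0.5, "red": 0.5, "green": 0.5}
```

With these weights, the function reproduces the listed tuples, for example blue (1,1,-2), magenta (2,3,0), white (3,6,0), and gray (1.5,3.0,0.0).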

brightness and blackness

The brightness color property depends on the brightness color parameter, which sums long-wavelength-receptor and middle-wavelength-receptor outputs: L + M. Black has low brightness. Blue wavelength is far from L and M maximum-sensitivity wavelengths, so blue is dim. Red wavelength is closer to L and M maximum-sensitivity wavelengths, so red has average brightness. Green wavelength is close to L and M maximum-sensitivity wavelengths, so green is bright. White adds green, red, and blue and is brightest.

saturation and whiteness

Colors can have whiteness. White adds to primary colors linearly and equally. Any color mixture has red, green, and blue. In any color mixture, red, green, or blue has the lowest brightness, and the other two colors have at least that brightness. Therefore, whiteness is three times the lowest-brightness-primary-color brightness. Subtracting lowest-brightness-primary-color brightness from the other two primary-color brightnesses, and then adding the two results defines hue brightness. Saturation is hue brightness divided by total brightness. Unsaturation is whiteness divided by total brightness. Hue brightness and whiteness add to 100%. Vision processing compares adjacent and overall brightnesses to adjust brightness and so saturation.
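The whiteness and saturation arithmetic above reduces to a few lines; a minimal sketch, assuming primary brightnesses are nonnegative numbers and total brightness is their sum (function name illustrative):

```python
# Sketch of the whiteness/saturation arithmetic above.

def analyze(red, green, blue):
    total = red + green + blue
    lowest = min(red, green, blue)
    whiteness = 3 * lowest                 # white uses all three primaries equally
    hue_brightness = total - 3 * lowest    # subtract lowest from the other two and add
    saturation = hue_brightness / total
    unsaturation = whiteness / total
    return whiteness, hue_brightness, saturation, unsaturation
```

For example, analyze(1.0, 0.5, 0.25) gives whiteness 0.75 and hue brightness 1.0, and saturation plus unsaturation always equals 100%.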

hue

Three photoreceptor types and two opponent processes determine color categories [Krauskopf et al., 1982]. Two color-blob-neuron opponent processes detect red-green and blue-yellow ranges [Livingstone and Hubel, 1984].

Retina unit areas have one Long-wavelength, one Middle-wavelength, and one Short-wavelength cone. See Figure 1. Any-wavelength light excites all cones. Retina opponent processes calculate L - M and L + M - S. See Figure 2. Comparing opponent processes, using thresholds to separate continuous frequency-intensity spectra into discrete categories, selects three color categories. If both opponent-process ranges can be from -1 to +1, blue is (-1,-1), green is (0,0), and red is (+1,+1), where the first value is the L - M range, and the second value is the L + M - S range.

Comparing opponent processes selects four color categories. Blue is (-1,0), green is (0,+1), yellow is (+1,+1), and red is (+1,0). See Figure 2.

Adding the black-gray-white sense process selects the red, yellow, green, blue, black, gray, and white color categories. See Figure 3.
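The four-category selection above (blue (-1,0), green (0,+1), yellow (+1,+1), red (+1,0)) can be sketched as sign-thresholding of the two opponent processes; the cone outputs and the threshold value are illustrative, and the black-gray-white process is omitted:

```python
# Sketch of category selection by thresholding the two opponent
# processes described above. Cone outputs L, M, S are illustrative
# numbers, not measured responses.

def sign(x, eps=0.1):
    """Threshold a continuous opponent value into -1, 0, or +1."""
    return 0 if abs(x) < eps else (1 if x > 0 else -1)

CATEGORIES = {(-1, 0): "blue", (0, 1): "green", (1, 1): "yellow", (1, 0): "red"}

def categorize(L, M, S):
    red_green = sign(L - M)          # first opponent process
    blue_yellow = sign(L + M - S)    # second opponent process
    return CATEGORIES.get((red_green, blue_yellow), "uncategorized")
```

For example, cone outputs with S dominant and L below M fall in the blue category, while L above M with balanced L + M - S fall in red.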

Vision processing subtracts the smallest primary-color brightness from the other two primary-color brightnesses, and then adds the two results to find hue brightness. Vision processing compares adjacent and overall hue brightnesses to adjust local hue.

opponency pairs

Brightness opponency pairs with darkness opponency. Yellow-blue opponency pairs with blue-yellow opponency. Red-green opponency pairs with green-red opponency. Brain compares opponency pairs for verification and discrimination.

color constancy

Visual-area-V4 neurons account for background illumination, which reflects differentially from local areas, to make color constancy. Spreading excitation, lateral inhibition, and object and object-relation knowledge help make color constancy.

location

A separate visual system finds color spatial locations. The location system finds visual angle (space direction) and distance.

color and location integration

Location system and color system information integrate to specify contrast, color, orientation, shape, location, distance, and time.

continuity and sensations

Television-screen electron guns excite phosphors that shine until beam returns, so picture persists. Sensory-motor processing exchanges information and interconnects neurons faster than neuron signals decay, making spaces, times, intensities, and sense qualities continuous {continuity and sensations}.

high-level processing and sensations

Low-level processing determines high-level processing, and high-level processing sends feedback to low-level processing. However, high-level-processing feedback is not noticeable, because it causes only secondary effects, has complex features, is statistical, uses whole brain, and takes much longer times {high-level processing and sensations}.

invariants and sensations

Holding all variables, except one, constant can find the derivative with respect to the non-constant variable. Unchanging partial differentials are invariants. Neuron-assembly processing can detect perceptual invariants {invariants and sensations}. Invariants persist and so can become memories.
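Holding all variables but one constant is, numerically, a finite-difference partial derivative; a minimal sketch with an illustrative function:

```python
# Sketch of finding a partial derivative by holding all variables but
# one constant, as described above; f is an illustrative function.

def partial(f, point, i, h=1e-6):
    """Central finite difference of f along coordinate i at point."""
    lo, hi = list(point), list(point)
    lo[i] -= h
    hi[i] += h
    return (f(hi) - f(lo)) / (2 * h)

def f(p):
    x, y = p
    return 3 * x + x * y   # df/dx = 3 + y, df/dy = x
```

At (2, 5) this gives df/dx = 8 and df/dy = 2; a partial derivative that stays the same across the domain is an invariant in the sense above.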

perception change and first sensation

Perhaps, brain compared before-and-after or adjacent perceptions, and perception changes caused first sensation {perception change and first sensation}. Perhaps, brain compared perception and memory, and sense qualities arose from spatial-gradients, temporal-gradients, differences, or errors. For example, people can realize that motion does not have expected effect. Error can cause punishment or can lower reward. Perhaps, brain detected position differences, and sense qualities arose as movement perception. Perhaps, sense qualities arose as perceptual-process modification, distinction, realization, notice, feeling, or comparison. However, changes, differences, gradients, and errors use same units as original quantities and so are not new things.

probability and sensations

Conscious sense qualities have largest combination number and so highest-probability state {probability and sensations}.

response internalization and first sensation

Stimuli tend to cause muscular or glandular responses. Perhaps, sense qualities arose as responses became notes, marks, or signals {response internalization and first sensation}. Alternatively, brain processes can inhibit tendencies or internal signals. However, behavior is not sense qualities.

statistics and sensations

Sensory-motor processes use many parallel processes and storage registers and are statistical {statistics and sensations}. Because many points contribute to results, which narrow to one distribution and average, resolution can be high.

synchronization and arousal

Synchronizing neuron signals increases intensity by causing simultaneous arrival. Synchronous alpha waves cause arousal {synchronization and arousal}.

1-Consciousness-Speculations-Sensation-Biology-Brain-Topographic Map

circuit flows and sensations

Topographic maps can have neuron circuits [Gutkin et al., 2003]. Circuit-flow waves and local patterns can represent objects and sense qualities {circuit flows and sensations}. Vibrations, accelerations, jolts, eddies, vortexes, turbulence, and streamlining, with varying dimension, frequency, phase, and amplitude, can represent sense intensities and qualities. Different senses have different flow patterns.

Intensity is kinetic energy flow per area in flow longitudinal direction.

Liquid flows have lateral-pressure patterns, liquid pools have transverse waves from wind and forces, and moving charges have transverse magnetic fields. Sense-quality information is in two transverse potential energy (not distance) coordinates. Circuit flows have cross-sectional shapes, like random-dot stereograms hold stereoscopic patterns.

Reticular-formation input starts and sustains circuit flows. Topographic-map circuit elements control, analyze, and modulate flows, using stimuli, feedback, feedforward, or hormones.

neuron-array output ratios and sensations

Sense qualities are topographic-map local neuron-activity patterns [Schiffman, 2000] {neuron-array output ratios and sensations}.

registers and sensations

Topographic maps have variable-size three-dimensional registers that hold objects with sense qualities {registers and sensations}. Registers work together to represent motions.

topographic-map displays and sensations

Topographic-map neurons can be Off, On, or in between, like a black-and-white TV screen {topographic-map displays and sensations}. Topographic-map neuron activities can make geometric patterns, such as lines, circles, and ellipsoids. Changing neuron activities can make movements, flows, vibrations, orbits, spins, and waves.

1-Consciousness-Speculations-Sensation-Biology-Neuron

coding and sensations

Neuron electrochemical processes determine axon-impulse instantaneous frequency, average frequency, and frequency changes, and synapse neurotransmitter-packet release rate {coding and sensations}. Impulses and packets are discrete and make digital code. Summing, averaging, synaptic transmission, transmitter binding, feedback, neuron interactions, and neurohormones blur digital code to make essentially analog code.

stimulus energy and receptors

Stimuli transfer kinetic energy to receptors, which require a minimum (threshold) energy to respond. Light photons collide with, and transfer kinetic energy to, retinal photoreceptors {stimulus energy and receptors}. Photons have energy E that depends on electric-field frequency f: E = h * f, where h is Planck constant. Blue-light-photon-energy to red-light-photon-energy ratio is approximately 1.75. Photoreceptors have maximum-sensitivity wavelength and respond less to higher and lower wavelengths.
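The photon-energy relation can be checked directly; taking 400 nm and 700 nm as illustrative endpoints of the visible range reproduces the stated blue-to-red energy ratio of about 1.75:

```python
# Photon energy E = h * f = h * c / wavelength, as in the text.

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photon_energy(wavelength_nm):
    """Photon energy in joules for a wavelength given in nanometers."""
    return H * C / (wavelength_nm * 1e-9)

# Energy ratio of a 400 nm (blue) photon to a 700 nm (red) photon.
ratio = photon_energy(400) / photon_energy(700)   # = 700/400 = 1.75
```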

thresholds as category boundaries

To cause one nerve impulse, neuron input must make neuron membrane potential higher than threshold potential. Below threshold, neuron axon has no impulse, equivalent to 0. Above threshold, neuron axon has one impulse, equivalent to 1. Neuron thresholds split instantaneous input-value range into two output opposites, to make intensity categories {thresholds as category boundaries}. Thresholds convert analog signals to digital signals. Neuron series with increasing thresholds can indicate increasing accumulations/intensities and so categories. Thresholds are the lowest level of meaning: yes or no, present or not, true or false.
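A neuron series with increasing thresholds, as described above, behaves like a thermometer code that converts continuous intensity into a category number; threshold values are illustrative:

```python
# Sketch of a neuron series with increasing thresholds converting a
# continuous intensity into a discrete category, as described above.

THRESHOLDS = [0.2, 0.4, 0.6, 0.8]   # illustrative threshold potentials

def fire(intensity, threshold):
    """One impulse (1) above threshold, none (0) below: yes or no."""
    return 1 if intensity > threshold else 0

def category(intensity):
    """Number of neurons in the series that fire = intensity category."""
    return sum(fire(intensity, t) for t in THRESHOLDS)
```

An intensity of 0.5 crosses the first two thresholds and lands in category 2; the series converts an analog value into one of five discrete categories.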

neurochemical waves and sensations

After receiving sufficient stimulus input, axon-impulse-frequency and synapse-neurotransmitter-packet-release rates typically increase from baseline level, peak, then decrease to baseline, making one wave {neurochemical waves and sensations}, which has 2-millisecond to 20-millisecond time interval. Because they involve few axon impulses, single waves cannot have amplitude or frequency modulation. Neuron-assemblies have coordinated waves that make neuron-assembly activity patterns, to code stimulus intensity, quality, and location.

1-Consciousness-Speculations-Sensation-Chemistry

biochemicals and sensations

Hallucinogens distort mental space and sense qualities. Perhaps, normal biochemicals make undistorted mental space and sense qualities {biochemicals and sensations}.

chemical reactions and sensations

Perhaps, stimuli are like reactants, and perceptions are like products {chemical reactions and sensations}.

1-Consciousness-Speculations-Sensation-Computer Science

filtering and sensations

Filtering removes values outside a frequency and/or intensity range. Filtering defines a category by specifying boundaries {filtering and sensations}. Neuron assemblies use thresholds to establish boundaries and make categories.
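Filtering as category definition can be sketched as a pass band with two boundaries; the band edges are illustrative:

```python
# Sketch of filtering as category definition: keep only values inside
# the boundaries, reject the rest.

def band_filter(values, low, high):
    """Pass values in [low, high); the boundaries define the category."""
    return [v for v in values if low <= v < high]
```

For example, band_filter([1, 5, 9, 12], 4, 10) keeps 5 and 9; the pair of boundaries is the category.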

information processing and sensations

Computers and brains have readers that input data and algorithms, processors that use data and algorithms to make output data, and writers that output data. Computers and brains process information in circuits, transfer information over channels, and store information in structures. Perhaps, mind is dynamic information in brain structures {information processing and sensations}.

operating system and sensations

Computer operating systems control basic functions, such as file manipulation, gathering input, and sending output. Perhaps, minds are like operating systems {operating system and sensations}.

simultaneous mutual interactions and sensations

Analog computers receive continuous voltages or currents and output continuous voltages or currents, so feedback and feedforward simultaneously affect output and input. Simultaneous mutual interaction requires system governors to prevent stops or exponential increases.

Serial digital computers have clocks that step through algorithms in discrete isolated operations that wait for specific input and deliver specific output. Parallel digital computers use clocks and software to deliver inputs and use outputs when appropriate. Digital neural networks step through network layers, so inputs from one layer affect next layer.

Two neurons can exchange information in both directions. One neuron can send excitation directly to other neuron. Other neuron can send excitation directly to third neuron, which sends inhibition directly to first neuron. Brain electrochemical signaling continuously goes through many interconnected circuits simultaneously, so inputs continually affect outputs, and outputs continually affect inputs, unifying and nesting sensation and action and causing continual recursion {simultaneous mutual interactions and sensations}. Perhaps, mind requires simultaneous mutual interactions.
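The three-neuron loop above (excitation from first to second and second to third, inhibition from third back to first) can be simulated in discrete time; tanh stands in for the governor that keeps activity from growing exponentially, and all weights are illustrative:

```python
import math

# Sketch of the three-neuron loop described above: neuron a excites b,
# b excites c, and c inhibits a. tanh acts as the "governor" that
# bounds activity instead of letting it stop or increase exponentially.

def run(steps=50):
    a, b, c = 0.5, 0.0, 0.0
    history = []
    for _ in range(steps):
        a, b, c = (math.tanh(0.2 - 1.5 * c),   # constant drive, inhibited by c
                   math.tanh(1.2 * a),          # excited by a
                   math.tanh(1.2 * b))          # excited by b
        history.append((a, b, c))
    return history
```

Every activity stays strictly between -1 and 1 at every step, so the loop neither halts nor diverges while inputs continually affect outputs.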

1-Consciousness-Speculations-Sensation-Computer Science-Coding

analog coding and sensations

Analog coding is continuous and tracks physical processes directly. Digital coding prevents degradation and other errors and makes categories. Brain uses digital processing for axon impulses and neurotransmitter packets. Perhaps, mind uses analog processing to make continuous sensations {analog coding and sensations}.

code types and sensations

Current computers can code numbers and so code sense intensities but cannot code types and meanings and so cannot represent sense qualities {code types and sensations}.

coded messages and meaning

Perhaps, brain is like ink, and mind is like message {coded messages and meaning}.

1-Consciousness-Speculations-Sensation-Computer Science-Files

data structures and sensations

Computers and brains use data structures, such as files, tables, arrays, and displays. Files have elements, such as bytes, numbers, strings, dates, times, and booleans, separated by tabs, commas, and/or spaces. Files can have rows, with fixed or variable column numbers. Perhaps, mind uses three-dimensional displays {data structures and sensations}.

file access and sensations

Computers open and close files to read or write data. Perhaps, brain opens and closes files {file access and sensations}. Opening files is like awakening and becoming conscious, by accessing memory. Closing files is like sleeping, by blocking memory.

structure files and sensations

To describe object collections, structure files list object types and relative coordinates and distances. For example, to describe molecules, chemical structure files list atoms and relative coordinates and distances [Dalby et al., 1992]. Brains can use structure files to describe visual displays {structure files and sensations}.

1-Consciousness-Speculations-Sensation-Computer Science-Language

programming languages and sensations

Computer-processor programs use binary code. Assembly languages express hardware operations in simple grammar. Human-readable programming languages have sentence-like statements. Programming languages can emphasize procedures that manipulate objects or objects that have procedures. BASIC and C are procedure oriented. Java and C++ are object oriented. High-level code translates unambiguously into low-level code. Brain uses low-level code and/or procedure-oriented programming languages. Perhaps, mind uses object-oriented programming {programming languages and sensations} to represent geometric objects and perform geometric operations.

1-Consciousness-Speculations-Sensation-Computer Science-Vision

ray tracing and sensations

Ray tracing {ray tracing and sensations} tests light-source and surface-reflection light rays, to see where they land on object-depth-indexed two-dimensional-surface displays. Ray tracing indexes object locations, directions, and distances, as well as shapes, overlaps, shadows, light sources (emissions), absorptions, reflections, refractions, opaqueness, translucency, transparency, and color variations [Glassner, 1989].

vector graphics and sensations

Vector graphics {vector graphics and sensations} [Foley et al., 1994] represents scenes using geometric-figure descriptors, such as "circle", which have parameters, such as "color", "radius", and "center", which have values, such as "black" or "2". Descriptors have positions relative to other descriptors or to the display.

Vector graphics represents images using mathematical formulas for volumes, surfaces, and curves (including boundaries) that have parameters, coordinates, orientations, colors, opacities, shading, and surface textures. For example, circle information includes radius, center point, line style, line color, fill style, and fill color. Vector graphics includes translation, rotation, reflection, inversion, scaling, stretching, and skewing. Vector graphics uses logical and set operations and so can extrapolate and interpolate, including filling in.
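A vector-graphics descriptor with parameters and transforms, as described above, can be sketched as plain data plus functions; all names and values are illustrative:

```python
# Sketch of a vector-graphics descriptor: a figure type with
# parameters, plus translation and scaling transforms.

circle = {"type": "circle", "center": (2.0, 3.0), "radius": 2.0,
          "line_color": "black", "fill_color": "white"}

def translate(figure, dx, dy):
    """Return a copy of the figure moved by (dx, dy)."""
    x, y = figure["center"]
    moved = dict(figure)
    moved["center"] = (x + dx, y + dy)
    return moved

def scale(figure, factor):
    """Return a copy of the figure with its radius scaled."""
    scaled = dict(figure)
    scaled["radius"] = figure["radius"] * factor
    return scaled
```

Because the figure is a formula plus parameters rather than pixels, transforms change parameters and never lose resolution.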

1-Consciousness-Speculations-Sensation-Mathematics

complex number analogy

Complex-number real-number part indicates physical measurement. Imaginary-number part, and interactions between real and imaginary numbers, account for factors affecting solutions or processes. Complex-number multiplication is commutative: (a + b*i) * (c + d*i) = (c + d*i) * (a + b*i). Other complex-number operations, such as subtraction and division, are non-commutative, so complex-number operations can represent both commuting and non-commuting physical interactions. Complex-number functions and series can represent physical states or processes, because they can model translations, rotations, reflections, inversions, and waves, including interference, superposition, resonance, and entanglement. Complex-number operations make complex numbers, not new number types, and so can model physical situations, because physical interactions make only existing physical properties, not new ones.

Perhaps, brain is like real numbers, and mind is like imaginary numbers {complex number analogy}. Like real and imaginary numbers, brain and mind are separate and independent but can interact.
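The commutativity and rotation claims above are easy to check with built-in complex numbers:

```python
# Complex multiplication commutes and performs rotation; subtraction,
# by contrast, is non-commutative.

z1 = 3 + 4j
z2 = 1 - 2j

commutes = (z1 * z2 == z2 * z1)   # multiplication is commutative
rotated = (1 + 0j) * 1j           # multiplying by i rotates 90 degrees
differs = (z1 - z2 != z2 - z1)    # subtraction is non-commutative
```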

duals and sensations

In networks, links and nodes are duals. In two-dimensional projective geometry, points and lines are duals. In three-dimensional projective geometry, planes and points are duals. In three-dimensional space, lines bound surfaces, and surfaces bound lines, so mathematical theorems about lines have corresponding mathematical theorems about surfaces. On n-manifolds, p-forms and (n-p)-forms are duals, so 1-form (covariant tensor, linear function of coordinates, or manifold gradient) and vector field (contravariant tensor, function, or manifold) are duals, and they interact to make scalar products. For vectors, tangent vectors have covector duals. Perhaps, mind and brain are duals, and phenomena and manifolds are duals {duals and sensations}.

principal components and perception

From intensity and intensity-change comparisons, brain can build variables that are optimal for describing sensations {principal components, perception}. Different senses have different principal components. Within a sense, different qualities have the same principal components but with different values. Principal components are the same for everybody.

spherical harmonics and sensations

Indefinite spherical harmonics build to make indefinite Fourier three-dimensional waves that model/simulate sensations {spherical harmonics and sensations}.

1-Consciousness-Speculations-Sensation-Mathematics-Color

algebra and color

Algebras have elements, such as integers. Algebras have operations on elements, such as addition. Integer additions result in integers. Integer addition commutes: 13 + 27 = 40 = 27 + 13. Integer addition is associative: (13 + 27) + 5 = 45 = 13 + (27 + 5). Integer identity element adds to integers to make the same integer: 13 + 0 = 13, and 0 + 0 = 0. Integer inverse elements add to integers to make zero: 13 + -13 = 0, and 0 + 0 = 0. Finite or infinite tables can show operation results for all element pairs.

If elements are colors and operation is additive color mixing, adding two colors makes color, by wavelength-space vector addition, following Grassmann's laws {algebra and color}. Order of adding two colors does not matter, so color addition is commutative. Sequence of adding three colors does not matter, so color addition is associative. Colors have complementary additive-inverse colors, and adding both colors makes white, so color addition has inverses. Adding black, white, or gray to color does not change color hue but does change saturation, so black, white, or gray are like identity elements. Unlike integer addition, adding color to itself makes same color.

distributive property

Identity, inverse, commutation, and association work whether colors come from light sources or reflect from pigments. Colors from light sources and colors from pigment reflections can mix. If reflected color mixes with mixture of two source colors, or if reflected color mixes with each of two source colors and then mixtures combine, same color results, like the distributive property.
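The algebraic properties of additive color mixing can be checked on (R, G, B) triples; clipping at 1 stands in for saturation at white, so this is a sketch of Grassmann-style mixing, not a colorimetric model:

```python
# Sketch of additive color mixing as vector addition, checking
# commutativity, associativity, the complementary-inverse-to-white
# property, and idempotence (adding a color to itself).

def mix(c1, c2):
    """Componentwise addition, clipped at white."""
    return tuple(min(a + b, 1.0) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)
CYAN = (0, 1, 1)      # complementary to red
WHITE = (1, 1, 1)
```

Mixing order and grouping do not matter, red plus its complement cyan makes white, and red plus red stays red, matching the properties listed above.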

harmonic ratios and color

Tone and color frequencies and wavelengths have harmonic ratios {harmonic ratios, color}.

harmonics

Harmonic ratios have small integers in numerator and denominator. In increasing order of denominator, harmonic ratios are 1:1, 2:1, 3:2, 4:3, 5:3, 5:4, and so on.

color wavelengths

The purest red color is at light wavelength 683 nm, with orange at 608 nm, yellow at 583 nm, green at 543 nm, cyan at 500 nm, blue at 463 nm, and violet at 408 nm. Magenta can be at 380 nm or 760 nm.

color wavelength ratios

Color wavelength ratio for red/yellow, 683/583 = 1.17, and green/blue, 543/463 = 1.17, is 7/6 = 1.17 or 6/5 = 1.20. Color wavelength ratio for red/green, 683/543 = 1.26, and yellow/blue, 583/463 = 1.26, is 5/4 = 1.25. Color wavelength ratio for red/blue, 683/463 = 1.48, is 3/2 = 1.5. Color wavelength ratio for yellow/green, 583/543 = 1.07, is 13/12 = 1.083. Color wavelength ratio for red/violet, 683/408 = 1.67, and magenta/indigo, 725/435 = 1.67, is 5/3 = 1.67. See Figure 1.

color frequency ratios

Color frequency ratio for yellow/red, 518/436 = 1.19, and blue/green, 652/556 = 1.17, is 7/6 or 6/5. Color frequency ratio for green/red, 556/436 = 1.28, and blue/yellow, 652/518 = 1.26, is 5/4. Color frequency ratio for blue/red, 652/436 = 1.50, is 3/2. Color frequency ratio for green/yellow, 556/518 = 1.07, is 13/12. Color frequency ratio for violet/red, 740/436 = 1.70, and indigo/magenta, 694/420 = 1.65, is 5/3.
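The ratio-to-harmonic matching in the tables above can be automated with the standard-library fractions module; wavelengths are the values given in the text, and nearest_harmonic is an illustrative helper:

```python
from fractions import Fraction

# Sketch: find the small-integer harmonic ratio nearest to a color
# wavelength ratio, as in the tables above.

WAVELENGTHS = {"red": 683, "yellow": 583, "green": 543, "blue": 463}

def nearest_harmonic(ratio, max_denominator=12):
    """Closest fraction with a small denominator to the given ratio."""
    return Fraction(ratio).limit_denominator(max_denominator)

red_yellow = nearest_harmonic(WAVELENGTHS["red"] / WAVELENGTHS["yellow"])
```

For the listed wavelengths this recovers red/yellow and green/blue near 7/6 and red/green near 5/4, matching the text.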

additive complementary colors

Additive complementary color pairs have same wavelength ratio, 4/3 = 1.33. Red/cyan is 683/500 = 1.37 to 650/500 = 1.30. Yellow/blue is 583/463 = 1.26 to 583/435 = 1.34. Chartreuse/indigo is 560/435 = 1.29 to 560/408 = 1.37. Magenta/green is 722/543 = 1.33.

Additive complementary-color triples have three color-pairs, whose average wavelength ratio is also 4/3. For three additive complementary colors, ratios are red/blue, 683/463 = 1.48, red/green, 683/543 = 1.26, and green/blue, 543/463 = 1.17. Arithmetic average is (1.5 + 1.25 + 1.2)/3 = 1.32. Geometric average is (1.5 * 1.25 * 1.2)^0.333 = 1.31. For three subtractive complementary colors, ratios are magenta/cyan, 722/500 = 1.45, magenta/yellow, 722/583 = 1.24, and yellow/cyan, 583/500 = 1.17. Average wavelength ratio is 4/3.

Three complementary colors have same relative values: red = 1.5, green = 1.2, and blue = 1, or magenta = 1.5, yellow = 1.2, and cyan = 1.

subtractive complementary colors

Because mixing darkens and blues colors, subtractive complementary color pairs have increasing wavelength ratios. Red/green is 683/543 = 1.26. Orange/blue is 608/463 = 1.31. Yellow/indigo is 583/435 = 1.34. Chartreuse/violet is 560/408 = 1.37.

color wavelength ratios starting at red

Starting with red at 1/1 = 683/683, orange is 8/7 = ~683/608, yellow is 7/6 = ~683/583, green is 5/4 = ~683/543, cyan is 4/3 = ~683/500, blue is 3/2 = ~683/463, violet is 5/3 = ~683/408, and magenta is 7/4 = ~683/380.

color wavelength ratios starting at green

Magenta is 2/3 = ~380/543. Violet is 3/4 = ~408/543. Blue is 5/6 = ~463/543. Cyan is 8/9 = ~500/543. Green is 1/1 = 543/543. Yellow is 17/16 = ~583/543. Orange is 9/8 = ~608/543. Red is 5/4 = ~683/543. Magenta is 4/3 = 720/543.

color wavelength ratios starting and ending at magenta

On color circles, complementary colors are opposites. Complementary-color pairs have same wavelength ratio, so cyan/red = blue/yellow = magenta/green. Colors separated by same angle have same wavelength ratio, so yellow/red = green/yellow = cyan/green = blue/cyan = magenta/blue = red/magenta. Example color circle has red = 32, yellow = 16, green = 8, cyan = 4, blue = 2, and magenta = 1 and 64. Put into octave as exponentials, red = 2^0.83, yellow = 2^0.67, green = 2^0.5, cyan = 2^0.33, blue = 2^0.17, and magenta = 2^0 and 2^1. Put into octave, magenta = 2/1, red = 9/5, yellow = 8/5, green = 7/5, cyan = 5/4, blue = 9/8, and magenta = 1/1. Complementary colors have ratio 2^0.5 = 1.414 = ~7/5. Neighboring colors have ratio 1.125 = 9/8. Example wavelengths with these ratios are magenta = 750 nm, red = 668 nm, yellow = 595 nm, green = 531 nm, cyan = 473 nm, blue = 421 nm, and magenta = 375 nm, close to actual color wavelengths.
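The octave arrangement above amounts to dividing 750 nm by successive sixth-roots of two; a minimal sketch reproducing wavelengths close to those listed:

```python
# Sketch of the octave color arrangement above: colors equally spaced
# as powers of two over one octave, with magenta at 750 nm and 375 nm.

def octave_wavelength(step, top_nm=750.0, steps_per_octave=6):
    """step 0 = magenta (750 nm); each step divides by 2**(1/6)."""
    return top_nm / 2 ** (step / steps_per_octave)

wavelengths = [round(octave_wavelength(k)) for k in range(7)]
# magenta, red, yellow, green, cyan, blue, magenta
```

The computed series lands within a nanometer or two of the listed values (750, 668, 595, 531, 473, 421, 375), and each neighboring pair has ratio 2^(1/6).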

color harmonic ratios

Color frequency categories are at harmonic ratios, with relative frequencies 48 for red, 60 for green, and 72 for blue. 60/48 = 1.25 = 5/4. 72/48 = 1.5 = 3/2. 72/60 = 1.2 = 6/5. See Figure 2. Color-pair wavelength ratios have harmonic relations. Red/magenta = 7/4. Red/violet and magenta/indigo = 5/3. Red/blue = 3/2. Complementary colors red/cyan, yellow/blue, chartreuse/indigo, and magenta/green = 4/3. Red/green and yellow/blue = 5/4. Red/yellow and green/blue = 6/5 or 7/6. Red/orange = 8/7. Green/cyan = 9/8. Yellow/green = 13/12. See Figure 3.

Red, green, and blue add to make white. Magenta, cyan, and yellow mix subtractively to make black. For red, green, and blue, and for magenta, cyan, and yellow, average of the three color-pair wavelength ratios is 4/3.

Looking at only primary colors red, green, and blue, color-pair wavelength ratios are red/blue 3/2, red/green 6/5, and green/blue 5/4. Red:green:blue relations have 6:5:4 ratios.

Looking at wavelength differences rather than wavelength ratios, magenta, red, orange, yellow, green, cyan, blue, and violet have approximately equal wavelength differences between adjacent colors. See Figure 2. Setting wavelength difference equal to one, color wavelengths form series 8, 7, 6, 5, 4, 3, 2, and 1. See Figure 3.

Assuming colors are like tones, colors can fit into one octave. Primary colors red, green, and blue, and complementary colors cyan, magenta, and yellow, respectively, are equally spaced in octave from 2^0 to 2^1. Magenta, red, yellow, green, cyan, blue, and magenta form series 6, 5, 4, 3, 2, 1, and 0. Magenta = 2^1, red = 2^0.83, yellow = 2^0.67, green = 2^0.5, cyan = 2^0.33, blue = 2^0.17, and magenta = 2^0. Adjacent colors have ratio 2^0.17 = 1.125 = 9/8. All complementary colors have the same ratio, 2^0.5. All complementary-color triples, such as red/green/blue, average 2^0.5. White, gray, and black have average color-pair wavelength ratio 2^0.5. In this arrangement, color-pair ratios are red/magenta ~ 9/5, yellow/magenta ~ 8/5, green/magenta ~ 7/5, cyan/magenta ~ 5/4, and blue/magenta ~ 9/8. See Figure 3. In this arrangement, whites, grays, and blacks are farthest from being octaves and so have dissonance. Other colors have smaller integer ratios and so more consonance. Color categories are at harmonic ratios.

multiple harmonics

One pair has two or three categories, like tone intervals or red/green or red/green/blue. Two pairs make six or seven categories, like octave whole tones or main spectrum colors. Three pairs make 12 categories, like octave half tones or major spectrum colors. Four pairs make 24 categories, like octave quarter tones or major and minor spectrum colors.

summary

Using physical-color wavelengths, wavelength ratios are red/magenta = 7/4, red/violet = magenta/indigo = 5/3, red/blue = 3/2, red/cyan = yellow/blue = chartreuse/indigo = magenta/green = 4/3, red/green = yellow/blue = 5/4, red/yellow = green/blue = 6/5 or 7/6, red/orange = 8/7, green/cyan = 9/8, and yellow/green = 13/12.

Additive complementary-color pairs, such as red/cyan, yellow/blue, chartreuse/indigo, and magenta/green, have same 4/3 wavelength ratio.

For red, green, and blue additive complementary colors, average of the three wavelength ratios, red/blue, red/green, and green/blue, is 4/3. For magenta, cyan, and yellow subtractive complementary colors, average of the three wavelength ratios, magenta/cyan, magenta/yellow, and yellow/cyan, is 4/3.

These intervals are harmonic musical tones in an octave: C, E, and G in the key of C. Blue and red make a perfect fifth interval. Blue and green make a minor third interval. Green and red make a major third interval.
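The triad intervals can be verified numerically; middle C at 264 Hz is an assumed reference pitch:

```python
from fractions import Fraction

# C, E, and G at the 4:5:6 frequency ratios cited above; 264 Hz for
# middle C is an assumed reference value.
base = 264  # Hz
c, e, g = base, base * 5 // 4, base * 6 // 4  # 264, 330, 396 Hz

assert Fraction(g, c) == Fraction(3, 2)  # perfect fifth (blue/red)
assert Fraction(g, e) == Fraction(6, 5)  # minor third (blue/green)
assert Fraction(e, c) == Fraction(5, 4)  # major third (green/red)
print(c, e, g)  # 264 330 396
```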

mathematical group and color

Mathematical groups have elements, such as triangles. Operations map group elements to the same or another element. For example, if element is equilateral triangle, rotations around center by 120 degrees result in same element. Finite or infinite tables can show operation results for all elements.
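The triangle example can be sketched as a small operation table: the rotations 0, 120, and 240 degrees form a group under composition (the representation as degrees is illustrative):

```python
# The rotations of an equilateral triangle form a group under composition:
# every composition lands back in the set (closure), 0 degrees is the
# identity, and every rotation has an inverse.
rotations = [0, 120, 240]
compose = lambda a, b: (a + b) % 360

# Finite table of operation results for all element pairs.
table = {(a, b): compose(a, b) for a in rotations for b in rotations}

assert all(result in rotations for result in table.values())               # closure
assert all(compose(a, 0) == a for a in rotations)                          # identity
assert all(any(compose(a, b) == 0 for b in rotations) for a in rotations)  # inverses
```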

If elements are colors and operation is additive color mixing, adding two colors makes color, by wavelength-space vector addition, following Grassmann's laws {mathematical group and color}. Adding black, white, or gray to color does not change color hue but changes color saturation, so color addition is not a single operation.

vectors and colors

Because they cannot be negative but can complement each other, color qualities are vectors {vectors and colors}. Color vectors have three components: hue, saturation, and brightness, or red, green, and blue.
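A minimal sketch of additive mixing as vector addition, assuming an illustrative (R, G, B) component representation with components clamped so they cannot be negative or exceed full intensity:

```python
# Additive color mixing as componentwise vector addition in (R, G, B)
# space, clamped to [0, 1] since color components cannot be negative
# and saturate at full intensity.
def mix(c1, c2):
    return tuple(min(a + b, 1.0) for a, b in zip(c1, c2))

red   = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue  = (0.0, 0.0, 1.0)

assert mix(red, green) == (1.0, 1.0, 0.0)             # yellow
assert mix(mix(red, green), blue) == (1.0, 1.0, 1.0)  # white
```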

1-Consciousness-Speculations-Sensation-Mathematics-Information

information compression and sensations

Perhaps, sense qualities are compressed intensity-frequency spectra {information compression and sensations}.

negative information

Sense information includes negative information {negative information}, such as not blue.

1-Consciousness-Speculations-Sensation-Physics

acceleration pattern and sensations

Perhaps, sense qualities depend on acceleration patterns {acceleration pattern and sensations} and are about flow vibrations, jolts, eddies, vortexes, streamlining, and turbulence.

motions

Translations, vibrations, rotations, reflections, inversions, and transitions can have accelerations. Vibrations, rotations, and transitions have acceleration changes.

accelerations

Forces include tensions, compressions, and torsions. Photon and molecule interactions transfer energy and cause forces, accelerations, and acceleration changes. Materials resist tension, compression, and torsion and reduce initial acceleration to zero acceleration. Accelerations have location, duration, maximum, minimum, and change rate.

jolt types

Acceleration changes (jolts) are vectors, with magnitude and direction, and have different types.

Acceleration can be zero (with no net force), so jolt is zero. Acceleration can be constant positive (due to constant positive force), so jolt is zero. Acceleration can be constant negative (due to constant negative force), so jolt is zero.

Acceleration can increase at constant rate (due to constant positive force change), so jolt is constant positive. Acceleration can decrease at constant rate (due to constant negative force change), so jolt is constant negative.

Acceleration can increase at increasing rate (due to increasing positive force change), so jolt is increasing positive (until resisting force makes it constant positive). Acceleration can decrease at increasing rate (due to increasing negative force change), so jolt is increasing negative (until resisting force makes it constant negative). Acceleration can increase at decreasing rate (due to decreasing positive force change), so jolt is decreasing positive (until it becomes constant). Acceleration can decrease at decreasing rate (due to decreasing negative force change), so jolt is decreasing negative (until it becomes constant).

Acceleration can oscillate between increase and decrease, so jolt oscillates.

collisions

Before inelastic collisions, colliding-object deceleration is zero, and velocity is maximum: kinetic energy = 0.5 * mass * velocity^2. In inelastic collisions, colliding-object deceleration quickly reaches maximum, and velocity starts to decrease: Force = mass * acceleration = -k * (S - x), which equals -k * S at x = 0, where k is material's resistance factor, x is distance traveled in material, and S is distance at which object stops. After inelastic-collision process ends, colliding-object deceleration and velocity decrease to zero: F = -k * (S - x) = 0 at x = S.
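The collision model can be sketched numerically; the mass, speed, and resistance factor are assumed illustrative values:

```python
import math

# Resisting force F = -k*(S - x) falls from -k*S at impact (x = 0) to zero
# at the stopping distance x = S. Equating initial kinetic energy to the
# work absorbed, 0.5*m*v**2 = 0.5*k*S**2, gives S = v*sqrt(m/k).
m, v, k = 2.0, 3.0, 8.0   # kg, m/s, N/m (assumed values)
S = v * math.sqrt(m / k)  # stopping distance, 1.5 m here

def force(x):
    return -k * (S - x)   # decelerating force inside the material

assert abs(force(0.0)) == k * S  # deceleration maximum at impact
assert force(S) == 0.0           # zero when the object stops
```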

vibrations

At vibration equilibrium point, molecule acceleration is zero, and velocity is maximum. Then, velocity decreases, and deceleration increases. At maximum displacement, velocity is zero, and deceleration is maximum. Then, velocity increases, in opposite direction, and acceleration decreases. At equilibrium point, acceleration is zero, and velocity is maximum, in opposite direction. Jolts depend on wave frequency and amplitude.
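The vibration cycle can be sketched with simple harmonic motion; the amplitude and angular frequency are assumed values:

```python
import math

# Simple harmonic motion x = A*sin(w*t): acceleration a = -w**2 * x and
# jolt (jerk) j = -w**2 * v, so peak jolt A*w**3 grows with both amplitude
# and frequency, as the text states.
A, w = 2.0, 3.0  # amplitude, angular frequency (assumed values)

def state(t):
    x = A * math.sin(w * t)
    v = A * w * math.cos(w * t)
    return x, v, -w**2 * x, -w**2 * v  # position, velocity, acceleration, jolt

x, v, a, j = state(0.0)                        # equilibrium point
assert (x, a) == (0.0, 0.0) and v == A * w     # velocity maximum, no acceleration

x, v, a, j = state(math.pi / (2 * w))          # maximum displacement
assert abs(v) < 1e-12 and abs(a) == A * w**2   # velocity zero, deceleration maximum

peak_jolt = A * w**3  # grows with amplitude and frequency
```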

receptor acceleration patterns

Energy transfer causes receptor-molecule change. Molecule atoms accelerate, change energy to potential energy as they decelerate, and then stop moving. Then, molecule transfers energy to signal molecule, and receptor molecule returns to resting state. Signal molecules go to other cell receptors, making a cell signaling system. Neurotransmitters, neurohormones, and neuroregulators go to receptors on other cells and sustain the code. Receptors, signal molecules, and later molecules have vibrations and so acceleration changes.

dimension and sensations

String theory has extra spatial dimensions. Universe may have hidden spatial or temporal dimensions. Perhaps, mind is in hidden dimensions, and experience is orthogonal to normal space-time {dimension and sensations}. However, physical activity must affect mind, so mind does not involve extra spatial or temporal dimensions.

electromagnetism and sensations

Electromagnetism can relate to sense qualities {electromagnetism and sensations}.

electromagnetic induction

Changing electric fields induce magnetism, and changing magnetic fields induce electric force. Perhaps, brain induces mind. For example, brain particles cause mind waves.

fire

Fire is electromagnetic radiation from excited electrons in oxidation reactions. Perhaps, brain has reactions whose secondary effects make mind. Burning is like unconsciousness, and fire is like consciousness.

magnetism

Though net electric charge is zero, electric-charge relativistic motions make an observable net force perpendicular to motion, creating magnetism. Perhaps, mind depends on active "charges" whose relative motions and interactions create net effects, but whose static states are not observable.

hidden variable and sensations

Physical and mental descriptions use variables. Variables can be measurable and have units. Variables can be ratios with no units. Variables can be not measurable and have no values or units. Perhaps, brain is about measurable variables, but mind involves hidden immeasurable variables {hidden variable and sensations}. Properties that combine other properties can seem ineffable. For example, words can sound the same when used as nouns or verbs, but actually have subtle noun-marker or verb-marker sound features.

quantum mechanics and sensations

Matter and energy properties are discrete. For example, energy has quanta. Matter and energy are both particle and wave. Waves allow probabilistic physical events and transitions without intermediate states. Particle waves are infinite and allow action at distance and non-local effects. Quantum-mechanical mathematical waves simultaneously represent multiple points and energies, and string-theory moving strings simultaneously represent multiple points. Perhaps, sense qualities are brain-activity quantum-mechanical effects {quantum mechanics and sensations}.

mathematical waves

Perhaps, mind and consciousness involve mathematical waves, similar to quantum-mechanical waves. Infinite waves have no definite position and fill space, accounting for sensory field. Waves can have wave packets, accounting for sensations.

complementarity

Quantum-mechanical waves and particles describe event positions and energies (complementarity). Forces are particle exchanges, and energies depend on wave superpositions. Perhaps, brain and mind have complementarity. Brain uses particle motions, and mind uses abstract waves.

electronic transition

Electrons orbiting atomic nuclei move to other orbits with no intermediate stages. Quantum-mechanical waves change frequency with no intermediate frequencies. Perhaps, mind is like quantum-mechanical waves or is intermediate to physical interactions.

virtual particle

Quantum-mechanical particle interactions and wave energy transformations can create particle pairs that exist for less than one quantum time unit. For example, spontaneous energy fluctuations create virtual particle pairs in space vacuum. Interaction cannot create single particles, because one particle cannot conserve momentum. Instruments cannot observe virtual particles, because they recombine rapidly to return vacuum energy to more-probable state. Though they have short existence, virtual particles can interact with real particles. Perhaps, mind is like virtual particles, which can affect brain but have no direct measure.

orbitals

Electron orbitals have one resonating wave, with frequency, amplitude, inertia, and moment. Perhaps, sense qualities are resonating wave packets in three-dimensional orbitals. Orbital amplitude represents intensity. However, orbitals cannot model colors, because colors also have saturation.

spins

Particles have spin, with frequency, amplitude, inertia, and moment. Perhaps, sense qualities are particles with spins. However, spins cannot model colors, because primary-color spins cannot interact or sum to make secondary-color spins.

relativity and sensations

General relativity shows that masses and energies change space shape, and changed space alters particle motions through space. Perhaps, brain is like masses and energies, and mind is like space {relativity and sensations}. Brain masses affect mind space, and mind space affects brain masses.

right-left symmetry and sensations

Universe has right and left forms, and most physical laws have parity. Perhaps, universe has another right-left-like asymmetry that causes reality to have two sides, physical and mental {right-left symmetry and sensations}. Mind can look behind reality. For example, surfaces have two sides, and back can affect front and vice versa. Mental reality is entirely physical but is complementary to physical reality.

subphysical processes

Particle and object collisions, gravitation, and electromagnetism are relatively strong (primary) forces. Perhaps, mental forces and energies are very weak (secondary) forces and energies {subphysical processes}.

superphysical processes

Superphysical processes transcend physical forces by extending them {superphysical processes}. Perhaps, mental forces and energies are superphysical.

1-Consciousness-Speculations-Sensation-Physics-Energy

energy and sensations

Perhaps, sense qualities are energies, and their intensities are energy densities {energy and sensations}. Perhaps, perceptual surfaces have types of kinetic and/or potential energy.

Physical forces have one dimension, because they are interactions between particles. Vectors represent forces. Physical energies have no dimensions, because they are integrals of forces over distances. Scalars represent energies. Physical energies can flow, so intensities have one dimension. Vectors represent intensities. Perhaps, sensations have more than one dimension, because they combine properties.

heat and temperature and sensations

Heat, an extensive quantity, makes temperature, an intensive quantity. Perhaps, brain energy makes mind intensity {heat and temperature and sensations}.

potential energy and sensations

Potential energies are scalars and have type, amount, radial distance, azimuth, and elevation. Sensations are not vectors, because sense qualities do not have direction or flow. Like potential energies, sensations are in fields. Sensations have azimuth, elevation, and radial distance. Perhaps, sensations are like non-physical potential energies {potential energy and sensations}.

1-Consciousness-Speculations-Sensation-Physics-Interaction

interaction and sensations

Independent things add. Same objects and properties can add. Summing or subtracting two same-type quantities results in values with same unit. Integration involves summation. Summations make extensive quantities.

Interacting objects and properties multiply. Same or different objects and properties can multiply. Multiplying or dividing two quantities results in values with different unit. Differentiation involves division. Divisions make intensive quantities.

Two masses or two charges interact to make gravitational or electric force. Multiplying same units can make intensive quantities: (4 kg) * (6 kg) / (2 meters)^2 = 6 kg^2/m^2, which multiplied by the gravitational constant gives newtons of force. Multiplying 4 newtons of force and 5 meters of distance makes 20 newton-meters, 20 joules of energy. Multiplying different units can make extensive quantities. Multiplying 4 coulombs and 5 kilograms makes 20 coulomb-kilograms. However, combining charge and mass has no physical meaning, because charge and mass do not interact. Multiplying can make extensive or intensive quantities.
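A minimal sketch of the gravitational example with the gravitational constant G made explicit; without G, the bare product has units of kg^2/m^2 rather than newtons:

```python
# Newton's law of gravitation: F = G * m1 * m2 / r**2. The bare product
# (4 kg)(6 kg)/(2 m)**2 = 6 has units kg**2/m**2; multiplying by G
# converts it to newtons.
G = 6.674e-11  # N * m**2 / kg**2

def gravity(m1, m2, r):
    return G * m1 * m2 / r**2

bare = 4 * 6 / 2**2       # 6.0, in kg**2/m**2
force = gravity(4, 6, 2)  # about 4.0e-10 N
```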

Only continuous quantities can interact. Discrete quantities cannot affect each other. For example, 4 oranges times 5 bananas results in 20 banana-oranges, which do not exist. Multiplying 4 oranges and 5 oranges results in 20 orange-oranges, which do not exist.

New things arise from physical or mathematical interactions. Perhaps, sense qualities arise from physical or mathematical interaction mechanisms {interaction and sensations}. Neurons use no units.

joining and sensations

Joining existing things can produce something new {joining and sensations}. Joining alters or destroys existing objects.

new force or energy

Physics is still discovering new physical forces and energies, with unknown properties. Perhaps, mind has new physical forces, energies, and fields {new force or energy}. However, mind does not measurably affect physical world.

splitting and sensations

Splitting existing things can produce something new {splitting and sensations}. Making something from physical void or vacuum requires splitting. Void can split into opposites: point and anti-point, pole and anti-pole, left spin and right spin, and ON mark and OFF mark. Splitting can destroy existing properties.

1-Consciousness-Speculations-Sensation-Physics-Phase

crystals and sensations

Perhaps, colors are like crystals, with different symmetries and harmonics {crystals and sensations}.

particles and sensations

Perhaps, sense qualities result from kinetics and dynamics of many abstract particles, which make phases {particles and sensations}.

phases and sensations

Solid, liquid, and gas phases depend on material, temperature, and pressure. Within a sense type, sense qualities are like phases {phases and sensations}. Perhaps, red, green, and blue are different phases. Complementary colors mix phases, like at double points. Color mixtures that result in white, gray, and black are at triple points.

1-Consciousness-Speculations-Sensation-Physics-Wave

phosphorescence and sensations

Brain is like phosphors, which phosphoresce for seconds after stimulation {phosphorescence and sensations}. Long times allow neuron activities to integrate.

waves and sensations

Oscillations can be longitudinal along one dimension, such as chemical-bond-length oscillations. Oscillations can be transverse along two dimensions, such as chemical-bond-angle oscillations. Violin-string points oscillate transversely across resting-string line. Plane waves, such as in vibrating guitar strings, can rotate around travel-direction axis, like helices, in three dimensions. Electromagnetic-wave points have transverse electric-field oscillation and transverse perpendicular magnetic-field oscillation. Electromagnetic waves also travel, so they have three dimensions and can rotate around travel direction. Electrons in electronic orbitals can oscillate in three dimensions.

Waves spread over space. Perhaps, mind is like waves {waves and sensations}. However, waves cannot model colors, because primary-color waves cannot interact or sum to make secondary-color waves.

wave modulation

Television and radio signals have basic frequencies. To carry information about music or scenes, basic-wave amplitude or frequency can vary with signal intensity and frequency. Flow modulation can carry information. Perhaps, mind is modulated brain waves or flows.
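Amplitude modulation, as described above, can be sketched as follows; the carrier and signal frequencies and modulation depth are illustrative:

```python
import math

# Amplitude modulation: a carrier wave's amplitude varies with a
# lower-frequency signal, so the signal's information rides on the carrier.
carrier_hz, signal_hz, depth = 1000.0, 50.0, 0.5

def am(t):
    envelope = 1.0 + depth * math.sin(2 * math.pi * signal_hz * t)
    return envelope * math.sin(2 * math.pi * carrier_hz * t)

# One second sampled at 8 kHz; the envelope bounds the modulated wave.
samples = [am(n / 8000.0) for n in range(8000)]
assert max(samples) <= 1.0 + depth
assert min(samples) >= -(1.0 + depth)
```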

frequency transitions

Waves change frequency in one cycle, with no intermediate stages. Mind transitions between mental states with no intermediate states.

1-Consciousness-Speculations-Sensation-Psychology

attention and first sensation

People can attend to new or contrasting stimuli without full awareness {attention and first sensation} [Berns et al., 1997] [Debner and Jacoby, 1994] [Hardcastle, 2003] [He et al., 1996] [Lamme, 2003] [McCormick, 1997] [Merikle and Joordens, 1997] [Posner, 1994] [Robertson, 2003] [Tsuchiya and Koch, 2007].

blindsight and first sensation

Cortical disease or injury can result in minimal experience {blindsight and first sensation}, such as blindsight, in which people are only aware of object or surface presence, existence, or motion [Azzopardi and Cowey, 1997] [Barbur et al., 1993] [Güzeldere et al., 2000] [Holt, 1999] [Kolb and Braun, 1995] [Sanders et al., 1974].

labeling and sensations

Sensations label perceptions {labeling and sensations}, to provide meaning for perceptions. For example, the color red is a label for a feature, and "red" and "green" name features. Using a symbol, label, index, reference, or name defines a category, feature, or variable type. Applying a label groups objects, events, relations, or ideas. Indexing helps memory and recall.

Label meaning can depend on relation to body parts. Many means more than number of fingers. Large means larger than body. Right means nearer to right arm than left arm. Up means nearer to head than feet. Complex labels, such as elephant or victory, combine simple labels.

marking and first sensation

Perhaps, sense qualities arose as marking {marking and first sensation}. Markers provide reference signs, such as indexes, to which other signs can relate. For example, consciousness can mark figure and not ground. Marking has no units, such as length units. However, marking is only information bits and so is not a new thing.

musical instrument analogy and first sensation

Like music from instruments, brain produces mind {musical instrument analogy and first sensation}.

synthesis and first sensation

Analysis finds differences, parts, and functions. Synthesis finds similarities, wholes, and goals. After neuron-assembly information analysis, brain synthesizes intensity and frequency to make first sensation {synthesis and first sensation}.

1-Consciousness-Speculations-Sensation-Psychology-Sense

sense properties and first sensation

Sense properties relate to first sensation {sense properties and first sensation}. Sensations require duration, location, intensity, and quality.

intensity

Intensity alone cannot make sensation. Something or nothing, on or off, yes or no, true or false, or 0 or 1 has no type. Thresholds make switches, with no units. Intensity is only information bits and so is not a new thing. Intensity at spatial location has no type. Intensity for duration has no type.

intensity type

Intensity type alone, without intensity, spatial, or temporal information, has no amount. Intensity-type in space, without time or intensity, has no amount. Intensity type at space location for duration has no amount. Intensity type for duration has no amount. Intensity and intensity-type, without temporal or spatial information, has amount and type.

time

Before and after, time flow, or cycles in time, without space, intensity, or intensity type, has no type.

position

Space location alone, without intensity, intensity-type, or temporal information, has no type.

space

Perhaps, sense qualities arose as nearness or farness, right or left, or up or down in space. Space location for duration has no type.

surface

Perhaps, first sensations indicate only surface presence, existence, or motion, with no phenomenal quality, intensity, or pattern, purely mathematical, spatial, and geometric.

hearing properties

Tones can be harsh or smooth, be sharp or flat, and have acute or gradual onset and offset {hearing properties} {tone properties}. Tone pairs can have consonance or dissonance and major or minor intervals.

Physically, sound waves have frequencies with intensities. Frequencies have ratios, so sounds have harmonics, such as octaves, fifths, thirds, fourths, sixths, and sevenths. Physiologically, sounds are independent and unmixed (analytic) and have loudness and tone. Hearing perceptual processes [Kaas and Hackett, 2000] compare adjacent and harmonic frequency intensities to find loudness and tone. Relative sound intensity determines loudness. Loudness ranges from painful to whisper. People can distinguish 100 loudness levels. Sound frequency determines tone. Tones have width, deepness, shrillness, and thickness. High frequencies are narrow, shallow, shrill, and thin. Low frequencies are wide, deep, dull, and thick. People can distinguish 10 octave levels and 12 (or 24) harmonic levels, so people can distinguish 120 tones.
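The tone-count arithmetic above can be sketched; the 20 Hz base frequency and equal-tempered spacing are illustrative assumptions:

```python
# 10 octave levels x 12 harmonic levels per octave = 120 distinguishable
# tones. Build the equal-tempered frequency grid from an assumed 20 Hz base.
base_hz, octaves, levels_per_octave = 20.0, 10, 12

tones = [base_hz * 2 ** (i / levels_per_octave)
         for i in range(octaves * levels_per_octave)]

assert len(tones) == 120
assert tones[-1] < base_hz * 2 ** octaves  # last tone just under ten octaves up
```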

pain and pleasure

Pain can be high-amplitude pain, acute pain, or dull pain. Pleasure can be high-amplitude pleasure, acute pleasure, dull pleasure, or orgasm {pain properties} {pleasure properties}.

Physically, pains have inelastic distortions. Physiologically, people feel dull or acute pain. Pain perceptual processes [Chapman and Nakamura, 1999] compare nociceptor inputs. Inelastic distortion determines pain, which can be acute or dull. People can distinguish 10 pain levels.

smell properties

Odors are sweet, putrid, cool, hot, sharp, and flat {smell properties} {odor properties}. Odors can be sweet, like fruit, or putrid, like goat or sweat. Odors can be cool, like menthol, or hot, like heavy perfume. Odors can be sharp and harsh, like vinegar or acid, or flat and smooth, like ether or ester. Aromatic, camphorous, ether, minty, musky, and sweet are similar. Camphor, resin, aromatic, musk, mint, pear, flower, fragrant, pungent, fruit, and sweets are similar. Goaty, nauseating, putrid, and sulphurous are similar. Smoky/burnt and spicy/pungent are similar. Putrid or nauseating, foul or sulfur, vinegar or acrid, smoke, garlic, and goat are similar. Acidic and vinegary are similar. Acidic and fruity are similar. Vegetable smells are similar. Animal smells are similar.

Physically, air-borne chemicals have concentrations, sizes, shapes, and sites and attach to nasal-passage chemical receptors. Physiologically, smells are strong or weak fruity, flowery, sweet, malty, earthy, savory, grassy, acrid, putrid, minty, smoky, pungent, camphorous, musky, urinous, rubbery, tobaccoey, woody, spermous, nutty, fishy, rotten, and medicinal. Smell detects aldehyde smells first, floral smells second, and lingering musky, sweet spicy, and woody smells later. Smells are mild-pungent (flat-sharp) and sweet-putrid. Foul, sulfurous, acidic, acrid, and putrid are pungent and putrid. Pungent, burnt, and spicy are pungent and neutral. Mint, ether, and resin are pungent and sweet. Flowery and fruity are mild and sweet. Musk is mild and neutral. (Mild cannot be putrid.) Smells can be cool, like menthol, or hot, like heavy perfume. Cool and hot mix mild-pungent and sweet-putrid. Smell perceptual processes [Firestein, 2001] [Laurent et al., 2001] compare alcohols (fruity), ethers in concave and trough-shaped sites (ethereal and flowery), esters as chains (sweet), aldehydes (malty), dioxacyclopentanes (earthy, moldy, and potatoey), furanones (savory spice), hexenals and alkene aldehydes (grassy and herby), smallest positively charged carboxylic acids (acrid or vinegary), larger positively charged carboxylic acids as chains (putrid and sweaty and rancid), oxygen-containing-side-group benzene rings in V-shaped sites (minty), polycyclic aromatic hydrocarbons and phenols (burnt and smoky), negatively charged aryls as compact (spicy and pungent), multiple benzene rings in small concave sites (camphorous), multiple-benzene-ring ketones in large concave sites (musky), steroid ketones (urinous), isoprenes (rubber), carotenoids (tobacco), sesquiterpenes (woody), aromatic amines (spermous), alkyl pyrazines (nutty), three-single-bond monoamines (fishy), sulfur compounds (foul and sulfurous and rotten), methyl sulfides (savory), and halogens (pharmaceutical and medicinal). Concentration determines odor intensity, which can range from faint to harsh. People can distinguish 10 intensity levels. Molecule atoms and bonds determine odor shape, size, and site. Sites can be alcohol, ether, ester, aldehyde, ketone, acid, aryl, isoprene, amine, sulfur, and halogen. Shape can be chain, oblong, or ball, with sharp, medium, or smooth shape edges. People can distinguish 1000 odors.

taste properties

Tastes are salty, sweet, sour, and bitter {taste properties} {flavor properties}. Sour acid and salt are similar. Bitter and salt are similar. Sweet and salt are similar. Sour (acid) and bitter (base) are opposites. Sweet (neutral) and sour (acid) are opposites. Salt and sweet are opposites.

Physically, water-borne chemicals have concentrations, sizes, shapes, sites, acidity, and polarity and attach to tongue chemical receptors. Physiologically, tastes are acid, salt, base, sugar, and savory. Taste has sweetness-saltiness and sourness-saltiness-bitterness. Taste perceptual processes [Kadohisa et al., 2005] [Pritchard and Norgren, 2004] [Rolls and Scott, 2003] compare sugar, acid, base, salt, and umami receptor inputs to find intensity, acidity, and polarity. Acid-salt-base and salt-sweet opponent processes share salt. Concentration determines taste intensity. People can distinguish 10 intensity levels. Molecule atoms and bonds and electric charge determine taste acidity, which can be acidic, neutral, or basic. People can distinguish 3 acidity levels. Molecule atoms and bonds and molecule-electron properties determine taste polarity, which can be polar, half polar, or nonpolar. People can distinguish 3 polarity levels. Polar and acid define sour. Polar and neutral define salt. Polar and base define bitter. Nonpolar and neutral define sweet. Between sour and salt defines umami-glutamate. (Nonpolar cannot be acid or base.)

temperature properties

Temperature can be warm or cool {temperature properties}.

Physically, temperatures have random motions. Physiologically, people feel cool or warm. Temperature perceptual processes compare thermoreceptor inputs. Heat flow determines temperature, which ranges from cold to warm to pain. People can distinguish 10 temperature levels.

touch properties

Touches can be acute or smooth, steady or vibrating, and light or heavy {touch properties}.

Physically, touches have transverse motions and pressures (compression, tension, and torsion) that displace surface areas. Physiologically, people feel hardness, elasticity, surface texture, motion, smooth surface texture, rough surface texture, tickle, sharp touch, and tingle. Touch perceptual processes [Bolanowski et al., 1998] [Hollins, 2002] [Johnson, 2002] compare free nerve ending (smooth or rough surface texture), hair cell (motion), Meissner corpuscle (vibration), Merkel cell (light compression and vibration), Pacinian corpuscle (deep compression and vibration), palisade cell (light compression), and Ruffini endorgan (slip, stretch, and vibration) inputs to find compression-tension and vibration. Pressure compression and tension determine hardness, elasticity, surface texture, motion, smooth surface texture, rough surface texture, tickle, sharp touch, and tingle. People can distinguish 10 compression-tension levels. Stimulus intensity and frequency determine vibration. People can distinguish 10 motion levels.

1-Consciousness-Speculations-Sensation-Psychology-Sense-Vision

color parameters

Colors have brightness, lightness, and temperature {color parameters}. Brightness defines the order black-white, blue/darkest_gray-yellow/lightest_gray, and red/dark_gray-green/light_gray. Color lightness (unsaturability, transparency, sparseness) defines the order black-white, blue-yellow, and red-green. Color temperature (texture, noisiness) defines the order blue-red, cyan-yellow, and green-magenta-black-white-gray.

A coolness-warmth axis and a perpendicular darkness-lightness axis define a color wheel. Blue, green, and red are on the circumference, with equal arcs between them. Coolness-warmth runs from blue -1 through green 0 then red +1, where -1 is cool and +1 is warm. Darkness-lightness runs from blue -1 through red 0 then green +1, where -1 is dark and +1 is light, in the opposite direction around the color circle. Dark and cool make blue (-1,-1). Light and neither warmth nor coolness make green (+1,0). Neither dark nor light and warm make red (0,+1).

Brightness is perpendicular to the color wheel, and the three axes define color space.
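
The wheel above can be sketched numerically. In this sketch (an assumption, not stated in the text), a mixture's coordinates are the average of its primaries' (lightness, warmth) coordinates; the averages reproduce the temperature claims made later for magenta, cyan, yellow, and white.

```python
# Color-wheel coordinates written as (lightness, warmth), with the
# values given in the text: blue (-1,-1), green (+1,0), red (0,+1).
# Mixture coordinates averaged from the primaries (illustrative assumption).

PRIMARIES = {
    "blue":  (-1.0, -1.0),
    "green": (+1.0,  0.0),
    "red":   ( 0.0, +1.0),
}

def mix(*names):
    """Average the (lightness, warmth) coordinates of the named primaries."""
    pts = [PRIMARIES[n] for n in names]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

magenta = mix("blue", "red")           # (-0.5, 0.0): neutral temperature
cyan    = mix("blue", "green")         # (0.0, -0.5): somewhat cool
yellow  = mix("red", "green")          # (0.5, 0.5): somewhat warm
white   = mix("blue", "green", "red")  # (0.0, 0.0): no net temperature

print(magenta, cyan, yellow, white)
```

The averaged warmth values match the later temperature section: magenta neutral, cyan somewhat cool, yellow somewhat warm, and white with no net temperature.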

color properties

Color has brightness, hue, and saturation {color properties}. Color properties come from black-white, red-green, and blue-yellow opponent processes.

hue

Hue depends on electromagnetic-wave frequency [Krauskopf et al., 1982]. Fundamental color categories are white, gray, black, blue, green, yellow, orange, brown, red, pink, and purple [Kay and Regier, 2003]. White, gray, and black mix red, green, and blue. Brown is dark orange. Pink mixes red and white. Purple mixes red and blue. See Figure 1.

Alternatively, colors have six categories: white, black, red, yellow, green, and blue. Blue and red have no green or yellow. All other colors mix main colors. Purple mixes red and blue. Cyan mixes green and blue. Chartreuse mixes yellow and green. Orange mixes red and yellow. Pink mixes red and white. Brown mixes orange and black.

brightness and blackness

Color brightness depends on electromagnetic-wave intensity [Krauskopf et al., 1982]. Darkness is the opposite of brightness and is the same as added blackness. Colors can add black, and white can add black. Black adds to colors linearly and equally. See Figure 2. At all brightness levels, white looks lightest, then yellow, then green, then red; blue looks darker, and black looks darkest.

White surroundings blacken color. Complementary-color surroundings enhance color. Black surroundings whiten color.

saturation and whiteness

Color saturation depends on electromagnetic-wave frequency distribution [Krauskopf et al., 1982]. Colors can add white, and black can add white. White adds to colors linearly and equally. Complete saturation means no added white. Lower saturation means more white. No saturation means all white. Less saturation makes colors look lighter. See Figure 3. Black looks most saturated. At all saturation levels, blue looks next most saturated, red looks somewhat saturated, and green looks less saturated. White looks least saturated.

transparency and opacity

Color transparency depends on source or reflector electromagnetic-wave density. Opaqueness means maximum color density, with no background coming through. Transparency means zero color density, with all background coming through. See Figure 4. With a white background, opacity is the same as saturation, and transparency is the same as no saturation, so colors are the same as in Figure 3. With a black background, opacity is the same as lightness, and transparency is the same as darkness, so colors are the same in Figures 2 and 4. Blue looks most opaque, and green looks least opaque.

color strength

For all color brightnesses, when two colors have equal brightness, black suppresses one color more than the other {color strength}. Red is stronger than blue, blue is stronger than green, and green is stronger than red, so color strength forms a cycle rather than following frequency order. See Figure 6. Less blue is needed to balance green and red, so blue is darker than red and green. Less red is needed to balance green, so red is darker than green.

For all color brightnesses, when the stronger color's intensity is 32 levels lower, the weaker color can appear. See Figure 6.

Relative color strengths are the same no matter the computer-display color profile, contrast level, or brightness level.

mixtures

Blue is most dark, opaque, saturated, and cool. Red is less dark, opaque, and saturated and most warm. Green is least dark, opaque, and saturated and neither cool nor warm. See Figure 5, which displays the primary colors, their 1:1 mixtures plus CMYK mixtures, and their 2:1 mixtures.

Magenta mixes blue and red. In its group, it is most dark, opaque, saturated, and neither cool nor warm. Cyan mixes blue and green and so is less dark, opaque, and saturated and most cool. Yellow mixes red and green and so is least dark, opaque, and saturated and most warm. Because they add colors, magenta, cyan, and yellow do not directly compare to blue, green, and red.

Violet mixes blue and some red. In its group, it is most dark, opaque, and saturated and slightly cool. Purple mixes red and some blue and so is less dark, opaque, and saturated and is slightly warm. Turquoise mixes blue and some green and so is less dark, opaque, and saturated and is slightly cool. Orange mixes red and some green and so is less dark, opaque, and saturated and is warm. Spring green mixes green and some blue and so is less dark, opaque, and saturated and is neither warm nor cool. Chartreuse mixes green and some red and so is least dark, opaque, and saturated and is neither warm nor cool. Because they add colors differently, these six colors do not directly compare to magenta, cyan, and yellow or to blue, green, and red.

Mixing blue and yellow, green and magenta, or red and cyan makes white, gray, or black, because blue, green, and red then have ratios 1:1:1. White is lightest, because it adds blue, green, and red. Gray is in the middle, because it mixes blue, green, and red. Black is darkest, because it subtracts blue, green, and red.

color properties

Physically, light waves have frequencies with intensities. Physiologically, colors are dependent and mixed (synthetic) and have brightness, hue, and saturation. Brightness depends on intensity and ranges from dim to bright. People can distinguish 100 intensity levels. Hue depends on average light frequency and ranges across the color spectrum, from red to violet. People can distinguish 100 hues. Saturation depends on light-frequency distribution and ranges from unsaturated to saturated. People can distinguish 100 saturation levels. Brightness, hue, and saturation define colors. People can distinguish one million colors. Vision perceptual processes also find color temperature and color lightness. Relative light intensities determine brightness. People can distinguish 100 brightness levels. Relative salience and activity determine color temperature, which ranges from cool to warm. People can distinguish 100 color temperatures. Relative transparency determines color lightness, which ranges from dark to light. People can distinguish 100 color lightnesses. Color brightness, temperature, and lightness define colors.
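
The counts above multiply out as claimed; a one-line arithmetic check, using the figures from the paragraph:

```python
# 100 distinguishable levels each of brightness, hue, and saturation
# (figures from the paragraph above) combine multiplicatively.
brightness_levels, hue_levels, saturation_levels = 100, 100, 100
total_colors = brightness_levels * hue_levels * saturation_levels
print(total_colors)  # 1000000: about one million distinguishable colors
```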

color facts

Colors are insubstantial, cannot change state, have no structure, do not belong to objects or events, and are results not processes {color facts}.

number of colors

Colors range continuously from red to scarlet, vermilion, orange, yellow, chartreuse, green, spring green, cyan, turquoise, blue, indigo, violet, magenta, crimson, and back to red. People can distinguish 150 to 200 main colors and seven million different colors.

discrimination

Humans can discriminate colors better from cyan to orange than from cyan through blues, purples, and reds.

people see same spectrum

Different humans see similar color spectra, with same colors and color sequence. Adults, infants, and animals see similar color spectra. Colorblind people have consistent but incomplete spectra.

purity

For each person, under specific viewing conditions, blue, green, and yellow can appear pure, with no other colors, but red does not appear pure.

location

Colors appear on surfaces.

adjacency

Adjacent colors affect each other and enhance contrast.

metamerism

Identical objects can have different colors. Different spectra can have the same color {metamerism}.

hue

Colors have hue. Colors respond differently as hue changes. Reds and blues change more slowly than greens and yellows.

brightness

Colors have brightness (lightness) or absence of black.

opaqueness

Colors have opaqueness. Transparency means no color.

saturation

Colors have saturation or absence of white. Different hues have different saturability and number of saturation levels.

emotion

Psychologically, red is alerting color. Green is neutral color. Blue is calming color.

depth

Blue objects appear to go farther away and contract, and red objects appear to come closer and expand, because reds appear lighter and blues darker.

Color can have shallow or deep depth. Yellow is shallow. Green is medium deep. Blue and red are deep.

lightness

Dark colors are sad because darker, and light colors are glad because lighter. Yellow is the lightest color, comparable to white. Colors darken from yellow toward red. Red is lighter than blue but darker than green. Colors darken from yellow toward green and blue. Green is lighter than blue, which is comparable to black. Therefore, subjective lightness increases from blue to red to green to yellow. See Figure 1. Lightness relates directly to transparency, unsaturability, and sparseness. Blue is dark, opaque, saturable, and dense. Red is lighter, less opaque, less saturable, and less dense. Green is light, more transparent, unsaturable, and sparse. Yellow is lightest, most transparent, most unsaturable, and sparsest.

Blue is similar to dark gray. Red is similar to medium gray. Green is similar to gray. Yellow is similar to very light gray. Magenta is similar to gray. Cyan is similar to light gray. See Figure 1.

temperature

Colors can be relatively warm or cool. Blue is coolest, then green, then yellow, and then red [Hardin, 1988]. White, gray, and black, as color mixtures, have no net temperature. Temperature relates directly to sharpness, emotion level, expansion, size, and motion toward observer. Blue is cool, is sharp and crisp, causes calmness, seems to recede, and appears contracting and smaller than red. Green has neutral temperature, is less sharp and less crisp, has neutral emotion, neither recedes nor approaches, and is neither smaller nor larger. Red is warm, is not sharp and not crisp, causes excitement, seems to approach, and appears expanding and larger than blue. See Figure 2. Red and blue are approximately equally far away from green, so green is average. Magenta has neutral temperature, because it averages red and blue. Cyan is somewhat cool, because it averages green and blue. Yellow is somewhat warm, because it averages green and red. Black, grays, and white have neutral temperature, because mixing red, green, and blue makes average temperature.

Warmness-coolness, excitement-calmness, approach-recession, expansion-contraction, and largeness-smallness relate to attention level, so temperature property relates to salience.

change

Colors change with illumination intensity, illumination spectrum, background surface, adjacent surface, distance, and viewing angle.

constancy

Vision tries to keep surface colors constant, by color constancy processes, as illumination brightness and spectra change.

white

White is relatively higher in brightness than adjacent surfaces. High colored-light intensity makes white.

black

Black is relatively lower in brightness than adjacent surfaces. Black is not absence of visual sense qualities but is a color. Low colored-light intensity makes black.

gray

Gray is relatively the same brightness as adjacent surfaces. Increasing gray intensity makes white. Decreasing gray intensity makes black. Increasing black intensity or decreasing white intensity makes gray.

red

Red light is absence of blue and green. Red pigment is absence of green, its subtractive complementary color. Red is alerting color. Red is warm color, not cool color. Red has average lightness. Red mixes with white to make pink. Spectral red blends with spectral cyan to make white. Pigment red blends with pigment green to make black. Spectral red blends with spectral yellow to make orange. Pigment red blends with pigment yellow to make brown. Spectral red blends with spectral blue or violet to make purples. Pigment red blends with pigment blue or violet to make purples. People do not see red as well at farther distances. People do not see red as well at visual periphery. Red has widest color range. Red can fade in intensity to brown then black.

blue

Blue light is absence of red and green. Blue pigment is absence of red and green. Blue is calming color. Blue is cool color, not warm color. Blue is dark color. Blue mixes with white to make pastel blue. Spectral blue blends with spectral yellow to make white. Pigment blue blends with pigment yellow to make black. Spectral blue blends with spectral green to make cyan. Pigment blue blends with pigment green to make dark blue-green. Spectral blue blends with spectral red to make purples. Pigment blue blends with pigment red to make purples. People see blue well at farther distances. People see blue well at visual periphery. Blue has narrow color range.

green

Green light is absence of red and blue. Green pigment is absence of red. Green is neutral color in alertness. Green is cool color. Green is light color. Green mixes with white to make pastel green. Spectral green blends with spectral magenta to make white. Pigment green blends with pigment magenta to make black. Spectral green blends with spectral orange to make yellow. Pigment green blends with pigment orange to make brown. Spectral green blends with spectral blue to make cyan. Pigment green blends with pigment blue to make dark blue-green. People see green OK at farther distances. People do not see green well at visual periphery. Green has wide color range.

yellow

Yellow light is absence of blue. Yellow pigment is absence of indigo or violet. Yellow is neutral color in alertness. Yellow is warm color. Yellow is lightest color. Yellow mixes with white to make pastel yellow. Spectral yellow blends with spectral blue to make white. Pigment yellow blends with pigment blue to make green. Spectral yellow blends with spectral red to make orange. Pigment yellow blends with pigment red to make brown. Olive is dark yellow-green or less saturated yellow. People see yellow OK at farther distances. People do not see yellow well at visual periphery. Yellow has narrow color range.

orange

Spectral orange can mix red and yellow. Pigment orange can mix red and yellow. Orange is slightly alerting color. Orange is warm color. Orange is light color. Orange mixes with white to make pastel orange. Spectral orange blends with spectral blue-green to make white. Pigment orange blends with pigment blue-green to make black. Spectral orange blends with spectral cyan to make yellow. Pigment orange blends with pigment cyan to make brown. Spectral orange blends with spectral red to make light red-orange. Pigment orange blends with pigment red to make dark red-orange. People do not see orange well at farther distances. People do not see orange well at visual periphery. Orange has narrow color range.

violet

Spectral violet can mix blue and red. Pigment violet has red and so is purple. Violet is calming color. Violet is cool color. Violet is dark color. Violet mixes with white to make pastel violet. Spectral violet blends with spectral yellow-green to make white. Pigment violet blends with pigment yellow-green to make black. Spectral violet blends with spectral red to make purples. Pigment violet blends with pigment red to make purples. People see violet well at farther distances. People see violet well at visual periphery. Violet has narrow color range. Violet can fade in intensity to dark purple then black.

brown

Pigment brown can mix red, yellow, and green. Brown is commonest color but is not spectral color. Brown is like dark orange pigment or dark yellow-orange. Brown color depends on contrast and surface texture. Brown is not alerting or calming. Brown is warm color. Brown is dark color. Brown mixes with white to make pastel brown. Pigment brown blends with other pigments to make dark brown or black. People do not see brown well at farther distances. People do not see brown well at visual periphery. Brown has wide color range.

color space with orthogonal vectors

Simple color space can have orthogonal red, green, and blue coordinates {color space with orthogonal vectors}, with unit vectors at (1,0,0) for red, (0,1,0) for green, and (0,0,1) for blue. Adding red, green, and blue coordinates makes the resultant-vector color.

Brightness is resultant-vector length. For example, bright green can have vector (0,9,0), with length 9. Bright green (0,9,0) and bright red (9,0,0) can add to vector (9,9,0), with length 9 * 2^0.5.

Hue is resultant-vector direction. For example, unit red and unit green can add to yellow (1,1,0).

Saturation is resultant-vector angle to the color-space diagonal. For example, unit red, unit green, and unit blue add to white (1,1,1), which is on the diagonal and so has 0% saturation. Unit red and unit blue add to magenta (1,0,1), which lies on a cube face away from the diagonal, at about a 35-degree angle (a single primary such as red (1,0,0) reaches the model's maximum, about 55 degrees), and so is highly saturated.
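
A numeric sketch of this vector model, using the definitions above (brightness as vector length, saturation as angle to the gray diagonal); the helper names are illustrative:

```python
import math

def length(v):
    """Brightness: length of the (r, g, b) resultant vector."""
    return math.sqrt(sum(c * c for c in v))

def angle_to_diagonal_deg(v):
    """Saturation: angle between color vector v and the gray diagonal (1,1,1), in degrees."""
    diag = (1.0, 1.0, 1.0)
    dot = sum(a * b for a, b in zip(v, diag))
    cos = dot / (length(v) * length(diag))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

print(length((0, 9, 0)))                            # 9.0: bright green
print(length((9, 9, 0)))                            # 9 * 2**0.5 ≈ 12.73
print(angle_to_diagonal_deg((1, 1, 1)))             # 0.0: white, on the diagonal
print(round(angle_to_diagonal_deg((1, 0, 1)), 1))   # 35.3: magenta
print(round(angle_to_diagonal_deg((1, 0, 0)), 1))   # 54.7: pure red, the maximum
```

The computed angles show why the model's maximum is about 55 degrees (a single primary) rather than 45: magenta, a two-primary mixture, sits at about 35 degrees.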

1-Consciousness-Speculations-Space

cross-sections and three-dimensional space

Flow cross-sections have two dimensions and can represent surfaces. Flow cross-sections can represent three dimensions {cross-sections and three-dimensional space}. To represent a squat cylinder, cross-section left region can represent cylinder top layer, middle region can represent cylinder middle layer, and right region can represent cylinder bottom layer. Alternatively, the three regions can interleave throughout cross-sections, with cross-section points having top-, middle-, and bottom-layer points. Because cross-sections can represent three dimensions, circuit flows can represent three-dimensional space over time.

layers and three-dimensional space

Because layers can represent two-dimensional images, multiple layers can represent a three-dimensional image {layers and three-dimensional space}. See Figure 1.

One layer can represent a three-dimensional image by skewing. See Figure 2. Left region represents top layer. Middle region represents middle layer. Right region represents bottom layer.

One layer can represent a three-dimensional image by interleaving. See Figure 3. Evenly distributed neuron sets represent top layer, middle layer, and bottom layer.

One topographic-map neuron layer can represent three-dimensional space, and layer series can represent three-dimensional space over time.
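
The interleaving idea can be sketched with plain nested lists; the specific layout (the layer index interleaved along one axis) is an illustrative assumption:

```python
# A three-layer (3-D) volume stored in a single 2-D sheet by distributing
# each layer's points evenly, then recovered. Layout: volume[z][y][x]
# maps to sheet[y][x * depth + z].

def interleave(volume):
    """volume[z][y][x] -> sheet[y][x*depth + z]: one 2-D sheet."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    return [[volume[z][y][x] for x in range(width) for z in range(depth)]
            for y in range(height)]

def deinterleave(sheet, depth):
    """Recover volume[z][y][x] from the interleaved 2-D sheet."""
    height, width = len(sheet), len(sheet[0]) // depth
    return [[[sheet[y][x * depth + z] for x in range(width)]
             for y in range(height)] for z in range(depth)]

# A three-layer, 2x2 "squat cylinder": top, middle, bottom layers.
volume = [[[1, 2], [3, 4]],      # top
          [[5, 6], [7, 8]],      # middle
          [[9, 10], [11, 12]]]   # bottom

sheet = interleave(volume)
print(sheet)  # [[1, 5, 9, 2, 6, 10], [3, 7, 11, 4, 8, 12]]
assert deinterleave(sheet, 3) == volume  # round trip recovers the volume
```

Every sheet point carries adjacent top-, middle-, and bottom-layer values, as in the interleaving description above, and the round trip shows no depth information is lost.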

network to space

A network of nodes and links among nodes {network to space} can represent space. Sense processing uses neuron assemblies to represent nodes and links.

semi-space

Two-dimensional surfaces {pre-space} {semi-space} can add relative distance information to represent three-dimensional spaces. Semi-spaces are like two-and-a-half-dimension sketches [Marr, 1982].

Sense, and computer, processing uses intensity variations to find symbolic primitives, such as zero crossings, edges, contours, and blobs; detect boundaries and brightnesses; and represent two dimensions. From the primitives, sense and computer processing finds relative surface distances, depths, contours, and orientations and uses surface shading, orientation, scaling, and texture to find object and observer spatial relations, to simulate three dimensions. Later, sense and computer processing uses memory and global information to integrate the two-dimensional and depths-and-distances descriptions to build a three-dimensional model for object representation, manipulation, and recognition.
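
One primitive above, zero crossings, can be sketched in one dimension: a sign change in the second difference of an intensity profile marks an edge. This is an assumption-level toy, not Marr's actual operator (which convolves with a Laplacian of Gaussian before finding zero crossings):

```python
# Find edges in a 1-D intensity profile as zero crossings of the
# second difference (a toy stand-in for the Laplacian-of-Gaussian).

def second_difference(signal):
    return [signal[i - 1] - 2 * signal[i] + signal[i + 1]
            for i in range(1, len(signal) - 1)]

def zero_crossings(signal):
    """Signal indices where the second difference changes sign."""
    d2 = second_difference(signal)
    return [i + 1 for i in range(len(d2) - 1)
            if d2[i] * d2[i + 1] < 0]

# A dark region meeting a bright region: one boundary near index 3-4.
intensity = [10, 10, 10, 14, 60, 88, 90, 90, 90]
print(zero_crossings(intensity))  # [3]
```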

stimuli as media

Stimuli can serve as substrates/media on which to display sensations {stimuli as media}. Sense, and perhaps computer, processing can simulate stimulus input streams from physical space.

surface elements and mental space

Mental-space points are surface elements (differential surfaces), which have direction, distance, and orientation {surface elements and mental space}. Surface elements link to make space. Sense, and perhaps computer, processing can make surface elements in space.

1-Consciousness-Speculations-Space-Biology

adjacency and mental space

Skin touches objects, and touch receptors receive information about objects adjacent to body {adjacency and mental space}. As body moves around in space, mental space expands by adding adjacency information.

angle-comparison computations calculate distances

Eye-accommodation-muscle feedback to vision depth-calculation processes can calculate distances up to two meters. Beyond that range, metric depth cues allow calculating all distances. Observing objects requires at least two eye fixations, which allow vision processing to calculate two different perceived angles, for two different eye, head, and body positions. Vision and body angle-comparison computations can calculate line, surface, feature, and object distances {angle-comparison computations, distances} {distances, angle-comparison computations}.

two sight-line to surface angles

At first eye fixation on a line or surface point, vision calculates a sight-line to point angle. At second eye fixation on a collinear or co-surface point, vision calculates a different sight-line to point angle, because eye, head, and/or body have rotated. At nearest possible line or surface point, sight-line to point angle is 90 degrees. At farthest possible line or surface point, sight-line to point angle is 0 degrees. Angle decreases as distance increases. If angle of sight-line to line or surface is more perpendicular, line or surface point is nearer. If angle of sight-line to line or surface is less perpendicular, line or surface point is farther.

Comparing sight-line angles to two collinear or co-surface points can calculate distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.

two visual angles

At first eye fixation on an object edge or contour, object has a retinal visual angle, which gives relative object size. At second eye fixation on a different object edge or contour, object has a different retinal visual angle, because eye, head, and/or body have rotated. If sight-line to object edge or contour angle is 90 degrees, visual angle is maximum. At other angles, visual angle is less. Visual angle decreases as distance increases. If sight-line to object edge or contour is more perpendicular, visual angle is more. If sight-line to object edge or contour is less perpendicular, visual angle is less.

Comparing first and second visual angles can calculate object distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.

two sight-line to point angles

At first eye fixation on an object point, sight-line to point has an angle. At second eye fixation on the same object point, sight-line to point has a different angle, because eye, head, and/or body have rotated. At nearest possible object point, sight-line to point angle is 90 degrees. At other object points, sight-line to point angle is less. Angle decreases as distance increases. If sight-line to object point is more perpendicular, object is nearer. If sight-line to object point is less perpendicular, object is farther.

Comparing first and second angles can calculate object distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther.

two concave or convex corner angles

The first eye fixation on a concave or convex corner determines its angle. The second eye fixation determines a different angle, because eye, head, and/or body have rotated. Smaller-angle concave corners are farther, and larger-angle concave corners are nearer. Smaller-angle convex corners are nearer, and larger-angle convex corners are farther.

Comparing first and second corner angles can calculate distance. Angle difference varies inversely with distance. Larger angle change means object is nearer. Smaller angle change means object is farther. Angles and vertices use the same reasoning as corners.

body angle comparisons

First eye fixation and second eye fixation have two different eye, head, and/or body positions. The kinesthetic system determines their angle sets and sends kinesthetic angle-difference information to association cortex for comparison with the corresponding vision angle-difference information.

integration

Comparing the two sets of angle differences calculates absolute metric distances. Accumulating distance information allows building three-dimensional-space information.
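
The comparison above amounts to triangulation: a known baseline between the two fixation positions plus the two sight-line angles gives absolute metric distance by the law of sines. A sketch, with illustrative function name and angle convention:

```python
import math

def distance_from_two_fixations(baseline, angle1_deg, angle2_deg):
    """Distance from the first fixation position to the target.

    baseline: separation between the two fixation positions (meters).
    angle1/angle2: interior angles between the baseline and each
    sight-line, in degrees (an assumed convention).
    """
    a1 = math.radians(angle1_deg)
    a2 = math.radians(angle2_deg)
    third = math.pi - a1 - a2  # remaining angle: the sight-line change (parallax)
    # Law of sines: target side / sin(angle2) = baseline / sin(third)
    return baseline * math.sin(a2) / math.sin(third)

# Head translates 0.1 m between fixations; sight-lines at 80 and 85 degrees.
d = distance_from_two_fixations(0.1, 80.0, 85.0)
print(round(d, 3))  # 0.385
```

The remaining (third) angle is the change in sight-line direction between fixations; a larger change gives a nearer target, matching the inverse relation stated above.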

body surface and mental space

Sensations impinge on body surface in repeated patterns at touch receptors. Nervous system occupies three dimensions and has information about receptor locations. From receptor activity patterns, nervous system builds a three-dimensional sensory surface {body surface and mental space}.

carrier waves and mental space

Senses make a global carrier-wave function, and whole brain-and-body has a carrier-wave function {carrier waves and mental space}. Global functions are regular and form coordinate grids, establishing egocentric space. Local disturbances affect global function to indicate location.

convexity and concavity and mental space

Frontal-lobe region derives three-dimensional images from two-dimensional topographic maps by assigning convexity, concavity, and boundary edges [Horn, 1986] to lines and vertices and making convexities and concavities consistent {convexity and concavity and mental space}.

cortical processing and mental space

Primary-visual-cortex topographic map represents scene intensities. After primary visual cortex, cortical topographic-map neurons {cortical processing and mental space} respond to orientations, locations, and distances [Burkhalter and Van Essen, 1986] [DeValois and DeValois, 1975] [Newsome et al., 1989] [Tootell et al., 1997] [Zeki, 1985]. Topographic maps use thresholds to make boundaries and regions. Vision system sends information to motor and other sense systems [Bridgeman et al., 1997] [Owens, 1987]. Topographic maps use movements, angles, and perspective to add distance and depth by interpolation and extrapolation and represent egocentric space. Brain integrates and synthesizes spatial information [Andersen et al., 1997] [Gross and Graziano, 1995] [Olson et al., 1999].

frames and mental space

Nose, cheeks, and eyebrow ridges frame vision scenes. Silent regions frame sounds. Untouched surrounding areas frame pressures. Neutral-temperature regions frame warm or cool areas. Nose touch sensations frame odors. Mouth touch sensations frame tastes. Silent sensors frame active sensors. Sensations have frames that provide context for near and far locations {frames and mental space}.

memory and mental space

Long-term memory recall makes space {memory and mental space}. Short-term memory builds space modifications. Waking activates memory, which activates space. Perception and recall occur on space background. Memory is stronger than perception, because people can remember images and override perceptions.

motions and mental space

Retinal regions can receive repeated light-pattern series that correlate with motion {motions and mental space}. For example, when moving toward light source, as visual horizon lowers, source appears lower in visual field. When moving away, source appears higher in visual field. When turning, rotations are around sense organ.

When people move, other objects do not move. Correlated movements belong to body region, and correlated non-movements belong to other region. Moving establishes a boundary between adjacent moving and non-moving regions. Moving is inside region, and non-moving is outside region. In and out make a space axis. When finger slides across surface, or feet walk across ground, touch correlates with vision moving/non-moving boundary.

motor feedback and space

Brain senses, moves, senses, moves, and so on, to have feedback, so brain processes are multisensory and sensorimotor. Visual-motor and touch-motor feedback loops interact to locate surfaces {motor feedback and space}, also using kinesthetic and vestibular systems. Vertical gaze center near midbrain oculomotor nucleus detects up and down motions [Pelphrey et al., 2003] [Tomasello et al., 1999]. Horizontal gaze center near pons abducens nucleus detects right-to-left and left-to-right motions [Löwel and Singer, 1992].

multimodal neurons and mental space

Midbrain tectum and cuneiform nucleus have multimodal neurons, whose axons envelop reticular thalamic nucleus and other thalamic nuclei to map three-dimensional space {multimodal neurons and mental space}.

multiple neurons for multiple space points

To experience multiple space points simultaneously, neuron assemblies have 200-millisecond intervals in which events are simultaneous {multiple neurons for multiple space points}.

topographic map continuum

Topographic-map neurons, dendrites, axons, and synapses are so numerous that overlapping forms a continuum {topographic map continuum}. Perhaps, the continuum carries analog signals and geometric figures, like TV screens, and models continuous space.

1-Consciousness-Speculations-Space-Biology-Boundaries

analog to digital conversion and mental space

Neuron thresholds reduce instantaneous below-threshold input to 0 and set instantaneous above-threshold input to 1. Thresholds differentiate regions by establishing boundaries {analog to digital conversion and mental space}.
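
A minimal sketch of this thresholding, with illustrative values: graded (analog) activity becomes a binary (digital) region, and the boundary sits where activity crosses threshold.

```python
# Threshold-based boundary formation: below-threshold input -> 0,
# above-threshold input -> 1, and region edges fall at the crossings.

THRESHOLD = 0.5

def binarize(activity, threshold=THRESHOLD):
    return [1 if a > threshold else 0 for a in activity]

def boundaries(binary):
    """Indices where the binary value changes: the region edges."""
    return [i for i in range(1, len(binary)) if binary[i] != binary[i - 1]]

activity = [0.1, 0.2, 0.4, 0.7, 0.9, 0.8, 0.6, 0.3, 0.1]
region = binarize(activity)
print(region)              # [0, 0, 0, 1, 1, 1, 1, 0, 0]
print(boundaries(region))  # [3, 7]: one region with two edges
```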

boundary and mental space

Brain can compare outgoing (inner) and incoming (outer) signals, which differ. Inner signals have loops and loop patterns and include memories and imaginings. Outer signals have non-looping patterns and include stimuli. Nervous system builds a boundary {boundary and mental space} between inner (self) and outer (other). Boundary is at nervous-system edges. Waking and dreaming rebuild the boundary.

inequalities and boundaries

To trigger a neuron impulse, membrane potential, caused by input neuron impulses, must be greater than neuron threshold potential. Neuron threshold potentials establish inequalities. Lower potential has no effect. Higher potentials cause one impulse. (Higher potentials over time cause higher impulse rate.) Inequalities establish boundaries {inequalities and boundaries}. At space boundaries, one region has response above threshold, and adjacent region has response below threshold. (Neuron thresholds can change.)

lateral inhibition and spatial regions

Adjacent neurons can inhibit central neuron. Such lateral inhibition reduces central-neuron activity. Lateral inhibition can contract regions {lateral inhibition and spatial regions}. Lateral inhibition can move boundaries inwards. Lateral inhibition can suppress and eliminate boundaries. Spreading activation and lateral inhibition can join or separate regions.

spreading activation and spatial regions

Central neuron can excite adjacent neurons. Such spreading activation increases adjacent-neuron activity. Spreading activation can expand regions {spreading activation and spatial regions} {spreading excitation and spatial regions}. Spreading activation can move boundaries outwards. Spreading activation can establish and emphasize boundaries. Spreading activation and lateral inhibition can join or separate regions.
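
Both mechanisms can be sketched on a one-dimensional activity profile; the neighbor weights and the 0.5 threshold are illustrative assumptions, not physiological values. A positive neighbor weight (spreading activation) expands the above-threshold region; a negative weight (lateral inhibition) contracts it.

```python
# Spreading activation vs lateral inhibition on a 1-D activity profile.

THRESHOLD = 0.5

def neighbor_sum(activity, i):
    left = activity[i - 1] if i > 0 else 0.0
    right = activity[i + 1] if i < len(activity) - 1 else 0.0
    return left + right

def apply_kernel(activity, weight):
    """Positive weight: neighbors excite (spreading activation).
    Negative weight: neighbors suppress (lateral inhibition)."""
    return [a + weight * neighbor_sum(activity, i)
            for i, a in enumerate(activity)]

def region(activity, threshold=THRESHOLD):
    return [1 if a > threshold else 0 for a in activity]

activity = [0.3, 0.4, 0.8, 1.0, 0.8, 0.4, 0.3]
print(region(activity))                      # [0, 0, 1, 1, 1, 0, 0]
print(region(apply_kernel(activity, +0.3)))  # [0, 1, 1, 1, 1, 1, 0]: expanded
print(region(apply_kernel(activity, -0.3)))  # [0, 0, 0, 1, 0, 0, 0]: contracted
```

The expanded and contracted regions show boundaries moving outward under excitation and inward under inhibition, as described above.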

1-Consciousness-Speculations-Space-Biology-Coordinates

coordinate transformation and allocentric space

People see objects in space as external and stationary (allocentric) [Rizzolatti et al., 1997] [Velmans, 1993]. Cerebellum and forebrain anticipate, coordinate, and compensate for movements.

Frontal-lobe topographic maps can represent egocentric space [Olson et al., 1999], with vertical, right-left, and front-back directions. Coordinate-origin egocenter is in head center, on a line passing through nosebridge. Space points have directions and distances from egocenter. All points make vector space.

As body, head, or eyes move, egocentric space moves, spatial axes move, and point coordinates and geometric figures transform linearly to new coordinate values [Shepard and Metzler, 1971]. Transformations are translation, rotation, reflection, inversion, and scaling (zooming). Motor processing uses tensor transform functions to describe changes from former to current output-vector field [Pellionisz and Llinás, 1982]. To maintain stationary allocentric space, so point coordinates do not change when body moves, visual processing must cancel egocentric spatial-axis coordinate transformations {coordinate transformation and allocentric space}. Visual processing inverts motor-system tensors to transform egocentric coordinate systems in opposite directions from body movements [Pouget and Sejnowski, 1997]. Topographic maps can describe tensors that transform from egocentric to allocentric space. Topographic maps can represent allocentric space.

example

Translating and rotating make spatial axes change direction. After movement, new axes relate to old axes by coordinate transformations. For example, two-dimensional vector (0,1) can translate along the y-axis to make vector (0,0), rotate both axes 90 degrees to make vector (1,0), or reflect across the x-axis to make vector (0,-1). Coordinate transformations do not change dimension number.
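
These transformations can be sketched in code. A minimal Python illustration (function names are my own, not from the source) reproduces the vector (0,1) example:

```python
import math

def translate(v, d):
    """Translate vector v by displacement d."""
    return (v[0] + d[0], v[1] + d[1])

def rotate(v, angle):
    """Rotate vector v counterclockwise by angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def reflect_x(v):
    """Reflect vector v across the horizontal axis (negate y)."""
    return (v[0], -v[1])

v = (0, 1)
print(translate(v, (0, -1)))    # translation along y-axis: (0, 0)
print(rotate(v, -math.pi / 2))  # rotating the axes +90 degrees rotates the vector -90: near (1, 0)
print(reflect_x(v))             # reflection: (0, -1)
```

Note that none of these operations changes the dimension: inputs and outputs are both two-dimensional, matching the last sentence above.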

stationary space

Perception typically maintains an absolute spatial reference frame. Stationary space allows optimum feature tracking during object and/or body motions. Moving reference frames make all motions three-dimensional, but stationary space makes many movements one-dimensional or two-dimensional.

gravity and vertical direction

Gravity exerts vertical force on feet and body. Nervous system analyzes this distributed information and defines vertical axis in space {gravity and vertical direction}.

ground and mental space

Foot motions stop at ground. Touch and kinesthetic receptors repeatedly record this information. Nervous system analyzes this distributed information and defines a horizontal plane in space {ground and mental space}. Ground nearest to eye has sight-line perpendicular to ground. Farther-away ground points have sight-lines at smaller angles. All objects are on or vertically above ground.

invariants and coordinate axes

Vision observes moving and stationary points in space with varying brightnesses and colors. Nervous system analyzes this information to detect perceptual invariants. For space, invariant points are stationary reference points. Invariant lines are stationary coordinate axes {invariants and coordinate axes}: vertical, horizontal right-left, and horizontal near-far. Because invariants stay constant over many situations, invariants can be grounds for meaning.

motions and touches

Nervous system correlates body motions and touch and kinesthetic receptors to extract reference points and three-dimensional space {motions and touches}. Repeated body movements define perception metrics. Such ratios build standard length, angle, time, and mass units that model physical-space lengths, angles, times, and masses. As body, head, and eyes move, they trace geometric structures and motions.

tracking

During body movements, neuron activations follow trajectories across topographic maps. Brain can track moving stimuli. Brain can study before and after effects by tracking stimuli.

stimuli and motions

Stimuli can trigger attention and orientation, and so body moves or turns toward or away. Different stimulus intensities cause different moving or turning rates.

distance

Because distance equals rate times time, motion provides information about distances. Brain can track locations over time. Brain can use interpolation and extrapolation.
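
Tracking locations over time with interpolation and extrapolation can be sketched as follows (a minimal Python sketch; the constant-rate assumption is mine):

```python
def interpolate(t0, x0, t1, x1, t):
    """Estimate position x at time t from positions x0 at t0 and x1 at t1,
    assuming constant rate. Works inside (interpolation) and outside
    (extrapolation) the observed interval."""
    rate = (x1 - x0) / (t1 - t0)   # distance per unit time
    return x0 + rate * (t - t0)    # distance = rate * time

# Tracked: position 2.0 at t=0 and 6.0 at t=2 (rate = 2 per unit time).
print(interpolate(0, 2.0, 2, 6.0, 1))   # interpolation at t=1: 4.0
print(interpolate(0, 2.0, 2, 6.0, 3))   # extrapolation at t=3: 8.0
```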

horizontal directions and motions

Moving toward or away from stimuli maximizes visual flow and light-intensity gradient, and establishes forward-backward direction. Moving perpendicular to sight-line to stimuli minimizes visual flow and light-intensity gradient, and establishes left-right direction.

vertical direction and motion

Body raising and lowering can indicate vertical direction.

orientation columns and direction

Vision topographic maps have orientation macrocolumns, which align and link orientations to detect line directions and establish all spatial directions {orientation columns and direction} [Blasdel, 1992].

pole and dimension

As body moves in a straight line, visual flow and light-intensity gradient establish one forward point (pole). Eye to forward point defines the forward-backward spatial dimension {pole and dimension}.

rotation centers and mental space

Body and body parts rotate around balance or equilibrium points {rotation centers and mental space}. Kinesthetic receptors send information to brain, which defines those reference points and builds three-dimensional space.

tensors and mental space

Topographic-map series can store matrices and so represent tensors {tensors and mental space}. Motor processing uses tensor transform functions to describe changes from former to current output-vector field [Pellionisz and Llinás, 1982]. Tensors can linearly transform coordinates from one coordinate system to another. Output vectors are linear input-vector and spatial-axis-vector functions. Motor-system topographic maps send vector-field output-vector spatial pattern to motor neurons. Muscles move body, head, and eye to specific space locations, or for specific distances or times. Current output-vector field differs from preceding output-vector field by a coordinate transformation.

topographic maps and coordinate axes

Topographic-map-neuron types have regular horizontal, vertical, and diagonal spacings, at different small, medium, and large distances. Neuron grids make a spatial network of nodes and links. Neuron grids allow measuring distances and angles and using coordinates. Topographic-map neuron grids have up/down, left/right, and near/far axes {topographic maps and coordinate axes}. Topographic-map spatial axes intersect to establish a coordinate origin and make a coordinate system, so points, lines, and regions have spatial coordinates.

Sensory topographic maps can have lattices of superficial pyramidal cells, whose non-myelinated non-branched axons travel horizontally 0.4 to 0.9 millimeters to synapse in clusters on next superficial pyramidal cells. The skipping pattern aids macrocolumn neuron-excitation synchronization [Calvin, 1995].

topographic maps and distances

Topographic maps have neurons specific for space locations {topographic maps and distances}. Locations involve space direction and distance. If 100 neurons are for radial distance one unit, to have same visual acuity 400 neurons must be for radial distance two units. To have less acuity, 100 neurons can be for radial distance two units.

vestibular system and direction

Vestibular-system saccule, utricle, and semicircular canals detect gravity, body accelerations, and head rotations. From that information, nervous system establishes vertical direction and two horizontal directions {vestibular system and direction}.

vision and direction

Animal eyes are right and left, not above and below, and establish a horizontal plane that visual brain regions maintain {vision and direction}. Vision processing can detect vertical lines and determine height and angle above horizontal plane. Body has right and left as well as front and back, and visual brain regions maintain right, left, front, and back in the horizontal plane.

1-Consciousness-Speculations-Space-Computer Science

models for three dimensions from two dimensions

Models can build three dimensions from two-dimensional images {models for three dimensions from two dimensions}. Stacks of two-dimensional layers can model three-dimensional space. Rotation of one two-dimensional layer can sweep out three-dimensional space.

reading and writing and mental space

Mental space has no reading or writing {reading and writing and mental space}, because output becomes input and input becomes output simultaneously and in parallel.

1-Consciousness-Speculations-Space-Computer Science-Algorithm

segmentation and mental space

Region boundaries have high contrast. Surfaces have coarser or finer and other texture types. Textures depend on surface slant, surface tilt, object size, object motion, shape constancy, surface smoothness, and reflectance. Segmentation algorithms {segmentation and mental space} separate observed regions by contrast and surface texture. Contrast and steep texture gradients define large domains. Subdomains have different surface textures.

self-calibration and mental space

Camera self-calibration algorithms can use the epipolar transformation and the image of the absolute conic in the Kruppa equations to find a standard metric and relative distances and positions {self-calibration and mental space}.

shape from shading and mental space

Vision processing can find convexities, concavities, and boundary edges. Later vision processing makes these consistent to build three-dimensional space {shape from shading and mental space}.

structure from motion and mental space

Motions cause disparities and disparity rates that can reveal structure {structure from motion and mental space}. Bundle-adjustment algorithms can find three-dimensional scene structure and eye trajectories. First, projective reconstruction can construct the projected structure, and then Euclidean upgrading can find actual shape. Affine reconstruction can use Tomasi-Kanade factorization.

synthesis algorithms

Synthesis algorithms {synthesis algorithms} compare vectors and coordinates to build images and space.

vision algorithms and space

Vision algorithms can use fiducials as reference points for calibration to make space coordinates {vision algorithms and space}.

1-Consciousness-Speculations-Space-Mathematics

continuity and mental space

Continuous surfaces have no gaps and no overlaps. Phenomenal space seems continuous {continuity and mental space}.

cross products and mental space

Two vectors define one plane or surface. Two vectors can multiply to make a vector perpendicular to both vectors. Perhaps, mental space gets the distance dimension from cross products {cross products and mental space}.

derivatives and mental space

Derivatives indicate changes, gradients, and directions at space and time points. Second derivatives indicate gradient and direction changes and so apply to curves. Perhaps, brain calculates derivatives to find directions and surfaces, and their curvatures, and so build mental space {derivatives and mental space}.

generators and mental space

Brain not only represents space, but also generates/constructs space {generators and mental space}. From an origin, each space direction has a function that indicates distance and color. Functions extend from origin, in brain, into space, outside body, so there is no action at a distance. Space is nonphysical abstract vector space.

mathematics and mental space

Mathematical ideas can relate to mental space {mathematics and mental space}. Neuron assemblies can represent mathematical objects and mathematical operations.

number

Over a one-millisecond interval, neurons have (1) or do not have (0) an impulse, so neuron series can represent binary numbers. Over a one-second interval, one neuron's series of 0s and 1s can represent a binary number with 1000 digits.

Over a one-second interval, single-neuron axon-impulse number or released-neurotransmitter-packet number can represent a whole number. Neurons have impulse frequencies up to 800 Hz, so one neuron can represent numbers from, say, 1 to 800.

Neuron series can use positional notation to represent larger numbers. For example, one neuron can represent numbers from 0 to 99, and a second neuron can represent multiples of 100 from 0 to 9900, so the neuron pair can represent numbers from 0 to 9999.
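
The two-neuron positional scheme can be sketched in Python (a minimal illustration; the encode/decode names are my own):

```python
def encode(n):
    """Split n (0..9999) into two 'neuron' counts, each 0..99:
    the high neuron counts hundreds, the low neuron counts units and tens."""
    assert 0 <= n <= 9999
    return (n // 100, n % 100)

def decode(high, low):
    """Positional notation: the high neuron's count is worth 100 each."""
    return high * 100 + low

print(encode(9876))    # (98, 76)
print(decode(98, 76))  # 9876
```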

number: integer

In neuron series, one neuron can represent sign, so neuron series can represent integers.

number: rational

In neuron series, one neuron can represent decimal point, so neuron series can represent rational numbers.

number: real

Real numbers have rational-number approximations, so neuron series can represent real numbers.

number: imaginary

In neuron series, one neuron can represent square root of -1, so neuron series can represent imaginary numbers.

number: complex

Complex numbers add real number and imaginary number, so two neuron series can represent complex numbers.

ratio

Neurons can compare receptive-field center input to surround input to measure stimulus-intensity ratio. Opponent processes compare inputs from two neurons to find ratio. Ratios are dimensionless, because dividing cancels units.

ratio: metrics

Comparing current and memorized ratios builds standard relative lengths, angles, and other measurement units (standardized metrics).

addition

To add two numbers, neuron series can receive input from two neuron series that represent numbers. To subtract, one input is negative.

Single neurons can accumulate membrane potential or neurotransmitter over time to represent simple summation.

addition: tables

If tables are available, arithmetic operations can use table lookup. First number is in first column, second is in second column, and answer is in third column. Neuron arrays can store number tables. Using indexes allows table lookup.
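
The table-lookup idea can be sketched directly (a minimal Python sketch; the table is a list of three-column rows, like the neuron array described above):

```python
# Hypothetical stored table: one row per addend pair,
# with columns (first number, second number, answer).
table = [(a, b, a + b) for a in range(10) for b in range(10)]

def add_by_lookup(a, b):
    """Scan rows; when the first two columns match, return the third."""
    for first, second, answer in table:
        if first == a and second == b:
            return answer

print(add_by_lookup(3, 4))   # 7
```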

multiplication

To multiply two numbers, neuron series can receive input from two neuron series that represent numbers.

multiplication: amplification

Single neurons can amplify input. Cell body priming can cause inputs to dendrites to make more membrane voltage. Axon gating near synapse can cause synapse to release more neurotransmitter. Amplification is like multiplication.

multiplication: logarithm

Neuron series can store bases and exponents, so three neuron series can represent exponentials and logarithms. Neuron-series sets can add logarithms to perform multiplications. Logarithms are smaller than the original numbers. For example, if the number is 100, the base-10 logarithm is 2: 100 = 10^2.
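
Multiplying by adding logarithms can be sketched as follows (a minimal Python sketch of the idea; the function name is illustrative):

```python
import math

def multiply_via_logs(a, b):
    """Multiply positive numbers by adding base-10 logarithms,
    then taking the antilogarithm (10 raised to the summed exponent)."""
    return 10 ** (math.log10(a) + math.log10(b))

print(round(multiply_via_logs(100, 1000)))   # 100000
```

Rounding absorbs the small floating-point error from the logarithm and power steps.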

multiplication: power and root

Powers are multiplication series: a^3 = a*a*a. Roots are fractional powers: a^0.5 is the number whose square is a. Neuron-series sets can repeat multiplications and divisions to find powers and approximate roots.

symbol

Alphabet letters and punctuation symbols can have number representations. Neuron series can represent numbers and so letters, symbols, and variables.

mathematical term

Mathematical terms are constants times variables raised to powers: a*x^b. Neuron series can represent symbols and can use powers and multiply, so five neuron series can represent mathematical terms.

polynomial

Polynomials are mathematical-term sums. Neuron-series arrays can represent mathematical terms, so neuron-series-array series can represent mathematical-term sums. For infinite polynomials, higher terms have negligibly small values, so finite polynomials can approximate infinite polynomials.

polynomial: functions

Over space, time, or numeric intervals, polynomials can represent functions, so neuron-series-array series can represent functions. Polynomials can represent periodic, trigonometric, and wave functions: sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ..., and cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + ... Polynomials can represent exponential functions: e^a = 1 + a + a^2/2! + a^3/3! + ..., and e^(i*a) = cos(a) + i*sin(a).
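
The finite-polynomial approximation can be sketched for sin(x) (a minimal Python sketch; cos and the exponential work the same way with their own terms):

```python
import math

def sin_series(x, terms=10):
    """Partial sum of sin(x) = x - x^3/3! + x^5/5! - x^7/7! + ..."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

# Ten terms already match the library value to high precision near x = 1.
print(abs(sin_series(1.0) - math.sin(1.0)) < 1e-9)   # True
```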

polynomial: factoring

Polynomials can have smaller polynomials that divide evenly into the polynomial. For example, a^2 + 2*a*b + b^2 = (a + b)^2, so (a^2 + 2*a*b + b^2)/(a + b) = (a + b). Neuron-series-array arrays can factor.

equation

Equations set two functions equal to each other: 3*x + 2 = 2*x + 3. Neuron assemblies can represent functions and the equals operation, so neuron assemblies can represent equations. Because they can subtract, factor, and divide, neuron assemblies can solve linear equations. Linear equations can approximate other equations.

equation: inequality and relation

Neuron assemblies can represent equations, so neuron assemblies can represent inequalities. Inequalities can indicate relations: more, same, and less, or before and after.

equation: system

Two or more equations with same variables are equation systems. For example, 3*x + 2*y = 6 and 2*x + 3*y = -6. Large neuron assemblies can represent an equation system. Because they can subtract, multiply, and divide, and so substitute, neuron assemblies can solve linear-equation systems. Linear-equation systems can approximate other-equation systems.
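
Solving the example system by elimination (subtracting scaled equations, then back-substituting) can be sketched as follows (a minimal Python sketch; the function name is my own):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination:
    scale the equations to cancel y, solve for x, then back-substitute."""
    x = (c1 * b2 - c2 * b1) / (a1 * b2 - a2 * b1)
    y = (c1 - a1 * x) / b1
    return x, y

# The example system: 3*x + 2*y = 6 and 2*x + 3*y = -6.
print(solve_2x2(3, 2, 6, 2, 3, -6))   # (6.0, -6.0)
```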

algebra

Algebras have elements, such as integers. Algebras have operations on elements, such as addition and multiplication. Operations on elements result in existing elements. Neuron series can represent numbers and perform arithmetic operations, so neuron assemblies can represent algebra.

calculus

All differentiations and integrations use only exponentials, multiplications, and powers. Neuron series can represent logarithms, multiplication, and powers, so neuron assemblies can differentiate and integrate.

mathematical group

Mathematical groups have elements, such as triangles. Mathematical groups have one operation, such as addition or rotation. Operations map every element to the same or another group element. For example, if element is equilateral triangle, 120-degree rotations result in same element. Tables show group-operation results for all element pairs. Neuron assemblies can represent number tables and table lookup and so represent mathematical groups.
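
The equilateral-triangle rotation group can be stored as exactly such a table (a minimal Python sketch; composition of rotations is addition of angles modulo 360):

```python
# Rotations of an equilateral triangle by 0, 120, and 240 degrees
# form a three-element group; the table lists every element pair's result.
elements = [0, 120, 240]
table = {(a, b): (a + b) % 360 for a in elements for b in elements}

print(table[(120, 240)])   # 0: the two rotations compose to the identity
print(table[(120, 120)])   # 240
```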

logic

Neuron series can represent letters and symbols, so neuron-series arrays can represent words and statements. Statements can use nested variable relations. Neuron assemblies can represent and understand grammar.

logic: truth value

Neurons can represent TRUE or FALSE by potential above threshold or below threshold.

logic: operations

Two or three neuron series can represent NOT, AND, and OR operations. NOT operations can change input into no output, or vice versa, using excitation or inhibition. AND operations add two inputs to pass high threshold, which neither one alone can pass. OR operations add two inputs to pass low threshold, which either input alone can pass.
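
These threshold descriptions can be sketched as model neurons (a minimal Python sketch; weights and thresholds are illustrative choices, with inhibition as a negative weight). The conditional p -> q = ~(p & ~q), discussed below, falls out of the same gates:

```python
def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted input sum reaches threshold, else 0."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def NOT(p):      # inhibition turns input into no output, and vice versa
    return neuron([p], [-1], 0)
def AND(p, q):   # high threshold: neither input alone can pass
    return neuron([p, q], [1, 1], 2)
def OR(p, q):    # low threshold: either input alone can pass
    return neuron([p, q], [1, 1], 1)
def IMPLIES(p, q):   # p -> q as ~(p & ~q)
    return NOT(AND(p, NOT(q)))

print(NOT(1), AND(1, 1), OR(0, 1), IMPLIES(1, 0))   # 0 1 1 0
```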

logic: tables

Logic operations can use table lookup. First variable is in first column, second variable is in second column, and truth-values are in third column. Neuron assemblies can store tables and perform table lookup.

logic: conditionals

Conditional statements combine NOT and AND operators: p -> q = ~(p & ~q). Neuron assemblies can represent NOT and AND operations and so represent conditionals.

logic: reasoning

Reasoning uses statement series. Neuron-assembly series can represent statement series and so reasoning.

computation

Neuron assemblies can represent numbers and statements and perform logic operations, so complex neuron assemblies can use programming languages and compute. Neuron-assembly activity patterns can represent cellular automata, which can simulate universal Turing machines and so compute any algorithm.
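
A cellular automaton of the kind cited can be sketched in a few lines (a minimal Python sketch of an elementary cellular automaton on a ring; Rule 110 is the rule known to support universal computation):

```python
def step(cells, rule=110):
    """One update: each cell's next state is the rule-table bit indexed
    by its three-cell neighborhood (left*4 + center*2 + right)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 10 + [1]    # a single active cell
for _ in range(5):
    row = step(row)
print(sum(row) > 0)     # True: activity persists and spreads
```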

geometry

Visual processing can represent geometric objects, relations, and operations [Burgess and O'Keefe, 2003] [Moscovitch et al., 1995]. Representations have same relative lengths, angles, and orientations as physical geometric objects in space.

Geometric objects are points, lines, angles, and surfaces. Geometric objects have location, extension, and shape. Geometric objects have brightness, hue, and saturation. Geometric-object relations are up, down, above, below, right, left, in, out, near, and far. Geometric operations are constructions, transformations, vector operations, topological operations, region marking, and boundary making and removing.

geometry: point

Dendritic-tree center-region input excites ON-center neurons. Surrounding-annulus input inhibits ON-center neurons. ON-center neurons can represent points [Hubel and Wiesel, 1959] [Kuffler, 1953].

geometry: line

Lines are point series, so ON-center-neuron series can represent straight and curved lines [Livingstone, 1998] [Wilson et al., 1990]. Neuron-series length can represent line length.

Lines are boundaries of regions. Distance and intensity change rates are greatest at boundaries.

geometry: surface

Surfaces are line series, so ON-center-neuron arrays can represent flat and curved surfaces. Distance and intensity change rates are small in surfaces. Neuron-array area can represent surface area. Line boundaries are surface edges and separate surfaces.

geometry: orientation

Lines and surfaces have orientation/direction. Topographic-map orientation columns, perpendicular to cortical neuron layers, detect orientation. Orientation columns are for specific space locations. Orientation columns are for specific line lengths and sizes. Therefore, orientation columns represent one space location, one orientation, and one line length [Blasdel, 1992] [Das and Gilbert, 1997] [Dow, 2002] [Hübener et al., 1997] [LeVay and Nelson, 1991].

geometry: angle

For same space location and line length, adjacent orientation columns detect orientations. Neuron assemblies calculate plane angles between two line orientations or solid angles between three line orientations. Object and body rotation movements have angle changes.

geometry: geometric figures

Neuron assemblies can represent points, lines, orientations, angles, and surfaces, so neuron assemblies can represent geometric figures, such as spheres, cylinders, and ellipsoids.

geometry: distance

Neuron-series length can represent distance between two points. Neuron series can have all orientations, so neuron series can detect distance in any direction.

Topographic-map orientation columns calculate line and surface orientations. At farther distances, concave angles appear smaller, and convex angles appear larger.

Closer regions are brighter, and farther regions are darker, so neuron excitation can estimate distance.

Closer surfaces have larger average surface-texture size and larger spatial-frequency-change gradient. Neuron assemblies can detect surface texture and spatial-frequency-change gradients and estimate distance.

Object movements and body movements occur over distances, and neuron assemblies can track trajectories.

geometry: triangulation

To find triangle lengths and angles, neuron assemblies can use trigonometry cosine rule or sine rule.

geometry: trilateration

Trilateration finds point coordinates, using three reference points. The four points form a tetrahedron, with four triangles. Distance from first reference point defines a sphere. Distance from second reference point defines a circle on the sphere. Distance from third reference point defines two points on the circle. Neuron assemblies can measure distances between points and angles, and can use the cosine rule or sine rule to find all triangle angles and sides.
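
The idea can be sketched in the simpler planar case, where three reference distances fix a point uniquely (a minimal Python sketch, assuming the references are not collinear; subtracting pairs of circle equations gives a 2x2 linear system):

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Find the point at distances r1, r2, r3 from reference points p1, p2, p3."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtract circle 2 from circle 1, and circle 3 from circle 1,
    # to cancel the quadratic terms and leave linear equations in x, y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Recover the point (3, 4) from its distances to (0,0), (10,0), and (0,10):
x, y = trilaterate_2d((0, 0), 5.0, (10, 0), math.hypot(7, 4), (0, 10), math.hypot(3, 6))
print(round(x, 6), round(y, 6))   # 3.0 4.0
```

In three dimensions the same subtraction idea applies, with a third linear equation and an extra sign ambiguity resolved by the third reference distance.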

Animals continually track distances and directions to distinctive landmarks. Animals navigate environments using maps with centroid reference points and gradient slopes [O'Keefe, 1991].

geometry: space

Brain can represent perceptual space in topographic maps [Andersen et al., 1997] [Bridgeman et al., 1997] [Gross and Graziano, 1995] [Owens, 1987] [Rizzolatti et al., 1997].

Midbrain tectum and cuneiform nucleus have multimodal neurons, whose axons envelop reticular thalamic nucleus and other thalamic nuclei to map three-dimensional space.

Vision processing derives three-dimensional images from two-dimensional ones by assigning convexity and concavity to lines and vertices and making convexities and concavities consistent.

geometry: spatial axes

Vestibular-system saccule, utricle, and semicircular canals establish vertical axis by determining gravity direction and horizontal directions by detecting body accelerations and head rotations. Three planes, one horizontal and two vertical, define vertical axis and two horizontal axes.

Animal eyes are right and left, not above and below, and establish horizontal plane that visual brain regions maintain.

Vision processing can detect vertical lines and determine height and angle above horizontal plane. Vertical gaze center near midbrain oculomotor nucleus detects up and down motions [Pelphrey et al., 2003] [Tomasello et al., 1999].

Body has right-left and front-back, and visual brain regions maintain right-left and front-back in horizontal plane. Horizontal gaze center near pons abducens nucleus detects right-to-left motion and left-to-right motion [Löwel and Singer, 1992].

Topographic-map orientation columns with same orientation align and link to establish coordinate axes, in all directions.

Sense and motor topographic maps have regularly spaced lattices of special pyramidal cells. Non-myelinated and non-branched superficial-pyramidal-cell axons travel horizontally 0.4 to 0.9 millimeters and synapse in clusters on next superficial pyramidal cells. The skipping pattern aids macrocolumn neuron-excitation synchronization [Calvin, 1995]. The regularly spaced pyramidal-cell lattice can represent topographic-map reference points and make vertical, horizontal, and other-orientation axes. Lattice helps determine spatial frequencies, distances, and lengths.

Medial entorhinal cortex has some grid cells that fire when body is at many spatial locations, which form a triangular grid [Sargolini et al., 2006].

geometry: coordinate system

Vision processing relates spatial axes to make a coordinate system. Spatial axes intersect at a coordinate origin. In spherical coordinates, space points have distance to origin, azimuthal angle in the horizontal plane, and polar angle from the vertical axis. In Cartesian coordinates, points have distances along vertical, right-left-horizontal, and front-back-horizontal axes. Brain and external three-dimensional space use the same spatial axes and coordinate system. Coordinate origin establishes an egocenter, for egocentric space.
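
The relation between the two coordinate descriptions can be sketched as a conversion (a minimal Python sketch, using the common convention of azimuth in the horizontal plane and polar angle measured from the vertical axis):

```python
import math

def spherical_to_cartesian(r, azimuth, polar):
    """Convert (distance to origin, horizontal angle, angle from vertical axis)
    to Cartesian (x, y, z)."""
    x = r * math.sin(polar) * math.cos(azimuth)
    y = r * math.sin(polar) * math.sin(azimuth)
    z = r * math.cos(polar)
    return x, y, z

# A point one unit away, straight ahead on the horizontal plane:
x, y, z = spherical_to_cartesian(1.0, 0.0, math.pi / 2)
print(round(x, 6), round(y, 6), round(z, 6))   # 1.0 0.0 0.0
```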

tensor

Neuron series can represent number magnitudes and space directions, so two neuron series can represent mathematical vectors. Neuron arrays can represent vectors and motions, so they can represent spinors as rotating vectors.

Neuron arrays can represent vectors, so they can represent matrices, which can represent surfaces. Matrices can be two-dimensional tensors, which have all vector-component products as elements. For example, |x1*x2, y1*x2 / x1*y2, y1*y2| for vectors (x1, y1) and (x2, y2) has four elements. |x1*x2, y1*x2, z1*x2 / x1*y2, y1*y2, z1*y2 / x1*z2, y1*z2, z1*z2| for vectors (x1, y1, z1) and (x2, y2, z2) has nine elements.

Three-dimensional tensors have all vector-component products. Neuron arrays can represent matrices, so neuron assemblies can represent three-dimensional tensors. During eye, head, and body movements, tensors can transform egocentric-space coordinates to maintain stationary allocentric-space coordinates.
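
The all-component-products construction is the outer product, sketched below (a minimal Python sketch; the row/column layout is a choice of convention):

```python
def outer(u, v):
    """Outer product: the matrix of all component products u_i * v_j."""
    return [[ui * vj for vj in v] for ui in u]

m2 = outer((1, 2), (3, 4))        # two-component vectors: four elements
m3 = outer((1, 2, 3), (4, 5, 6))  # three-component vectors: nine elements
print(m2)                             # [[3, 4], [6, 8]]
print(sum(len(row) for row in m3))    # 9
```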

self-reference and mental space

Gödel numbers can contain compressed-information descriptions of themselves. Nesting allows self-reference. Topographic maps can contain descriptions of themselves. Topographic-map space information can contain space-information descriptions. Mental space can contain space descriptions. Complete brain-based mental-space descriptions can contain mental space {self-reference and mental space}. By nesting, mental space can be internal, in observer, and observer can be in mental space.

space by relaxation

Color processing finds surfaces and distances by mathematical relaxation techniques that locate complete and consistent positions {space by relaxation}.

state space and mental space

Color phase space can have three spatial dimensions, one time dimension, surface orientation dimension, black-white dimension, red dimension, blue dimension, and green dimension {state space and mental space}.

1-Consciousness-Speculations-Space-Mathematics-Vectors

spinors and mental space

Spinors are rotating three-dimensional vectors or quaternions. Perhaps, spins can define space axes, and three real-number orthogonal independent spinor components make three-dimensional space {spinors and mental space}.

tensors and space

Tensors are scalars, vectors, matrices, three-dimensional arrays, and so on, that represent linear operations. Tensors can model flows and fields. Integrating tensors over one dimension decreases dimension by one. Differentiating tensors over one dimension increases dimension by one. Three tensor differentiations can build three dimensions from one scalar {tensors and space}.

1-Consciousness-Speculations-Space-Physics

ether and mental space

The ether can fill space or define space {ether and mental space}. The ether can provide a substrate for sensations and observer.

holography and mental space

Holography can make three-dimensional images in space from two-dimensional interference patterns illuminated by a coherent-light beam {holography and mental space}. Perhaps, association cortex stores two-dimensional interference patterns and makes beams. However, association cortex has no coherent beam to make interference patterns, and interference patterns have no direct relation to intensities, features, or objects. Brain does not send out signals.

projection and mental space

Projectors illuminate film or otherwise decode stored representations to create two-dimensional or three-dimensional displays in media, such as screens or monitors. Perhaps, mind is projection, and brain is projector {projection and mental space}. However, mind must know sensations, not just display them. Projection starts with a geometric figure. Projection needs something on which to project. Projection has no opposites.

quantum mechanics entanglement and non-locality

Physical interactions are local. Forces are particle exchanges. For example, masses exchange gravitons to affect each other. Force fields change space and so affect particle motions. Physical interactions do not allow action at a distance, except for quantum-mechanical entanglement. Two particles that have interacted have a joint wavefunction, made of superposition of the two particle wavefunctions. Because particle waves are infinite, the joint wavefunction is infinite. The two particles have quantum-mechanical entanglement over all space. Consciousness has experiences at distant places, with no interceding events. Perhaps, consciousness entangles everything over all space {quantum mechanics entanglement and non-locality}.

non-locality

In quantum mechanics, observation at one location can appear to immediately affect another observation at a distant location. Though physical waves send information at finite speed, quantum-mechanical waves collapse everywhere at once. Perhaps, mind involves non-locality. Consciousness links separate space points, and sense system and sensation, and so is non-local.

However, brain processing does not use waves, entanglement does not include knowing, and any entanglement in brain collapses in less than a microsecond.

tunneling and mental space

Perhaps, brain has potential barriers to outside world, but mind can tunnel through barriers to experience outside world {tunneling and mental space}.

1-Consciousness-Speculations-Space-Psychology

sense qualities and mental space

People seem to experience a sensory field outside themselves [Velmans, 1993]. Sense experiences are at locations in three-dimensional space. Sense qualities are the type of thing that allows consciousness of space {sense qualities and mental space}. Experiencing mental space requires sense qualities.

surface distances and mental space

The farthest surfaces, like the sky or distant mountains, seem to be a few kilometers away. The closest surfaces, like a book, appear smaller than their retinal visual angle indicates. Perhaps, rather than varying directly with distance, perceived sizes are logarithms of distances {surface distances and mental space}.

surface texture and depth

Gradient location-orientation histograms define surface textures. People assign depth using corresponding points in stereo or successive images, and using monocular cues {surface texture and depth}. Near objects show more texture detail, and far objects show less texture detail.

surface transparency and perspective

Windowpanes and perspective paintings represent depth and three-dimensional scenes in two dimensions, and their two-dimensional surfaces are apparent. If such surfaces have no reflection or any other property and so are invisible, they represent three-dimensional space perfectly {surface transparency and perspective}.

zooming and mental space

Scaling (zooming) maintains relative distances and angles {zooming and mental space} {scaling and mental space}. Zooming in can make a finite region equivalent to an infinite region, because the boundary becomes far away. Zooming out can make large regions smaller. Attention is like zooming.

What Is Color

"How Vision Makes Light, Color, and Brightness in Space"

Overview

All brain activity is internal, but people experience colors in space. All senses convert stimuli to electrochemical neural signals, so brains cannot directly know space, time, or objects.

How do brains know the meaning of neuron-assembly pathways and signals?

How do brains know what space and its directions and distances are? From knowing, how do brains experience space around and in body? How do brains experience surfaces, features, objects, and scenes?

How do brains know what light, brightness, hue, saturation, and color are? From knowing, how do brains experience colors? How do color and brightness experiences have locations in space? What is the same, and what differs, among black, white, blue, red, green, and yellow?

The following sections, about experience/consciousness, information processing, sense physiology, sensation, and perception, provide the background needed for the last section, which explains how vision experiences light, brightness, and color in space.

Section about Experiences and Consciousness

Experiences, Space, and Time

All senses gather information about direction, distance, and spatial extension (surface and/or volume), as well as about magnitude, quality, and temporal extension. The sensory processes use selection, discrimination, association, and memory to make spatial and temporal experiences and perceptions at locations in three-dimensional space and with spatial relations.

Spatial extension, temporal extension, spatial location, spatial distance, and motion are the first steps of building space. Space is not a form of empirical intuition, because space is imperceptible and is not a cause or caused. Space could be a form of a priori intuition. Space representation could be a concept, because space has object spatial relations and geometry.

People seem to see a visual field [Jackson, 1977]. The whole visual field has its own properties, such as overall lightness or darkness and brightness and color gradients.

Experience requires associations (binding) of various perceptual features, for example, colors, brightnesses, shapes, and functions. Binding uses temporal/synchronized and spatial/shape information [Treisman, 2003], as well as sensation and function information.

As body, head, and/or eyes move, senses maintain stationary space. Stationary space models motion efficiently.

1. Space properties

Space locates all experiences and perceptions. Space is the reference frame for objects and object spatial relations, including the observer. Space itself is imperceptible. Experienced space is not a cause or caused.

Space could be finite or infinite, homogeneous or heterogeneous, and isotropic or not, and have a fixed or relative geometry and static or changing geometry.

If space is infinite, it cannot be a substance. If space is a property of a substance, then space depends on the substance (and so might not exist at some places and times).

Psychological spatial concepts derive from object location, size, and orientation perceptions. Special visual systems encode spatial properties, separately from object shapes, colors, and textures.

Space locations can be relative to retina (retinotopic coordinate) or relative to spatial reference point (spatiotopic coordinate).

Spatiotopic coordinates can be relative to body (egocentric coordinate) or to another object (allocentric coordinate). Egocentric spherical coordinates are vertical/elevation angle, horizontal/azimuth angle, and radial distance. Allocentric/geocentric rectangular coordinates are front-back, left-right, and down-up.

Allocentric coordinates can be specific to view (viewer-centered) or object itself (object-centered). People use viewer-centered coordinates in imagery. Body-centered coordinates can relate to head (craniotopic). Allocentric representations can transform to egocentric representations, and egocentric space can transform to conceptual space representations.

Conceptual space representations use Cartesian coordinates, along X, Y, and Z dimensions from origin, or polar coordinates, by radius and planar and depth angles from origin.
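
The egocentric spherical coordinates above relate to the rectangular front-back, left-right, down-up coordinates by a standard conversion; a sketch (the axis naming is an assumption):

```python
import math

def egocentric_to_rectangular(elevation_rad, azimuth_rad, distance):
    """Convert egocentric spherical coordinates (elevation above the
    horizontal, azimuth from straight ahead, radial distance) to
    front-back, left-right, and down-up rectangular coordinates."""
    horizontal = distance * math.cos(elevation_rad)  # projection onto horizontal plane
    front = horizontal * math.cos(azimuth_rad)
    right = horizontal * math.sin(azimuth_rad)
    up = distance * math.sin(elevation_rad)
    return front, right, up
```

A point straight ahead at 5 meters maps to (5, 0, 0); a point directly overhead at 2 meters maps to (0, 0, 2).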

Perception can use local coordinates for part locations, using many separate origins to form interlocking coordinate system, and global coordinates for part locations relative to one origin. Topographic maps compute locations in nearby space using body-based coordinates. Topographic maps compute locations in far space using allocentric/geocentric coordinates.

Behavior uses egocentric coordinates, compensating for body movements. Movement coordination requires only egocentric space, not images.

Animals navigate environment using map (slope-centroid model) with reference point (centroid) and gradient (slope). Mind can calculate direction and distance to target by triangulation using centroid and slope.
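
A minimal sketch of the triangulation step, assuming the slope-centroid map supplies positions in slope-aligned coordinates relative to the centroid (that representation is an assumption, not stated in the text):

```python
import math

def direction_and_distance(current, target):
    """Vector from current position to target, both given in
    centroid-relative, slope-aligned (x, y) coordinates; returns
    distance and heading angle in radians."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)
```

For example, a target at (3, 4) relative to an animal at the centroid lies 5 units away along the heading atan2(4, 3).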

2. Motions through space over time

Subject and objects have motions. Space has no motion.

True motion is through absolute space or in relation to center of mass.

Apparent motion is a change in relation to an object or objects or to observer.

3. Theories of space

Space could be:

Objective - Physically and/or mathematically real, as physical substance or mathematical entity: The universe is space.

Subjective - Ideal, mental, or mind-dependent representation, property, or relation: Space is part of experience.

Space could be:

Empirical - Information comes from experiences (and their associations). Empirical representations originate from sensory information: Space is not an object and does not make experiences and so is imperceptible. Space has geometrical structure but it is not sensed directly.

A priori - A priori representations originate from mental information, as part of mind's nature, as a mental property, or as a product of a mental function: Sensory processes work on sensory information to make space as part of perceptions.

Space could be:

Absolute - Space is a substance or substrate, independent of objects and object relations. Space is uniform and homogeneous. Different orientations have different geometry, and so are objectively different. However, different orientations of absolute space look the same. Therefore, people cannot sense absolute space, so absolute space is impossible, because things that are not discernibly different must be identical.

Relative - Space is an abstract mathematical (metaphysical) representation built by mind from experience and association among observer and real possible objects and object relations. Space and space points have geometry but are not substances. Perceiving a system of objects, observers can find locations that have specific relations to all objects, and all those (possible) locations make the space.

Some space theories are:

Naturalism - Information can only come from nature. Such information is empirical not a priori. Such information is an experience or representation (concept or intuition): Space is real and may or may not have geometrical structure.

Rationalism - Information can come from deduction, induction, and all forms of reasoning: Space has geometrical structure and is a product of reasoning but not exclusively of reasoning.

Transcendentalism - Information can come from the nature of mind. Such information is a representation. Such information is not empirical but a priori. Such information cannot be an experience, because experiences are always empirical. Such information cannot be a concept, because concepts always have higher and lower concepts, but mind is a whole with no constituents: Space is a mental representation that is an a priori intuition, not an empirical intuition. Space is not real and does not inhere in objects or their relations. Space has geometrical structure.

Realism - Nature exists independently of mind. Sensations and experience are information about real objects: Space is objective and real. Space is independent of objects and object relations. Space has geometrical structure.

Non-transcendental realism - Space is physically objective and real and/or is a mathematical coordinate system (like a Platonic Real). Space is not a concept or intuition.

Transcendental realism - Mind has an empirical or a priori concept of space as physically objective and real, as a mathematical coordinate system, and/or as metaphysical representation that inheres/supervenes in object relations. Space has geometrical structure.

Idealism - Sensations and experience are information only in mind. All objects and properties are empirical intuitions. Perceptions are mental representations: Space is a mental representation as an empirical intuition. Space has no objective existence but inheres/supervenes in object relations. Space has geometrical structure.

Non-transcendental idealism - Mind has an empirical intuition of space. Space has no objective existence and is not a concept. Space may or may not have geometrical structure.

Transcendental idealism - Mind has an a priori intuition of space. Such a space makes possible experience, perception, and representation of spatial relations and so objects outside and inside body. Such a space corresponds with the sensory processes.

Space is not dependent on object relations and does not come from experience, because perceiving distance relations requires an existing space representation, so space cannot be built from distance relations.

Space is not an empirical intuition because that would allow people to represent no space (not just empty space), but they cannot.

Space cannot be a concept, or represented by any other concepts, because there is only one space, with no parts and no constituents, and all spatial things are in that space. (However, space does not appear uniform or to have no constituents or parts, being different when close or far and in front or behind, and being filled out by perception of more objects.)

Space may or may not have geometrical structure.

Experiences

From physiology, perception, cognition, meaning, memory, imagination, and computation, vision makes experiences/qualities.

1. Experience properties

Experiences have primary, secondary, and tertiary properties, parameters, and relations.

Experiences are empirical, never a priori (though the origin may be internal to brain).

Experiences have no structure.

Experiences have no medium.

Experiences have space location, time event, energy-like intensity, and experience quality. Experiences are in three-dimensional space (not in brain), which has regions and region relations. Experiences are in one-dimensional time. Experiences are extended over time and space (possibly as high-level reverberations). Experiences use qualities that have quantities. Experiences have quantity of quality, not just a numerical value. Senses have different experiences for low, medium, and high intensity. For example, different brightnesses have different qualities. Intensity and quality interact, and have three or more parameters.

Experiences have categories, for the different senses, and subcategories for the sense's properties. For example, light evolves/develops to different experiences: brightness, darkness, and color.

Experiences/qualities can have multiple properties, such as color's location, brightness, hue, and saturation. (Physical quantities, such as mass, are in physical space and have only one property. Physical quantities have values and units.) Experiences have intensity and sense/category (such as blue).

Experiences are macroscopic, not microscopic, though brains do everything microscopically. For example, light is from a visible surface.

Experiences are continuous. There are no pixels or voxels. For example, there are no units of brightness, hue, saturation, black, white, blue, yellow, red, or green.

Experiences may be independent and unmixed (analytic), like sounds; dependent and mixed (synthetic), like brightnesses and colors; or both, like touches, smells, and tastes.

Experiences cannot change state.

2. Experiences and surfaces

Experiences define surfaces/objects and their properties.

Experiences are on continuous surfaces.

Experiences make adjacent surface points continuous. Qualities integrate datatypes into a continuous whole.

3. Sensation, memory, imagination, and dreaming have experiences

Besides experiences made from sensations, people can remember colors in space, from color names or memories. People can say red and see a sort of red in their mind.

People can imagine colors in space. People imagine a location with an object with color.

People can dream colors in space.

Also, spontaneous vision activities can evoke a sort of color in the mind.

4. Different senses have similar experience properties

Experiences are similar for different senses. Because intensity is too low to have a definite type, black, whisper, tickle, whiff, hint, and discomfort are similar. Because intensity is too high to have a definite type, white, too-loudness, too-stressful touch, sharp smell, too-strong taste, and deep pain are similar. If intensity is medium, and all subcategories are present so that there is no definite type, then gray, noise, pressure, odor, flavor, and ache are similar.

5. Experience qualities

People can experience experiences (sentience) [Armstrong, 1981]. Sentience has levels.

People can be aware of experiences. People can be aware that they sense. People can have awareness that they are aware of experiences (self-awareness) (self-consciousness) [Carruthers, 2000]. Self-consciousness has levels.

Experiences seem continuous, with no spatial or temporal gaps [VanRullen and Koch, 2003]. Experiences have no discrete units. To make continuity, inputs from small and large regions, and short and long times, integrate over space and time [Dainton, 2000].

Experiences seem to be about the objects, not about experience properties (transparency) [Dretske, 1995] [Harman, 1990] [Horgan and Tienson, 2002] [Moore, 1922] [Tye, 1995]. Experiences are transparent, with no intermediates [Kind, 2003]. People are conscious of objects, not just experiences [Rosenthal, 1986]. Consciousness is conscious of outside objects (the in-itself), which are intentional [Sartre, 1943], and such consciousness is not active. People may have a consciousness state without object.

People can have ineffable qualitative sensory experiences (qualitative consciousness) [Chalmers, 1996] [Churchland, 1985] [Clark, 1993] [Shoemaker, 1990]. They have no description except their own existence. Subjective experience is not directly communicable, because it has no units with which to measure it. Such experiences could have inversion [Block, 1980] [Shoemaker, 1981] [Shoemaker, 1982]. Such experiences could be epiphenomenal [Chalmers, 1996] [Jackson, 1982].

Experiences are immediate, and so not affected by activity, reasoning, or will [Botero, 1999]. Subjective experiences seem not to be ignorable and have self-intimation. Experiences seem indubitable, unerring, infallible, and irrevocable. Experiences seem incorrigible, and so not correctable or improvable by activity, reasoning, or will. Experiences are intrinsic, with no dependence on external processes [Harman, 1990].

Experiences are private, and so not available for others' observations or measurements.

Experiences are privileged, and so not possible to observe except from the first-person viewpoint [Alston, 1971].

Experiences are subjective, and so intrinsic, private, privileged, and not objective [Kriegel, 2005] [Nagel, 1979] [Tye, 1986]. Experiences are conscious states of a subject, and so are subjective representations. People have a subjective point of view that depends on their senses and ways of sensing (subjectivity) [Nagel, 1974]. People can know some things, but do not know many things, about their own conscious experiences [Chalmers, 2003] [Lycan, 1996] [Papineau, 2002] [Van Gulick, 1985].

6. Experience relations

Conscious representations have unity across distances, times, and categories [Cleeremans, 2003]. Features unite into objects [Treisman and Gelade, 1980]. There are representational, functional, neural, and phenomenal unities [Bayne, 2010] [Tye, 2005]. People's experiences and actions have unity of purpose.

Experiences are similar to, differ from, are opposite to, and exclude others, in a unified system of relations.

Qualitative experiences form a multidimensional system in space and time, and have specific relations [Churchland, 1995] [Clark, 1993] [Hardin, 1992]. Qualitative consciousness seemingly needs no other cause or information to have meaning. Experiences have multiple parameters, unlike physical things or properties.

Experiences are for internal use only, with no output to other people or to instruments.

Experiences are for behavior.

7. Essence/nature of experiences

Experiencing compactly represents the different forms of energy transfers of the senses in space. Sense receptors absorb energies of different forms. The form of energy absorbed for:

Vision is electromagnetic waves.

Hearing is sound waves.

Touch is forces applied over distances.

Smell is air-borne-molecule electric electron-transition energies.

Taste is water-borne-molecule electric electron-transition energies.

Pain is pain-receptor electric electron-transition energies.

Experiencing appears to reveal experience nature/essence.

People's language has experience categories/concepts.

People have knowledge/beliefs about experience. Seeing experiences can cause belief that the experience is present. People's knowledge about experience is not the experience itself [Jackson, 1982] [Jackson, 1986].

Experience can have example objects as paradigms, such as cherries for cherry red.

Experiences could be phenomena, experiences, perceptual features, conceptual or non-conceptual representations, elements in a relational/comparative system, and/or fundamental epistemological entities. Experience properties (such as hue, saturation, and brightness) could be properties in experiences or representations of object properties.

8. Experiences are not processes, properties, states, structures, or substances

Experiences are not processes, properties, states, structures, or substances. Experiences are results not processes, do not belong to objects or events, cannot change state, have no structure, and are insubstantial.

Processes (such as higher-order thinking) input, transform, and output information. Making experiences involves processing information. However, experiences appear to be results not processes.

Properties are categories, such as lightness, with values. Experiences have properties. However, experiences appear to be whole things, not properties of something else.

States are distributions of positions, momenta, energies, and times. Experiences have positions and times. However, experiences have no momenta or energies.

Structures are feature and object patterns. Experiences are of features and objects. However, experiences appear to be wholes, not arrangements of parts.

Substances may be physical or nonphysical and have properties. Experiences do not have motions. Experiences relate to physical properties. However, experiences are of, or on, substances, not substances themselves.

9. Experiences are not physical, chemical, physiological, or informational

People cannot experience physical things, which are only radiations, molecular structures, vibrations, and electrical forces. Experiences are not physical. They are not a substance and have no structure. They have no particles, masses, charges, resonances, or amplitudes. They have no motions, translations, kinetics, dynamics, vibrations, oscillations, rotations, waves, flows, or frequencies. They have no forces or energies.

People cannot experience physiology or chemistry. Experiences are not chemical or physiological. They are not electrical or chemical flows, other motions, or potentials. They have no chemical or electrical reactions or interactions. They have no electrochemical structures or internal changes.

People cannot experience information. Experiences are not information, information structures, information transfers, inputs, outputs, or information processing. They are not datatypes, files/tables, or databases. They do not use coding or language.

9.1. Not photons or particles

Photons are microscopic and emit, or reflect (absorb and emit), from surfaces.

However, experiences are not photons (or particles), because they are continuous, not discrete, and macroscopic, not microscopic.

Brightness cannot be photon number. Color cannot be photon frequency.

9.2. Not flows

Flows can be discrete or continuous. Flows can be fast or slow. Flows can be smooth or turbulent. Flows can have density and viscosity. Flows can expand or contract.

However, experiences are not flows, because there is no motion.

Brightness cannot be slow to fast flow or weak to strong flow. Darkness cannot be continuous smooth slow flow. Lightness cannot be continuous turbulent fast flow. Color temperature, chroma, strength, and other properties cannot be flow expansion/contraction, viscosity, or density.

Yellow cannot be turbulent, fast, low-density, weak, sparse, expanding, high activity, warm, and more-chromatic flow. Blue cannot be smooth, slow, high-density, strong, dense, contracting, low activity, cool, and less-chromatic flow. Green cannot be turbulent, fast, low-density, weak, sparse, contracting, low activity, cool, and less-chromatic flow. Red cannot be smooth, slow, high-density, strong, dense, expanding, high activity, warm, and more-chromatic flow. Black cannot be no or low flow. White cannot be all types of flow added together.

Colors have brightness seen over space, but colors are non-physical and brightness/color is not a physical flow: not light rays, not fluid flows, and not particle flows:

Electromagnetic waves (light rays) have frequency, amplitude, and phase. If colors were waves, blue, yellow, red, and green amounts would be amplitudes of a frequency; white amount would have to be amplitude of highest frequency; and black amount would have to be amplitude of lowest frequency, all at same phase.

Fluid flows can be streamline/laminar (linear), with no sideways component, or turbulent (nonlinear), with one or two sideways components. If colors were flows, dark/black would be streamline-flow speed or amount, with small values for the quality coordinates; light/white would be maximum-turbulent-flow speed or amount, with large values for the quality coordinates; and hues would be turbulent-flow speed or amount, with intermediate values for the quality coordinates.

Semiconductors have locations with electrons (which have negative charge) and locations with missing electrons ("holes", which effectively have positive charge), and charges flow under a voltage difference. If colors were particle flows, white/light, black/dark, and hues would be six different "charged"-flow amounts or speeds.

9.3. Not waves

Waves have frequency and amplitude, like light.

However, experiences are not waves, because there can be no wave propagation. Experiences do not use temporal frequencies or amplitudes.

Brightness cannot be wave amplitude or number of waves. Color cannot be wave frequency.

9.4. Not static structures

Periodic/sinusoid structures vary in spatial frequency (cycles per meter, which is inversely related to sinusoid wavelength), relative high and low amplitudes, orientation angle to vertical or horizontal, and phase shift compared to other sinusoids. Example structures are diffraction gratings, rolling hills, fingerprints, standing waves, and radial cross-sectional structures.

However, experiences are not periodic structures.

Brightness cannot be sinusoid amplitude or number. Color cannot be frequency over space.
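
The sinusoid parameters that this subsection rules out (spatial frequency, amplitude, phase) can be made concrete for a one-dimensional grating; a sketch with illustrative names:

```python
import math

def grating_value(x_m, cycles_per_m, amplitude=1.0, phase_rad=0.0):
    """Sample a 1-D sinusoid structure: spatial frequency in cycles per
    meter (the inverse of the wavelength), with amplitude and phase."""
    return amplitude * math.sin(2.0 * math.pi * cycles_per_m * x_m + phase_rad)
```

A grating of 4 cycles per meter repeats every 0.25 m and peaks a quarter-wavelength (0.0625 m) after each zero crossing.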

9.5. Not a physical property

Colors have properties seen over space, but colors are non-physical and colors are not physical properties. Physical properties, such as energy, density, viscosity, compressibility, pressure, temperature, surface tension, resistance, and impedance, do not change position and have only one phase/dimension/coordinate, so color cannot be like those properties.

10. Purpose of experiences

Why did consciousness evolve/develop?

10.1. Response speed

The main purpose is to speed up responses, because being able to have experiences is like pre-tensioning, priming, and being ready. Consciousness maintains signals at a higher state, shortening response time. Such states are good for alertness to new signals and for attention to meaningful signals.

10.2. Better perception

Experiences aid perception by marking boundaries and regions better, optimized for distinguishing and categorizing things in nature.

Experiences aid perception by increasing energy differences to signify importance and aid attention.

Experiences aid perception by adding meaning for better memory and recall.

Experiences differentiate sensations from different senses at the same location.

10.3. Stability

Consciousness sets up a mostly unchanging environment that can persist to allow continuous behaviors with no outside input.

Consciousness keeps the sensed world and mind going when signals are absent, minimal, or fade, such as in unchanging and slightly changing environments.

10.4. Correlate senses and actions

Experiences associate sensations from different senses. Experiences correlate perceptions among brain regions, such as sound and sight location, and correlate actions and perceptions, such as odor smell and nose touch. All sense experiences share experienced space.

Experiences associate perceptions of same sense at different locations.

10.5. Differentiate senses

Experiences in space differentiate sights, sounds, touches, smells, tastes, and pains at the same location.

11. Theories about experiences

Experiences appear to depend on minds, because dreams and imagination are visual. Some theories are:

Experience realism: Experiences are physical things or properties.

Experience primitivism: Experiences are simple physical qualitative properties.

Experience physicalism: Experiences have complex physical properties that cause appearance.

Experience objectivism: Experiences are physical things that perform functions that make observers see them.

Experience disjunctivism: The same subjective experience can have different causes. Different experiences can have the same cause.

Experience dispositionalism: Experiences are physical things, with secondary qualities, that dispose normal observers, in standard conditions, to see them.

Experience relationalism: Objects are physical things that, in a defined context/situation, have relational properties/qualities/capacities that make them appear to have experience to observers with a defined phenomenology. Vision develops experience categories that aid object classification and recognition.

Experience enactivism: Observer actions change perspective.

Experience adverbialism: Experiences are properties of perceptual processes which depend on objects, vision physiology, and mental state.

Experience eliminativism: Experiences are not physical things or properties.

Experience subjectivism: Experiences are psychological things or subjective properties, either of experiences or in experience's qualities. Psychological things or properties could be qualia, sensa, sense-data, experiential properties, non-intentional content, and/or intentional/representational content. Subjective properties could (monism), or could not (dualism), be identical to, or reducible to, physical properties. (Electrochemical patterns are stimuli for such psychological things or properties.)

Experience projectivism: Experiences are subjective properties projected to surfaces in three-dimensional space.

Color and Brightness Experiences

Color and brightness experiences have properties, parameters, and relations.

1. Location-experience properties

From viewpoint at coordinate origin, color and brightness experiences are at distances in directions of space, seen as the visual field and making a sheaf.

Space locations are continuous, open-boundary, macroscopic (not microscopic), unitary geometric surfaces/regions at physical distances and physical directions from viewpoint. Surfaces/regions blend with adjacent surfaces/regions.

Color and brightness define surfaces/objects and their properties. For example, light blue color defines bluebirds.

2. Color-experience properties

Color and brightness experiences have properties.

Color brightness ranges from dim to bright.

Color hue ranges from reds to oranges to yellows to greens to blues to violets to magentas. (Pure orange appears to have no red or yellow. Pure chartreuse looks like mixed yellow and green. Pure cyan looks like light blue. Pure violet appears to have no red or blue. Pure magenta looks like mixed red and blue.)

Color saturation ranges from black, white, and grays to only hue. No-hue colors range from black to gray to white. (Pure gray appears to have no black or white.)

Color lightness ranges from dark to light brightness. Dark hues have deep chroma, strength, denseness, heaviness, solidness, compactness, and opaqueness. Light hues have shallow chroma, weakness, sparseness, unheaviness, fluidness, spreadness, and transparency.

Color strength ranges from weak to strong coverage. Strong hues have darkness, deep chroma, denseness, heaviness, solidness, compactness, and opaqueness. Weak hues have lightness, shallow chroma, sparseness, unheaviness, fluidness, spreadness, and transparency.

Color temperature ranges from cool to warm. Warm hues have vividness, liveliness, boldness, activity, salience, approaching, expanding, larger size, and rougher texture. Cool hues have dullness, quietness, stillness, background, receding, contracting, smaller size, and smoother texture.

Color chroma ranges from grayness to colorfulness. Deep chroma has vividness and activity. Shallow chroma has dullness and stillness.

For hue categories, for example:

Blue has low brightness, darkness/strength/wide chroma, and coolness/dullness.

Yellow has very high brightness, lightness/weakness/narrow chroma, and warmth/vividness.

Red has medium brightness, darkness/strength/wide chroma, and warmth/vividness.

Green has high brightness, lightness/weakness/narrow chroma, and coolness/dullness.

For no-hue color categories:

Black has darkness, high color strength, and high depth.

White has lightness, low color strength, and low depth.

Both black and white have no chroma range; not-applicable vividness and vibrancy; and neutral color temperature, attention, and salience.

Note: Taking three levels for color temperature/vividness and two levels for color lightness/strength/depth makes six categories: black, white, blue, yellow, red, and green.
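
The combinatorics in the note can be enumerated directly (the level-to-category mapping is assembled from the property lists above and is an assumption):

```python
from itertools import product

# Three temperature/vividness levels crossed with two
# lightness/strength/depth levels give six combinations.
temperatures = ["warm", "cool", "neutral"]
lightnesses = ["dark", "light"]

category = {
    ("warm", "dark"): "red",      # medium brightness, dark/strong, warm/vivid
    ("warm", "light"): "yellow",  # very high brightness, light/weak, warm/vivid
    ("cool", "dark"): "blue",     # low brightness, dark/strong, cool/dull
    ("cool", "light"): "green",   # high brightness, light/weak, cool/dull
    ("neutral", "dark"): "black",
    ("neutral", "light"): "white",
}

combinations = list(product(temperatures, lightnesses))
```

Each of the six combinations picks out exactly one of the categories black, white, blue, yellow, red, and green.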

3. Color-experience parameters

Color and brightness experiences have parameters.

Colors depend on brightness, light-source emissions/radiations, light paths/illuminations, object reflections/radiations, matte/glossy surfaces, surface textures, physical states, shadows, contrasts, transparency/opaqueness, and occlusions.

Different surfaces/objects make the same wavelengths look different. For example, the same color blue looks different for water, sky, and feathers.

People can see color and brightness without awareness.

People can see color and brightness without definite color (such as at flicker threshold or in fast motion).

People can see color and brightness with contradictory colors.

People can see color and brightness without object shape or motion (as in movement illusions).

4. Color-experience relations

Color and brightness experiences are similar to, differ from, are opposite to, and exclude other colors, in a unified system of color relations. Hues, black, white, and grays have relations among themselves (internal relatedness), with similarities and differences.

Colors are a complete and consistent system. For example, the hues yellow, blue, red, and green have colorfulness of different types. They each have no black or white and have none of other three hues. They are equally distant from two other hues:

Yellow is halfway between red and green.

Blue is halfway between red and green.

Red is halfway between yellow and blue.

Green is halfway between yellow and blue.

White has no black and no net hue (no net blue, yellow, red, or green). Black has no white and no hue (no blue, yellow, red, or green).

(Perhaps, colors are a language, with vocabulary, grammar about relations and properties, contexts, associations, and meanings.)

4.1. Color property similarities

Black and white have no hue, no chroma range, neutral color temperature, and no vividness.

Blue and red are relatively dark, have high strength, and have wide chroma range.

Green and yellow are relatively light, have low strength, and have narrow chroma range.

Blue and green have cool color temperature and dullness.

Red and yellow have warm color temperature and vividness.

Blue, red, and black are dark and strong.

Green, yellow, and white are light and weak.

4.2. Color property differences

Black and white have opposite lightness and strength.

Blue and red have opposite color temperature and vividness.

Green and yellow have opposite color temperature and vividness.

Blue and green have opposite lightness, chroma range, and strength.

Red and yellow have opposite lightness, chroma range, and strength.

Blue and yellow have opposite lightness, chroma range, strength, temperature, and vividness.

Red and green have opposite lightness, chroma range, strength, temperature, and vividness.

Blue and black have opposing chroma range, temperature, and vividness.

Red and black have opposing chroma range, temperature, and vividness.

Green and black have different lightness, chroma range, strength, temperature, and vividness.

Yellow and black have different lightness, chroma range, strength, temperature, and vividness.

Blue and white have different lightness, chroma range, strength, temperature, and vividness.

Red and white have different lightness, chroma range, strength, temperature, and vividness.

Green and white have different chroma range, temperature, and vividness.

Yellow and white have different chroma range, temperature, and vividness.

4.3. Colors maximize contrasts

Color categories are for distinguishing and categorization.

Blue and yellow are at extremes and have maximum contrast. Red and green are at extremes and have maximum contrast. Black and white are at extremes and have maximum brightness/lightness contrast. Yellow, blue, red, and green have maximum combined-lightness/temperature contrasts and so maximum hue contrast. Colors have maximum brightness, hue, saturation, complementarity/simultaneity, lightness, temperature, vividness, and strength contrasts, under all different illuminations, viewing angles, distances, and eye movements. Contrasts make colors easier to distinguish and improve calculation of surface depth and figure/ground. Contrast between two points/surfaces is better with discrete black and white, not just continuous brightness difference, and with discrete hues, not just continuous colors.

Distinguishing grounds is better with discrete hue/no-hue, not just continuous variables. Backgrounds include sky, water, leaves, and grass. Foregrounds include sun, fruit, blood, and fire. Neutral grounds have no hue and include rain, night, most rocks, and dark things.

Color categories exhaust all possible combinations of color brightness, hue, color saturation, color lightness, color strength, color temperature, and chroma.

5. Dark/dim, light/bright, black, and white

Experienced/perceived light from spots is relatively light/bright or dark/dim. Dimness is the inverse of brightness. Darkness is the inverse of lightness. Because they are both about color with no hue, darkness and lightness do not cancel each other but mix to make a range from black to white.

Hues have decreasing lightness, from yellow, through green, through red, down to blue.

White emits (is a source of) and/or reflects all hues. Black absorbs (is a sink for) all hues.

Note: Vision realizes that darkness and lightness total gray over a whole region, so light is conserved.

5.1. Black vs. dark

For higher vision, black is darkest, blue is dark, red is medium dark, green has small darkness, yellow has slight darkness, and white has no darkness. Blue, red, green, yellow, and white have no black (and hue mixtures have no black), so darkness and black can be different.

Black is a color with little lightness. Note that if people saw only luminance, dim/dark/black surfaces would appear to be barely there and look like empty space. People see colors that come from surfaces.

Black and white cannot be reversed. A night-sky spot with no stars is very dark and is black. A very dim surface is very dark and is black. Very dark color is black.

Because color mixes black, white, and/or blue and red, blue and green, yellow and red, or yellow and green, the quantity of black is the quantity of missing hue and white. If hue and white total less than 100%, black makes up the missing percent. If net blue is 50%, other net hues are 0%, and white is 0%, color has 50% black. Middle gray has 50% black and 50% white. Dark red is red 50% and black 50%. Dark orange is red 25%, yellow 25% (red 25% and green 25%), and black 50%.

5.2. White vs. light

For higher vision, black has no lightness, blue has low lightness, red has medium lightness, green has high lightness, yellow has very high lightness, and white has highest lightness. Black, blue, red, green, and yellow have no white, so white and lightness can be different.

White has no darkness. Note that people do not see luminance, only colors that come from surfaces.

Black and white cannot be reversed. The brightest surface is very light and is white. Mixing all hues maximally and equally makes maximum lightness, with no net hue, and is white. Mixing complementary colors maximally and equally makes maximum lightness, with no net hue, and is white. Highest-lightness color is white.

Any part of the color that is all hues together is white. If net blue is 50%, other net hues are 0%, and black is 0%, color has 50% white. Light gray has 25% black and 75% white. Light red is red 50% and white 50%. Light orange is red 25%, yellow 25% (red 25% and green 25%), and white 50%.
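The percentage arithmetic in sections 5.1 and 5.2 can be sketched as follows. This is a minimal model, assuming (as the text does) that net hue, white, and black always total 100%.

```python
def black_percent(net_hue: float, white: float) -> float:
    """Black makes up whatever percent net hue and white leave missing."""
    return 100.0 - net_hue - white

def white_percent(net_hue: float, black: float) -> float:
    """White makes up whatever percent net hue and black leave missing."""
    return 100.0 - net_hue - black

# Examples from the text:
assert black_percent(net_hue=50, white=0) == 50   # 50% net blue -> 50% black
assert black_percent(net_hue=0, white=50) == 50   # middle gray: 50% black, 50% white
assert white_percent(net_hue=50, black=0) == 50   # light red: red 50%, white 50%
assert white_percent(net_hue=0, black=25) == 75   # light gray: 25% black, 75% white
```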

6. Hues

Net hue is blue and red, blue and green, yellow and red, or yellow and green. It is never blue and yellow or red and green, which cannot mix because they are opposites about different things. Equal blue and yellow, or red and green, cancel each other to make no hue.

6.1. Black, blue, red, green, yellow, and white have increasing brightness change with distance

Black brightness changes little with distance: for example, if distance doubles, black brightness changes from 4 to 2. White brightness changes most with distance: for example, if distance doubles, white brightness changes from 90 to 45. Hue brightnesses change over the range between black and white: yellow most, green next, red next, and blue least.
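The numbers above imply that, in this sketch, brightness scales inversely with distance (doubling distance halves brightness). The starting values 4 and 90 are the text's examples; the inverse-distance rule is an assumption read off from them.

```python
def brightness_at(b_near: float, d_near: float, d_far: float) -> float:
    # Assumed rule from the text's examples: brightness scales as 1/distance,
    # so doubling distance halves brightness.
    return b_near * d_near / d_far

assert brightness_at(4, 1, 2) == 2    # black: 4 -> 2 when distance doubles
assert brightness_at(90, 1, 2) == 45  # white: 90 -> 45 when distance doubles
```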

7. Colors are results not causes

Experienced/perceived colors are results not causes. For example, colors do not add as colors, but add as vision processes. Blue and yellow colors do not add to white, but only cause vision processes that make white. Similarly, red and green colors do not add to yellow. Black color does not make colors blacker. White color does not make colors whiter.

8. Colors must have their appearances

Colors must have their appearances, brightnesses, color lightnesses, color strengths, color temperatures, and chromas.

The highest level of darkness/strength, with no sources of hue/chroma/temperature, is black. The highest level of lightness/weakness, with all sources of hue/chroma/temperature and so no net hue/chroma/temperature, is white. Dark/strong/warm/vivid leads to red. Light/weak/cool/dull leads to green. Dark/strong/cool/dull leads to blue. Light/weak/warm/vivid leads to yellow. Color mixtures vector-sum color-property values. For example, no-net-hue light/weak/neutral temperature/not dull or vivid leads to white. Note that magenta, orange, cyan, and chartreuse are new colors: exact magenta is neither red nor blue, exact orange is neither red nor yellow, exact cyan is neither blue nor green, and exact chartreuse is neither yellow nor green.

9. Colors have no opposites or symmetries and cannot interchange

Colors have no opposites. Blue, red, green, yellow, black, and white have no exact opposites. Black and white are not opposites, because black approaches a darkness limit, but white can become blinding lightness, so high darkness must be black. Blue and yellow are not opposites, because blue lightness and yellow lightness are not equally far from half-lightness. Red and green are not opposites, because red lightness and green lightness are not equally far from half-lightness. Mixed colors have no exact opposites.

Colors have no symmetries. The black-white, yellow-blue, and red-green axes are not symmetric around the origin. The origin can be different.

Colors must have the colors they do and cannot interchange, because they differ in color properties, contrast properties, and color relations.

9.1. Exchanging

Assume that vision has three opponency pairs: white-black/black-white (dark-light), yellow-blue/blue-yellow, and red-green/green-red.

Blue and green cannot exchange, because then there is no magenta and no chartreuse.

Blue and red cannot exchange, because then there is no orange and no cyan.

Yellow and green cannot exchange, because then there is no orange and no cyan.

Yellow and red cannot exchange, because then there is no magenta and no chartreuse.

Exchanging blue and yellow makes the same opponency and opponency relations, but then there is no bright yellow and blue is too light (and their saturability is incorrect).

Exchanging red and green makes the same opponency and opponency relations, but then there is no bright green and red is too light (and their saturability is incorrect).

Exchanging two pairs at once (blue with green and red with yellow, blue with red and green with yellow, or blue with yellow and red with green) causes the same problems as above.

9.2. Modifying

Changing the properties of blue, red, green, yellow, black, and white (so, for example, blue is bright) is not possible, because the properties form a complete and consistent system.

10. Color systems

Possible color systems are harmonic ratios, mathematical group, symmetry group, multivectors, and visual phonemes.

10.1. Color harmonic ratios

Musical tones have frequencies in harmonic ratios. Light frequency equals light speed (2.998 * 10^8 m/s) divided by wavelength, so frequency in terahertz (10^12 Hz) is approximately 299792 divided by wavelength in nanometers. Perhaps primary, secondary, and tertiary colors have harmonic ratios that fit into an octave. Table 1 shows example frequencies, ratios, and wavelengths.

Notes: Azure has wavelength 461 nm to 481 nm. Spring green has wavelength 500 nm to 502 nm. Maroon has wavelength 700 nm to 740 nm.

Perhaps color categories come from the twelve-tone scale: C, C#, D, D#, E, F, F#, G, G#, A, A#, B, C.

Perhaps color categories come from the pentatonic scale: C, D, E, G, A, C.

Perhaps color categories come from the tones in an octave: C, D, E, F, G, A, B, C.
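The frequency formula in this section can be checked numerically. A minimal sketch, using the standard light speed (c ≈ 2.998 * 10^8 m/s); the 380-760 nm bounds are assumed nominal visible-range limits, chosen to show that the visible range spans roughly one octave (a factor-of-two frequency ratio):

```python
C_NM_THZ = 299792.458  # light speed expressed in nm * THz (c ~ 2.998e8 m/s)

def freq_thz(wavelength_nm: float) -> float:
    """Frequency in THz from wavelength in nm: f = c / wavelength."""
    return C_NM_THZ / wavelength_nm

# Nominal visible range: 380 nm (violet) to 760 nm (red).
violet, red = freq_thz(380), freq_thz(760)
print(round(violet), "THz /", round(red), "THz =", round(violet / red, 2))
```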

10.2. Colors as a mathematical group

Perhaps colors form a mathematical group.

The color mathematical group has colors as elements. The operation is color addition, and adding two colors makes a color (by wavelength-space vector addition, following Grassmann's laws).

Color addition is commutative, because adding two colors in either order results in the same color.

Color addition is associative, because adding three colors in sequence in either of the two ways results in the same color.

Colors have additive inverses, because every hue has a complementary color, and black, grays, and white have white, grays, and black, respectively.

Colors have an identity element, because adding black, white, or gray to a color makes the same hue and adding black, white, or gray to black, white, or gray makes black, white, or gray.

Adding color to itself makes the same color.

Multiplying color by a scalar represents decreasing or increasing intensity. Colors have the distributive property for scalar multiplication and vector addition, because increasing intensity does not change hue and increasing the intensity of a mixture of two colors makes the same color as increasing the intensity of each mixture color.

Colors do not multiply (or divide).

The color mathematical group applies to colors from light sources and from pigment reflections.
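The group axioms above can be sketched with colors as 3-component opponent vectors under componentwise addition. This is a toy model under stated assumptions: the coordinate names (yellow-blue, red-green, white-black) and example values are illustrative, not the text's definitions.

```python
# Toy model: a color as a 3-tuple of opponent coordinates
# (yellow-blue, red-green, white-black); addition is componentwise.
def add(c1, c2):
    return tuple(a + b for a, b in zip(c1, c2))

def inverse(c):
    # Complementary color: negate every opponent coordinate.
    return tuple(-a for a in c)

IDENTITY = (0, 0, 0)  # neutral gray plays the identity role

blue = (-1, 0, 0)
yellow = (1, 0, 0)

# Commutative: adding two colors in either order gives the same color.
assert add(blue, yellow) == add(yellow, blue)
# Associative: grouping does not matter.
assert add(add(blue, yellow), blue) == add(blue, add(yellow, blue))
# Additive inverse: every hue cancels with its complement to neutral.
assert add(blue, inverse(blue)) == IDENTITY
# Identity: adding neutral gray leaves the color unchanged.
assert add(blue, IDENTITY) == blue
```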

10.3. Colors as a symmetry group

Perhaps colors form a symmetry group.

Colors have three particles, which have value +2/3, +1/3, -1/3, or -2/3.

White has color charge 3*(+2/3) = +2. Yellow has color charge 2*(+2/3) + 1*(-1/3) = +1. Green has color charge 1*(+2/3) + 2*(-1/3) = 0. Red has color charge 1*(-2/3) + 2*(+1/3) = 0. Blue has color charge 2*(-2/3) + 1*(+1/3) = -1. Black has color charge 3*(-2/3) = -2.

2*(+2/3) + 1*(+1/3) = +5/3, 1*(+2/3) + 2*(+1/3) = +4/3, 1*(-2/3) + 2*(-1/3) = -4/3, and 2*(-2/3) + 1*(-1/3) = -5/3 are not allowed.

Alternatively, yellow has color charge 3*(+2/3) = +2. Red has color charge 2*(+2/3) + 1*(-1/3) = +1. White has color charge 1*(+2/3) + 2*(-1/3) = 0. Black has color charge 1*(-2/3) + 2*(+1/3) = 0. Green has color charge 2*(-2/3) + 1*(+1/3) = -1. Blue has color charge 3*(-2/3) = -2.
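The color-charge sums above can be verified with exact rational arithmetic. A minimal check of the first assignment (the function name is illustrative):

```python
from fractions import Fraction

def charge(*particles):
    """Sum three particle values (each +-2/3 or +-1/3) into a color charge."""
    return sum(Fraction(p) for p in particles)

# Charges from the text's first assignment.
assert charge("2/3", "2/3", "2/3") == 2        # white
assert charge("2/3", "2/3", "-1/3") == 1       # yellow
assert charge("2/3", "-1/3", "-1/3") == 0      # green
assert charge("-2/3", "1/3", "1/3") == 0       # red
assert charge("-2/3", "-2/3", "1/3") == -1     # blue
assert charge("-2/3", "-2/3", "-2/3") == -2    # black
# Combinations the text disallows give non-integer charges, e.g. +5/3:
assert charge("2/3", "2/3", "1/3") == Fraction(5, 3)
```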

10.4. Colors as multivectors

Perhaps colors are multivectors.

Binomials have the form (a + b)^n, which equals the sum of binomial terms (n! / (r! * (n - r)!)) * a^(n - r) * b^r, where 0 <= r <= n. Each binomial coefficient indicates the number of possible combinations of n things taken r at a time. If n = 3, the four coefficients are 1, 3, 3, and 1:

The number of possible combinations of 3 things taken 0 at a time is 1.

The number of possible combinations of 3 things taken 1 at a time is 3.

The number of possible combinations of 3 things taken 2 at a time is 3.

The number of possible combinations of 3 things taken 3 at a time is 1.

Geometric algebra GA(3) has dimension 2^3 = 8 and so 8 bases. Because both are about the number of possible combinations of 3 things taken r at a time, the binomial coefficients and the geometric algebra GA(3) coefficients are the same:

The one 0-vector is for the scalar basis. Black is scalar color.

The three 1-vectors are for the three space dimensions. Blue, red, and green are vector colors.

The three 2-vectors are for the three space planes. Yellow, cyan, and magenta are bivector colors.

The one 3-vector is for the single possible trivector. White is trivector color.
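The correspondence above can be checked directly: the grade-r basis count in GA(3) equals the binomial coefficient C(3, r), and the counts total 2^3 = 8. The color assignment in the dictionary follows the text's grade-to-color mapping.

```python
from math import comb

# Grade-r basis count in GA(3) equals the binomial coefficient C(3, r).
coefficients = [comb(3, r) for r in range(4)]
assert coefficients == [1, 3, 3, 1]
assert sum(coefficients) == 2 ** 3  # GA(3) has dimension 8

# The text's assignment: one scalar (black), three vectors (blue, red,
# green), three bivectors (yellow, cyan, magenta), one trivector (white).
grades = {0: ["black"], 1: ["blue", "red", "green"],
          2: ["yellow", "cyan", "magenta"], 3: ["white"]}
for r, colors in grades.items():
    assert len(colors) == comb(3, r)
```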

10.5. Visual phonemes

In phonetics, nine phonological distinctive features concatenate into sixty-seven phonemes (which concatenate into syllables, which concatenate into words). Phonological distinctive features are about consonant articulations and vowel acoustic-wave amplitudes, frequencies, frequency-intensity distribution, and amplitude-change rates, as they occur over one-dimensional time.

In vision, unique combinations, of lightness, chroma, strength, and temperature, are a distinctive feature that makes the colors and their properties. Color lightness/strength has two main states, and color temperature/vividness has three main states. Two times three equals six directions (of the three opponency coordinates), making six colors: black, white, yellow, blue, red, and green. The colors are states of the same thing and are like visual phonemes.

10.6. Visual bosons

Perhaps color (and sound, touch, smell, taste, and pain) experiences are like bosons, in that an unlimited number can be at one location simultaneously.

10.7. Light rays

Perhaps light is light rays. Light rays are brightness vectors with two orthogonal vectors that represent hue coordinates. The resultant of the two orthogonal components is at a rotation angle that represents hue and has a magnitude that represents saturation.
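The angle-and-magnitude construction above can be sketched with standard plane trigonometry. A minimal sketch, assuming two orthogonal hue coordinates (the coordinate names and the degree convention are illustrative):

```python
from math import atan2, hypot, degrees

def hue_and_saturation(x: float, y: float):
    """From two orthogonal hue coordinates (e.g. yellow-blue and red-green),
    the resultant's rotation angle gives hue and its magnitude gives
    saturation."""
    hue_angle = degrees(atan2(y, x)) % 360
    saturation = hypot(x, y)
    return hue_angle, saturation

# Equal parts of both coordinates: resultant at 45 degrees, magnitude sqrt(2).
hue, sat = hue_and_saturation(1.0, 1.0)
assert abs(hue - 45.0) < 1e-9
assert abs(sat - 2 ** 0.5) < 1e-9
```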

Perhaps light rays use multiple interacting layers/tracks (like multitrack recording and playback).

10.8. Flow

Perhaps white is like maximum streamline flow and black is like maximum turbulence.

Perhaps white is like maximum light flow and black is like dark flow, with dark the opposite experience of light.

Consciousness and Philosophy

Consciousness increases gradually as brain develops structures and functions [Aoki and Siekevitz, 1988] [Borrell and Callaway, 2002] [Carey, 1987] [Schaefer-Simmern, 1948].

Consciousness theories try to describe what consciousness is (features), why we have it (functions, roles, values), and how it came to be (causes, bases, antecedents) [Van Gulick, 2016].

Consciousness could be about processes, properties, states, structures, or substances.

The foundation of consciousness is spatiality. Consciousness has:

Accessibility: Space provides a workspace for access.

Intentionality: Space provides representations for intentions.

Subjectivity: Space provides a coordinate-origin viewpoint for subject.

Reflexivity: Space provides a viewpoint representation for reflexive thought.

Narrative: Space provides scenes for narrative thought.

Consciousness has many connections, with many spatial and other relations, at different hierarchy levels, that make an integrated symbol system and so make meaning possible.

Consciousness can make experience by itself, with no input, and so can have memory, imagination, and dreams.

1. Studying consciousness

Consciousness studies use first-, second-, and third-person methods [Flanagan, 1992].

Introspection can reveal the nature of consciousness [Helmholtz, 1897] [James, 1890] [Titchener, 1901] [Wundt, 1897]. First-person self-observation, second-person interactions, and third-person observation require comparison [Searle, 1992] [Siewert, 1998] [Varela, 1996].

Consciousness has biological, personal, and social aspects (phenomenology) [Heidegger, 1927] [Husserl, 1913] [Husserl, 1929] [Merleau-Ponty, 1945].

Memory, perception, and language all process information (cognitive psychology) [Gardiner, 1985] [Neisser, 1965].

Philosophy, computer science, psychology, linguistics, and biology contribute to consciousness studies [Baars, 1988] [Chalmers, 1996] [Crick, 1994] [Dennett, 1991] [Libet, 1982] [Libet, 1985] [Lycan, 1987] [Lycan, 1996] [Penrose, 1989] [Penrose, 1994] [Wegner, 2002].

Neural damage and abnormal psychology indicate features of consciousness [Farah, 1995] [Sacks, 1985] [Shallice, 1988].

2. Nature and features of consciousness

People can have different levels of arousal (alertness): awake, dreaming, hypnosis, stupor, non-dreaming sleep, minimally conscious state, vegetative state, and coma.

People can have different levels of knowing that objects (including self), thoughts, or feelings are currently present or imagined (awareness).

People can have consciousness (creature consciousness) [Rosenthal, 1986].

Consciousness is a physical-world object (the for-others) that relates to brain and body and that other people and the for-itself can perceive [Sartre, 1943].

Consciousness requires space, time, and causation [Kant, 1787].

Consciousness is experiences.

2.1. Conscious states

Mental states and processes can be conscious (state consciousness) [Rosenthal, 1986]. For example, people can have pains, sights, sounds, touches, smells, tastes, emotions, moods, feelings, and desires [Siewert, 1998].

Intentional conscious states

Conscious states represent objects and have intentionality [Carruthers, 2000], as do some non-conscious states. Mental states model reality. People can be the subjects of conscious states.

Perhaps conscious states are representations [Carruthers, 2000] [Dretske, 1995] [Harman, 1990] [Lycan, 1996] [Tye, 1995] [Tye, 2000]. However, representations may have no qualitative experiences [Block, 1996] [Peacocke, 1983] [Tye, 2003].

Perhaps consciousness requires interpretation of representations and testing against alternatives, with some rising in generality, as in the Multiple Drafts Model [Dennett, 1991]. Perhaps left hemisphere (interpreter module) integrates experiences and actions [Gazzaniga, 2011].

Attended Intermediate-level Representation [Prinz, 2012] represents, and attends to, colors, tones, and touches. Higher levels are judgments. Lower levels cannot have attention or qualitative experiences.

Reflexive conscious states

People can be aware that they are in a mental state (reflexive consciousness) (reflexivity) [Rosenthal, 1986] [Rosenthal, 1996]. People then have a mental state about a mental state.

Perhaps conscious states require a higher level that can have self-awareness and reflexive intentionality (higher-order theories). Perhaps conscious states require higher-order thinking (higher-order thought) [Carruthers, 2000] [Gennaro, 1995] [Gennaro, 2004] [Rosenthal, 1986] [Rosenthal, 1993]. Perhaps conscious states require mental perception monitoring (higher-order perception) [Armstrong, 1981] [Lycan, 1987] [Lycan, 1996] [Lycan, 2004] [Shoemaker, 1975] [Van Gulick, 2000]. However, it remains to explain why more thinking or perceiving makes consciousness [Byrne, 1997] [Dretske, 1995] [Lycan, 1997] [Rosenthal, 1997].

Perhaps conscious states require reflexivity in themselves, with intentionality toward both object and state [Brentano, 1874] [Gennaro, 2004] [Gennaro, 2012] [Kriegel, 2009] [Kriegel and Williford, 2006].

Perhaps conscious states require whole-brain states (Higher-Order Global State models) [Van Gulick, 2004] [Van Gulick, 2006].

Mental states can have the idea of selves as agents.

Accessible conscious states

Mental states can have interactions with other mental states, and more, or specific kinds of, interactions make more consciousness (access consciousness) (accessibility) [Block, 1995]. Consciousness is about functionality.

Consciousness has interconnected contents [Van Gulick, 2000]. Consciousness seems to organize and create itself as a system [Varela and Maturana, 1980]. Mental states arise in brain, and some fit into a story and become conscious (narrative consciousness) [Dennett, 1991] [Dennett, 1992]. There is a stream of consciousness [James, 1890].

Phenomenal conscious states

Mental states (phenomenal states) have space, time, causality, intentionality, body, self, and physical-world concepts (phenomenal consciousness) [Husserl, 1913] [Husserl, 1929] [Siewert, 1998].

2.2. Self

People are conscious selves/subjects [Descartes, 1644] [Searle, 1992]. Selves are observers and have a viewpoint on objects [Wittgenstein, 1921].

The physical world has space, time, and causation structure that defines the viewpoint and so self [Husserl, 1929] [Kant, 1787].

Consciousness is conscious of itself (the for-itself), which is non-intentional [Sartre, 1943], and such consciousness actively knows that there are intentions and the in-itself.

Selves are abstract concepts built by mental processes combining functional elements [Mackay, 1987].

Selves have meaningful experiences [Baars et al., 2003] [Baumeister, 1998] [Kessel et al., 1992].

Selves are subjects of experiences [Zahavi, 2005].

Self is identity (not a property, and not just a body image), and people know self and not-self [Leary and Tangney, 2003].

As observer, selves are agents for reading/getting and writing/putting observations. To survive and reproduce, selves perceive prey, predator, self, same-sex species member, or opposite-sex species member and then act, controlling action inhibition or permission by using the will, so selves are agents.

Selves have a continuous history, which can persist through amnesia, sensory deprivation, minimal information, body-perception loss, distorted perceptions, and hallucinations.

Meaning and consciousness combine to make self. Consciousness refers experience to self.

Mental processes combine functional elements to build abstract concepts and perceptions. Perceiving, thinking, reasoning, believing, imagining, dreaming, having emotions, having a conscience, being aware, and having a self all develop from experiences.

Mental states, structures, and processes (mind) are about seeing, hearing, tasting, smelling, and feeling temperatures and touches (experiences). For example, people have mental images of the environment around them (cognitive map) [Järvilehto, 2000] and of their physical dimensions (body image). People need cognitive maps and body images to perform conscious actions.

Body-movement, sense-quality, and mental-state covariance define subject and location, distinguishing it from environment, other organisms, and other minds.

2.3. Thought

People can think, and thought is everything that is conscious [Descartes, 1640].

Thinking requires sensibility [Locke, 1688]. Selves can think.

Thoughts can have levels of consciousness and can be unconscious [Leibniz, 1686].

Conscious thoughts have associations [Hume, 1739] [Mill, 1829].

Thought associations can make new thoughts [Mill, 1865].

Thoughts are about the objects, not about the process [Searle, 1992] [Van Gulick, 1992].

Thoughts and beliefs may have qualitative experiences [Pitt, 2004] [Seigel, 2010] [Strawson, 2003], or may not [Prinz, 2012] [Tye, 2012].

2.4. Functions

What function(s) does consciousness have? Does it cause anything [Chalmers, 1996] [Huxley, 1874] [Jackson, 1982] [Velmans, 1991]?

Perhaps consciousness has a moral function in relation to pleasure and pain and in relation to responsibility [Singer, 1975].

Perhaps consciousness allows control and selection of thoughts and actions [Anderson, 1983] [Armstrong, 1981] [Penfield, 1975], useful in new situations. Repeated, or emotional, conscious processing may change over to unconscious processing [Schneider and Shiffrin, 1977].

Perhaps consciousness helps understand others' mental states (perceptions, thoughts, beliefs, desires, motivations, and goals) and so aids social interaction and communication [Dennett, 1978] [Dennett, 1992] [Humphrey, 1982] [Ryle, 1949]. Social interaction also aids consciousness.

Perhaps consciousness integrates feature, object, space, and time information from all experiences, perceptions, and memories [Campbell, 1997] [Gallistel, 1990] [Husserl, 1913] [Kant, 1787] [Lorenz, 1977]. Consciousness provides a symbol system and so meaning.

Perhaps consciousness provides overall information to many other brain processes [Baars, 1988] [Prinz, 2012] [Tononi, 2008], not just functional modules [Fodor, 1983].

Perhaps consciousness relates to choice and free will [Dennett, 1984] [Dennett, 2003] [Hasker, 1999] [van Inwagen, 1983] [Wegner, 2002]. Consciousness presents the information upon which to base the choice and presents the action options.

Pleasure also has attraction, and pain also has repulsion [Humphreys, 1992] [Nelkin, 1989] [Rosenthal, 1991]. Such consciousness would then be a cause.

3. Brains and consciousness

How does consciousness come from physical things [Block and Stalnaker, 1999] [Chalmers and Jackson, 2001] [Van Gulick, 1995]? In physics, lower-level structures and their properties and processes can make all higher-level structures and their properties and processes (reduction) [Armstrong, 1968] [Lewis, 1972]. All physical functions have explanations in terms of matter and its laws.

Currently, there is no physical explanation of consciousness [Levine, 1983] [Levine, 1993] [Levine, 2001] [Papineau, 1995] [Papineau, 2002] [Van Gulick, 2003]:

Perhaps people cannot know such an explanation [McGinn, 1991] [McGinn, 1995].

Perhaps consciousness has no physical basis [Block, 1980] [Campbell, 1970] [Chalmers, 1996] [Foster, 1989] [Foster, 1996] [Kirk, 1970] [Kirk, 1974] [Robinson, 1982].

Perhaps there is no reductive explanation, only a non-reductive one [Fodor, 1974] [Kim, 1980] [Kim, 1989] [Putnam and Oppenheim, 1958] [Putnam, 1975] [Van Gulick, 1993].

Access, phenomenal, qualitative, narrative, and reflexive consciousness seemingly arise from neural properties and activities. How can brains cause conscious experiences (mind-body problem) [Levine, 1983] [McGinn, 1991]?

3.1. No consciousness, experience, or self

Perhaps consciousness does not exist (eliminativism) [Churchland, 1983] [Wilkes, 1984] [Wilkes, 1988].

Perhaps consciousness has no qualitative experiences [Carruthers, 2000] [Dennett, 1990] [Dennett and Kinsbourne, 1992].

Perhaps consciousness has no self [Dennett, 1992].

3.2. Idealism

Consciousness cannot come from matter or mechanics [Leibniz, 1714].

Perhaps causation is fundamental, and consciousness is an extension of it [Rosenberg, 2004].

Perhaps consciousness depends on a deeper non-physical/non-mental reality (neutral monism) [Russell, 1927] [Strawson, 1994].

Perhaps consciousness relates to the level of information integration (integrated information theory) [Koch, 2012] [Tononi, 2008].

Perhaps colors are mental properties, events, or processes (color subjectivism). Perhaps colors are mental properties of mental objects (sense-datum). Perhaps, colors are perceiver mental processes or events (adverbialism).

3.3. Dualism

Perhaps consciousness has aspects that are non-physical (dualism) [Eccles and Popper, 1977].

Perhaps reality has both physical and non-physical substances (substance dualism) [Descartes, 1644] [Foster, 1989] [Foster, 1996] [Swinburne, 1986]. Non-physical mind/self has consciousness.

Perhaps consciousness has properties that are not physical (property dualism). Such properties may have independent existence (supervenience) [Chalmers, 1996] or emerge from physical properties (emergence) [Hasker, 1999].

Perhaps all reality has mental aspects (panpsychism) [Nagel, 1979].

3.4. Physicalism

Dualism cannot be correct [Churchland, 1981] [Dennett and Kinsbourne, 1992].

Perhaps consciousness has physical parts or functions (physicalism). Perhaps consciousness realizes the physical (realization). Physical states and processes have or are functions in a system (functionalism) [Block, 1980]. Higher-level states and processes have their own principles that are not reducible to physical states and processes (non-reductive physicalism) [Boyd, 1980] [Putnam, 1975]. However, there remains the task of explaining how the higher-level states and processes have those principles [Jackson, 2004] [Kim, 1989] [Kim, 1998].

Perhaps color relates to physical objects, events, or properties (color realism) (color objectivism). Perhaps, color is identical to a physical property (color physicalism), such as surface spectral reflectance distribution (reflectance physicalism).

Perhaps colors depend on both subject and physical conditions (color relationism) (color relativism).

Perhaps humans perceive real properties that cause phenomenal color. Perhaps colors are only things that dispose mind to see color (color dispositionalism). Perhaps colors depend on action (color enactivism). Perhaps colors depend on natural selection requirements (color selectionism). Perhaps, colors depend on required functions (color functionalism). Perhaps colors represent physical properties (color representationalism). Perhaps experience has color content (color intentionalism), which provides information about surface color. Perhaps humans know colors, essentially, by experiencing them (doctrine of acquaintance), though they can also learn information about colors. Perhaps, colors are identical to mental properties that correspond to color categories (corresponding category constraint). However, there are really no normal observers or standard conditions.

Brain neural activities

Consciousness seemingly must have "neural correlates of consciousness" [Crick and Koch, 1990] [Metzinger, 2000].

Perhaps conscious states, properties, or processes are physical/brain states, properties, or processes (identity theory) (type-type identity theory) [Place, 1956] [Smart, 1959]. However, many different physical states, properties, or processes can be/represent/model the same mental state, property, and process [Fodor, 1974] [Hellman and Thompson, 1975]. Perhaps qualitative experiences are physical/brain neurochemical activities [Hill and McLaughlin, 1998] [Papineau, 1995] [Papineau, 2003], though there remains the task of explaining why [Levine, 2001].

Perhaps consciousness is about neural activities that unify:

Synchronous oscillation can cause binding [Crick and Koch, 1990] [Singer, 1999].

NMDA can make temporary neural assemblies [Flohr, 1995].

Thalamus can initiate and control cortical-activation patterns [Llinas, 2001].

Cortex can have circuits with feedback and feedforward [Edelman, 1989]. Local recurrent activity in sensory cortex may make qualitative experience [Block, 2007] [Lamme, 2006].

Neural modules can have fields, and brain can have overall fields [Kinsbourne, 1988].

Perhaps consciousness is circuits:

Left hemisphere interprets perceptions according to a story [Gazzaniga, 1988].

Frontal lobe and midbrain have circuits that initiate and control actions after predicting and comparing outcomes [Gray, 1995].

Frontal lobe and limbic system have processes related to emotion [Damasio, 1999]. Peri-aqueductal gray matter has processes related to emotion [Panksepp, 1998].

Brain quantum mechanics

Perhaps consciousness is quantum effects in neuron microtubules [Hameroff, 1998] [Penrose, 1989] [Penrose, 1994].

Perhaps consciousness occurs when brain states integrate into a single thing (as does a physical Bose-Einstein condensate) [Marshall, 1990] [Marshall and Zohar, 1990].

Perhaps consciousness occurs when separated brain parts interact (as in quantum entanglement) [Silberstein, 1998] [Silberstein, 2001].

Perhaps consciousness is necessary for physical reality, as an observer/measurement is necessary for wavefunction collapse [Stapp, 1993].

4. Perception, decision-making, language, and cognition

Perception, decision-making, language, and cognition have productivity, systematicity, inferential coherence, and context dependence.

4.1. Productivity

Perception, decision-making, language, and/or cognition can build complex things from simpler things, so they can make sums and products (productivity). They have competence to do so, but only do so for useful things (performance).

4.2. Systematicity of thoughts

Perception, decision-making, language, and/or cognition have analogous, opposite, similar, logical, empirical, spatial, temporal, and other relations among their simple and complex things/objects (systematicity of mental contents and systematicity of thoughts). The same relation can hold between different objects in the same category. Different objects in the same category can have similar, opposite, contrasting, and other relations. Different objects and relations have similarities, opposites, and contrasts.

4.3. Inferential coherence and systematicity of thinking

Perception, decision-making, language, and/or cognition have analogous, opposite, similar, and other relations among their processes, inferences, and computations (inferential coherence and systematicity of thinking). The same process, inference, or computation can use different objects in the same category. Different objects in the same category can have similar, opposite, contrasting, and other processes, inferences, and computations. Different processes, inferences, and computations have similarities, opposites, and contrasts.

4.4. Context dependency

Thoughts and thinking, and their properties and relations, are context-dependent.

Maps and images have context-dependency, as do neural networks.

5. Computational theory of mind

The computational theory of mind says that brains/minds are computers, with hardware, software, inputs, information processing, and outputs. Thinking is computation. Thoughts are representations.

Perception, decision-making, language, and cognition are likely to use a programming language and computations, with datatypes and connecting words for arithmetic, logic, character manipulations, spatial manipulations, and temporal manipulations.

Rational processes have implementations as physical causal processes.

Meaning/intentionality can come from the system of relations of all mental representations and/or causal connections among perceptions and representations.

Primitive mental representations may be innate or acquired and have syntactic, semantic, and computational roles and relations.

Computational theories of mind include the language of thought hypothesis, causal-syntactic theory of mental processes, representational theory of mind, and neural networks.

5.1. Language of thought

The language of thought hypothesis says that mind is a representation system. There are innate or acquired primitive representations that have meaning. A syntax can combine primitive (and/or compound) representations to make compound representations (combinatorial syntax). Compound representations have semantics that comes from the components and how they are used in a representation structure (compositional semantics).

5.2. Causal-syntactic theory of mental processes

The causal-syntactic theory of mental processes takes the language of thought further. It says that mental processes are causal processes that operate on the syntactic properties of representations, and syntax constrains compositional semantics.

5.3. Representational theory of mind

The representational theory of mind says that minds have relations between subjects and mental representations. Such relations (intentional states/propositional attitudes) include beliefs, desires, and repulsions. Mental representations can be mental objects or mental sentences.

5.4. Neural networks

Neural networks (connectionism) have no syntax or semantics, so their representations do not use language. Each node has no meaning and contributes no meaning. Only the whole vector can have meaning, and meaning does not depend on symbolic processing.

Neural networks must learn, and so have target (goal) outputs.

Neural networks are not like neuron assemblies. Neuron-assembly neurons have more connections to nearby neurons, whereas neural-network nodes have evenly distributed connections. Neuron assemblies do not have scalars or vectors.

6. Maps, images, and spatial features/descriptions

Maps and images have spatial features and descriptions, whereas computation has logical features and descriptions.

Cognitive maps in brains are processes and/or representations for animal navigation in three-dimensional space. Cognitive maps may be geometric and/or descriptional representations of distances, directions, angles, spatial relations, and/or space. Topological representations can be connectedness, adjacencies, or nesting/containment. Affine representations can be collinear, coplanar, intersecting, parallel, or curved. Metric representations can be distances and angles. Cognitive maps may be images or models of space. Cognitive maps may have productivity and systematicity.

7. Consciousness and computation

Access, phenomenal, qualitative, narrative, and reflexive consciousness appear to have computational aspects and be about cognition.

The Global Workspace [Baars, 1988] [Dehaene et al., 2000] is a main processor, with an information-processing limit, that disseminates information to many modules for output and behavior, including primary sensory cortex and attention centers in frontal and parietal lobes.

Integrated Information Theory [Koch, 2012] [Tononi, 2008] equates level of information integration with level of consciousness. Information integration is system-organization information relations.

Information Processing

Information Theory

Processes in physical media can input/retrieve, transfer/process/transform, and output/store information.

1. Information and data

Positions have a finite number of possible states (information). States differ from other states, so information is about state differences.

Two-state devices store one bit of binary information. Each of the two states has probability 1/2. Binary coding turns switches on and off to represent data and programs. It can represent Boolean false or true. Current, voltage, or wave amplitude can be 0 or 1. A switch can be off or on. A point can be absent or present.

N-state devices store one digit of N-ary information. Each of the N states has probability 1/N. N-ary coding turns switches to digital or analog settings/positions to represent data and programs. Decimal code uses digits with values 0 through 9.

Device series store one byte of information.

Device-series series store one record of information.

Device-series-series series store one table of information.
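The information measure behind these state counts can be sketched: a two-state device carries one bit, and an N-state device carries log2(N) bits (a minimal illustration).

```python
import math

def information_bits(n_states):
    """Information in bits carried by a device with n equally likely states."""
    return math.log2(n_states)

print(information_bits(2))                 # 1.0 -- a two-state device stores one bit
print(round(information_bits(10), 2))      # 3.32 -- one decimal digit is about 3.32 bits
```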

1.1. Addresses

Bits, bytes, records, and tables have addresses, so processing can access them.

1.2. Data types

Data types can be for categories (qualitative data) or numbers (quantitative data). Categories can be in order (ordinal qualitative data, such as grades and sizes) or have no order (nominal qualitative data, such as colors and genders). Numbers can be discrete (integers) or continuous (real numbers).

1.3. Data, context, contrast, and meaning

State series are data.

In a series, each state has preceding, following, adjacent, and related states (context) (data context). Contexts have possible-symbol sets (code) (data contrast). Symbols have probabilities of being in contexts, which are information quantities.

Syntax defines context relations and contrasts/codes, which contribute to meaning.

2. Information transfer

Binary information transfer/transmission uses a series of absent or present currents or voltages along one channel, or parallel absent or present currents or voltages along multiple channels.

Amplitude modulation varies carrier-wave amplitude, and frequency modulation varies carrier-wave frequency, to make a series of absent or present states. Carrier-wave frequency determines the frequency- or amplitude-modulation range (bandwidth).

The number of possible on-off positions per second is the information-carrying capacity.

3. Input

Inputs are vectors (ensemble). Each vector coordinate has possible events, each with a probability. A high number of possible events makes each state have low probability.

4. Output

Outputs are vectors (ensemble). Each vector coordinate has possible events, each with a probability.

5. Information channel

Information channels link input and output.

Information channels have cross-sections, with a possible number (channel capacity) of carrier waves.

Information redundancy, by series repetition or parallel channels, can overcome noise and reduce transient errors (but not systematic errors).

6. Transducer

Transducers extract information from data, by sampling.

Information can change from analog to digital, or vice versa.

7. Information compression

If a series of bits has the same value, rather than using the whole series of bits, coding can denote the series length. For example, 000000000000000 can have code 1111 (binary for 15), because the number of 0's is 15.

If a code has few symbols, a series of bits is more likely to have the same value, allowing more compression.
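The run-length idea above can be sketched as follows (a minimal illustration, not a production codec):

```python
def run_length_encode(bits):
    """Replace each run of identical symbols with a (symbol, run length) pair."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                      # extend the run of identical symbols
        runs.append((bits[i], j - i))   # record the symbol and the run length
        i = j
    return runs

print(run_length_encode("000000000000000"))   # [('0', 15)] -- 15 is 1111 in binary
print(run_length_encode("0011"))              # [('0', 2), ('1', 2)]
```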

8. Error checking

Error-correcting code can perform the same operation three times and use the majority result.

Logical sum checking finds the logical sum of bits. A weighted check sum uses bit frequencies.

Extra bits (check bits) in bytes can detect errors. Parity checking compares the check bit to the sum of the other bits. Rectangular code codes the message in arrays, with check bits for rows and columns.
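Parity checking as described can be sketched: the check bit makes the total count of 1s even, so any single flipped bit is detectable.

```python
def add_parity(bits):
    """Append an even-parity check bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(bits_with_check):
    """True if no single-bit error is detected (total count of 1s is even)."""
    return sum(bits_with_check) % 2 == 0

word = add_parity([1, 0, 1, 1])   # three 1s -> check bit is 1
print(check_parity(word))         # True
word[2] ^= 1                      # flip one bit to simulate a transmission error
print(check_parity(word))         # False -- the error is detected
```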

9. Information processing

A switching network runs a program. Switching can represent any serial or parallel program and all procedures and objects.

Switching transforms an input state to an output state. Switching allows input values to have negative or positive weights, processing to go forward or backward, and output values to decrease or increase.

10. Encoding and decoding

Decoding is reading code, using the code's format and rules, and expressing the code's information in a format that the reader understands, whereas encoding is writing code, in the code's format and rules, expressing the writer's understanding of the information.
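A minimal encode/decode round trip, using a hypothetical two-bit symbol code (the mapping here is illustrative, not a standard code):

```python
CODE = {"A": "00", "B": "01", "C": "10", "D": "11"}   # hypothetical 2-bit code
DECODE = {bits: symbol for symbol, bits in CODE.items()}

def encode(message):
    """Write the message in the code's format: two bits per symbol."""
    return "".join(CODE[ch] for ch in message)

def decode(bits):
    """Read the code back, two bits at a time, into reader-understood symbols."""
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

print(encode("BAD"))           # 010011
print(decode(encode("BAD")))   # BAD
```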

A language has structures (syntax) for making phrases from words and morphemes and has rules (grammar) for producing phrases and sentences.

Parsing uses syntax and grammar on a sentence or phrase to make a tree diagram showing the relations of parts of speech.

11. Data visualization

Computer-data "visualization" [Tufte, 1983] requires graphics software and an output device with a coordinate system. Visualization places graphical marks (points, lines, circles/ellipsoids, areas, and so on) at locations in the coordinate system. Marks have data properties (encoding channels) for color, hue, brightness, and opacity/transparency. They have encoding channels for horizontal-axis/vertical-axis/third axis positions or for radius/elevation-angle/horizontal-angle positions. They may have encoding channels for size/volume/area/length, orientation/alignment/ordering, shape, and surface texture. Encoding channels have datatypes: nominal datatypes for text/labels, quantitative datatypes for integer or real-number intervals or ratios, ordinal datatypes for greater-than/less-than/equal-to values or position or for before/after time orderings, and temporal datatypes for year-month-day-hour-minute-second timestamps or number-of-time-units time intervals.
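The mark/encoding-channel structure described can be sketched as a declarative specification, in the style of grammar-of-graphics tools (the field and key names here are illustrative, not any particular library's API):

```python
# One mark per data row; each encoding channel maps a data field
# to a visual property of the mark.
spec = {
    "mark": "point",
    "encoding": {
        "x":     {"field": "height",  "type": "quantitative"},
        "y":     {"field": "weight",  "type": "quantitative"},
        "color": {"field": "species", "type": "nominal"},
        "size":  {"field": "age",     "type": "ordinal"},
    },
}

data = [{"height": 1.2, "weight": 30, "species": "a", "age": 2}]

def resolve(spec, row):
    """Resolve each encoding channel to its data value for one row."""
    return {channel: row[enc["field"]] for channel, enc in spec["encoding"].items()}

print(resolve(spec, data[0]))   # {'x': 1.2, 'y': 30, 'color': 'a', 'size': 2}
```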

Procedure-Oriented Structured Programming

Procedure-oriented programming/structured programming reads inputs from addresses, runs information-processing instructions on constants and variables to get values, and writes output to addresses.

1. Variables

Physical quantities are about object states and motions. Physical-object states include mass, charge, location, time, existence, and shape. Physical-object motions include velocity (distance per time unit) and intensity (energy per time unit per area unit). Physical quantities have continuous values (for example, for velocity and intensity) or discrete levels (for example, for number and quanta). In computers, named variables can represent object states and motions (as well as object events/actions such as additions, deletions, object changes, triggering of actions, getting data, and putting data). Variables have datatypes, for integers, real numbers, strings, logical values, times, arrays, or images, so variables can hold continuous values and discrete levels.

2. Data hierarchy

Instruction, address, and input and output data is in an information hierarchy:

Bits have one of a set of possible values. In binary code, bits represent two possible states, 0 or 1. In decimal code, digits can have values 0 through 9.

Bytes are one-dimensional series of bits. Bytes can have any number of bits, but typically have eight bits. Bytes use positional notation. In binary code, a byte with two bits has values 00, 01, 10, and 11. In decimal code, a byte with two digits has values 00 through 99.

Fields (columns) are one-dimensional series of bytes. Fields represent numbers, strings, dates/times, Booleans, and pixels (system datatypes). Data types can be for categories (qualitative data) or numbers (quantitative data). Categories can be in order (ordinal qualitative data, such as grades and sizes) or have no order (nominal qualitative data, such as colors and genders). Numbers can be discrete (integers) or continuous (real numbers).

Records (rows) are one-dimensional series of fields. Records represent sentences, formulas, equations, and vectors.

Tables (files and arrays) are two-dimensional series of records. Tables represent paragraphs, equation systems, matrices, arrays, voxels, blobs, and bitmaps. Tablespaces (file sets) link tables.

Databases have three-dimensional arrays of tables. Databases represent books, tensors, spaces, and vector-graphic pictures.

Computers typically use 64-bit data units (word) to represent data, instructions, and addresses in their registers, processors, and information channels. Typical words have eight eight-bit bytes.

A hash, map, associative array, or dictionary datatype has name-value pairs.

2.1. Program datatypes

Structured programming uses integer, floating-point-number, character, string, Boolean, time, date, and monetary-value system datatypes to build program datatypes.

The reference datatype is a memory or other address (so it comes from the integer datatype). The pointer datatype is a memory address and is the simplest reference datatype.

The set datatype has elements, which may be defined by a rule. The enumerated datatype has named elements (forming a set). Elements are typically constants. Elements may have an order. It is like a categorical variable in statistics. The union datatype is a record datatype that holds only one of its member datatypes at a time.

The list datatype has elements in a sequence. The linked list datatype has elements that each point to the next, making a sequence. The queue datatype is a linked list with first in, first out. The stack datatype is a linked list with last in, first out.
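The queue and stack disciplines above can be sketched with Python's built-in structures:

```python
from collections import deque

queue = deque()
for item in ["a", "b", "c"]:
    queue.append(item)        # enqueue at the back
print(queue.popleft())        # 'a' -- first in, first out

stack = []
for item in ["a", "b", "c"]:
    stack.append(item)        # push on top
print(stack.pop())            # 'c' -- last in, first out
```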

The graph datatype has nodes/vertices and directed or undirected links/edges. The tree datatype has a root node and child nodes, with no cycles.

The blob datatype holds images, sounds, or videos as binary strings/byte strings.

2.2. Arrays

Computer-software arrays are variables with any number of elements/dimensions, each of which can have values. For example, array A(e1, e2, e3) has three elements. Arrays are vectors or numbered lists.

Elements are system datatypes or object references. All elements have the same datatype.

Elements have integer indices in sequence. The example above has indices 1, 2, 3. (Theoretically, indices could be any datatype and/or have mixed datatypes.) To scan arrays, programs read elements in sequence from the first integer index to the last. Typical arrays have a fixed number of elements, so they use a contiguous memory block with elements in the same order as the integer indices.

In procedural-programming languages, multidimensional arrays have dimensions, such as numbers of rows and columns. Arrays with four rows and four columns have 16 discrete elements: B(e11, e12, ..., e43, e44). The first element e11 is in row 1 and column 1. Arrays with four rows, four columns, and four depths have 64 discrete elements: C(e111, e112, ..., e443, e444). The first element e111 is in row 1, column 1, and depth 1.

In the Java programming language, multidimensional arrays are nested arrays. Two-dimensional arrays are one-dimensional arrays whose elements are (references to) one-dimensional arrays, whose elements are values. For example, D(e1(f1, f2, f3), e2(f1, f2, f3), e3(f1, f2, f3)) has nine elements. Three-dimensional arrays are one-dimensional arrays whose elements are references to one-dimensional arrays, whose elements are references to one-dimensional arrays, whose elements are values. For example, E(e1(f1(g1, g2), f2(g1, g2)), e2(f1(g1, g2), f2(g1, g2))) has eight elements.
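Nested arrays of this kind can be sketched as Python nested lists: a two-dimensional array is a list whose elements are row lists (element values here are illustrative).

```python
# Build a 4x4 array as a list of four row lists (16 elements total).
# Element "eRC" is encoded as the number 10*R + C, so e11 -> 11, e44 -> 44.
rows, cols = 4, 4
b = [[10 * (r + 1) + (c + 1) for c in range(cols)] for r in range(rows)]

print(b[0][0])                       # 11 -- element e11: row 1, column 1
print(b[3][3])                       # 44 -- element e44: row 4, column 4
print(sum(len(row) for row in b))    # 16 -- total element count
```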

The awk language uses one-dimensional arrays. Associative arrays are a series of string-index (not integer index) and string-element pairs: F(ia ea, ib eb, ic ec, id ed). For future scanning of the array, awk stores all index strings to which the program has assigned an element string. Such scanning is not by consecutive indices but by consecutive assignments. Multidimensional associative arrays have index strings with separators between index substrings: G(ia ja ea, ib jb eb, ic jc ec, id jd ed). The first substring is the index for the first dimension, the second substring is the index for the second dimension, and so on. Multidimensional associative arrays are still a series of index-element pairs. To scan multidimensional associative arrays, the program can scan by the whole index string, or it can first split the index string into substrings and then scan first by the first substring, then by the second substring, and so on. Associative arrays can change size by adding or deleting index-element pairs, so they can use random-access memory.
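The awk behavior described, string indices, scanning in assignment order, and compound indices joined by a separator, can be mimicked with a Python dict (the separator value mirrors awk's SUBSEP role; the index and element strings are illustrative):

```python
SUBSEP = "\x1c"   # awk joins multidimensional indices with a separator like this

g = {}
g["ia" + SUBSEP + "ja"] = "ea"   # g[ia, ja] = ea, in awk notation
g["ib" + SUBSEP + "jb"] = "eb"

# Scan in assignment order (Python dicts preserve insertion order),
# splitting each compound index back into its substrings.
for key, element in g.items():
    i, j = key.split(SUBSEP)
    print(i, j, element)

del g["ia" + SUBSEP + "ja"]      # associative arrays shrink by deleting pairs
print(len(g))                    # 1
```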

Theoretically, arrays could start with variable size and dimension and/or change size and dimension during program runs.

3. Programs

Programs/algorithms are series of instructions to perform arithmetic, algebraic, calculus, logical, and linguistic operations on inputs to make outputs.

Procedure-oriented programs read input, run application programs, and write output:

The central processor unit (CPU) reads a program instruction and its input data from two addresses in memory registers.

The CPU runs the instruction in its register, to perform an arithmetic, logical, or string operation on input.

The CPU writes output data to an address in a memory register.

The CPU repeats the steps for the next instruction in the program.

The CPU clock sends electric pulses to run the processing steps.

Information moves from one register to another along one information channel (serial processing) or through multiple independent information channels (parallel processing).

3.1. Program instructions

Structured programming uses if/then/else (selection), for/while (repetition), declarations and statements in groups (block structures), and instruction-sequence groups (subroutines). Computer instruction sets have approximately 200 instructions:

Define a constant or variable.

Assign a datatype to a defined constant or variable.

Read data (number, symbol, letter, word, date, or Boolean) from a memory address into the CPU.

Write data from CPU to a memory address.

Using data retrieved from two registers, perform an arithmetic operation and put the resulting data in a register.

Using data retrieved from two registers, perform a logical operation (AND or OR) and put TRUE or FALSE in a register.

Using data retrieved from two registers, perform a comparison operation, to see if numbers are equal, less than, or greater than, and put EQUAL/SAME, MORE, or LESS in a register.

Using data retrieved from two registers, perform a comparison operation, to see if strings are the same, different, longer, or shorter, and put EQUAL/SAME, MORE, or LESS in a register.

Transform strings or extract substrings from strings.

Depending on a condition (or unconditionally), branch to a new location in a program and perform the instruction there, sometimes returning to the previous location.

Determine if statement A and/or B is (or is NOT) true, and then perform statement C (conditional): "IF A, THEN C", "IF A AND B, then C", or "IF A OR B, then C".

If value of i is between m and n, perform x to change i, and then check value of i again (loop): "FOR i FROM m TO n, DO x" or "DO ... WHILE ...".

Execute STOP instruction.
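The selection, repetition, and subroutine instructions listed above can be put into a short structured-programming sketch:

```python
def classify(i, m, n):
    """Subroutine with selection (IF/THEN/ELSE) on one input."""
    if m <= i <= n:
        return "in range"
    else:
        return "out of range"

total = 0
for i in range(1, 5):        # repetition: FOR i FROM 1 TO 4
    total += i               # DO x to change the running value
print(total)                 # 10

print(classify(3, 1, 4))     # in range
print(classify(9, 1, 4))     # out of range
```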

3.2. Operating systems

Computer operating systems manage files, memory, input-output devices, and system configuration:

Run the central processor unit (CPU).

Schedule and run applications in working registers, using the CPU.

Store in, retrieve from, and organize memory devices.

Create, name, copy, move, delete, locate, index, and search files and directories.

Compare, edit, and format files.

Use clocks to step through programs and track time and motion.

Read input from input devices.

Write output to output devices.

Coordinate CPU, memory and working registers, and input and output devices.

Some operating-system functions are:

Import and export

APIs, applets, and init() method

Log in and log out

Processes and variables to open and close

Foreground and background operations

Windows

4. Transformations in grammar

Programming compares with transformations in grammar.

Transformational grammar [Chomsky, 1957] generates and comprehends sentence patterns by mappings/rules. For example, predicate calculus is a transformational grammar.

Phrase-structure rules describe language syntax by relating grammatical units (such as subject, verb, and object) to define basic sentence types. Basic-sentence-type elements are placeholders for grammatical units and their specific phrase or word categories.

Transformations work on basic sentence types to make complex sentences:

The Move function has rules, with constraints, to move phrases and make trees, for each basic sentence. For example, the Move function can interchange phrases. The Move function can add and subtract branches and leaves.

The Merge function combines phrases and sentences by copying and deleting phrases. For example, the Merge function can add phrases that govern, or bind to, another phrase.

Specific sentences are sentence types with placeholders replaced with specific words.
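Phrase-structure rules as described, placeholders expanded into word categories and then specific words, can be sketched as a small rewriting program (the grammar here is a hypothetical toy, not Chomsky's):

```python
import random

# Hypothetical toy phrase-structure rules: each placeholder symbol
# rewrites to one of its listed expansions.
RULES = {
    "S":  [["NP", "VP"]],          # sentence -> noun phrase + verb phrase
    "NP": [["the", "N"]],          # noun phrase
    "VP": [["V", "NP"]],           # verb phrase
    "N":  [["cat"], ["dog"]],      # word category with specific words
    "V":  [["sees"], ["chases"]],
}

def expand(symbol):
    """Recursively replace a placeholder with one of its expansions."""
    if symbol not in RULES:
        return [symbol]            # a specific word: leave it as-is
    choice = random.choice(RULES[symbol])
    return [word for part in choice for word in expand(part)]

print(" ".join(expand("S")))       # e.g. "the cat chases the dog"
```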

Object-Oriented Programming

Function-oriented programming takes processes and use cases as fundamental, emphasizing the data flow. Object-oriented programming takes objects and their classes as fundamental, emphasizing the data hierarchy.

In object-oriented programming, objects are unique (identity): each has a unique name, differs from all other objects, and cannot transform into another object. Objects have properties/attributes/states, such as number, size, or shape. Objects have behaviors/methods, such as writing, reading, transforming strings, and doing arithmetic, which are functions. Functions are independent entities with input(s), processing, and output. Objects and properties can be functions.

Object-oriented programming builds datatypes for information categories. Datatypes have parameters, properties, and procedures, as well as structures and relations. Programs assign datatype instances to constants and variables and run the procedures to return values.

1. Objects/datatypes

Object-oriented programming has datatypes for strings (for qualitative data), numbers (for quantitative data), true/false, dates/times, and arrays. Qualitative data is for categories that have an order (ordinal qualitative data, such as grades and sizes) or no order (nominal qualitative data, such as color and gender). Quantitative data can be discrete (integers) or continuous (real numbers). Structured procedure-oriented and object-oriented programming both use the same system datatypes: integers, floating-point numbers, alphanumeric strings, times, dates, monetary values, and arrays.

Object-oriented programming also has datatypes (spatial datatypes) for geometric objects such as points, straight and curved lines, polygons with straight and/or curved lines, circles, multiple points, multiple lines, multiple polygons, and their collections and indices. Spatial datatypes have information about lengths, areas, volumes, distances, directions, and orientations. Spatial datatypes can be for planar geometries or for curved surfaces and solids. Geographic spatial datatypes represent spatial datatypes for spheres/ellipsoids, so they have latitude, longitude, and distance from center. Spatial datatypes have information about coordinate systems.

Spatial datatypes can use raster data, with a two-dimensional or three-dimensional grid of cells, or vector data, with points, polylines, and polygons at locations in space. Both can have color variables added to locations. Vector data uses vector graphics, which may use vector algebra or geometric algebra. Spatial datatypes include information about names/labels and quantities, ratios, and intervals. Spatial datatypes can transform and scale (zoom). Spatial-datatype series and sequences can model motions.

Object-oriented-programming languages use independent modules (class) to define datatypes for fields, records, files, tables, databases, database query results, windows, images, geometric figures, and algebraic and geometric functions.

1.1. Spatial datatypes

Spatial datatypes are geometric shapes.

The point datatype is zero-dimensional.

The line datatype is one-dimensional and has a pair, or series, of points connected by line segment(s). Lines have length.

The curve datatype is one-dimensional and has a start point, middle point, and end point connected by an arc. Curves have length.

The polygon datatype is two-dimensional and has a start point, lines, and an end point. Polygons have perimeter and area.

The curved-polygon datatype is two-dimensional and has a start point, curves, and an end point. Curved polygons have perimeter and area.

The solid datatype is three-dimensional and has surfaces. Solids have surface area and volume.

Spatial datatypes can represent numbers, letters, symbols, and all geometric figures.

Spatial datatypes have coordinate locations.

Collections of spatial datatypes are necessary to describe scenes and space.

1.2. Spatial-datatype properties

Spatial datatypes are datatype collections. Spatial datatypes have:

Numbers, configurations, and patterns of bytes and datatypes

Magnitude

Area or volume

Shape

Location

Discrete or continuous boundaries

Orientation with or without direction

Constants, variables, and methods about object properties/attributes

Relations, indexes, and importance markers

Any number of spatial datatypes can be at a location.

1.3. Examples

Object-oriented vector-graphics-like text files describe three-dimensional spatial datatypes and their three-dimensional spatial relations. An example is a class-definition text string:

public class class1 { // Define a class.

private static String String1 = "0"; // Define a variable.

public static void main (String args[]) { // main method.

System.out.println("String1 = " + String1); // Print.

}

public class1() { // Constructor method makes an instance.

}

}

A three-dimensional space can represent any text as an array. Therefore, a three-dimensional space can represent a molecule type by listing atoms, atom locations, and bonds. For that molecule type, methods can add or delete atoms, add or remove bonds, and describe molecule vibrations and rotations. Compiling the space in the memory register makes a three-dimensional molecule-instance in the working register.

2. Methods

Datatype classes define procedures (methods):

Define a variable and set its datatype instance to a value.

Make a datatype instance with specific property values (constructor method).

Make a datatype instance with a procedure that returns a value (instance method).

Change variable state or property.

Call a class or method.

Open and close windows.

Open file, write to file, read from file, and close file.

Get table-cell value and set (put) table-cell value.

Indicate if statement about datatype-instance state or property is true or false.

Methods can have parameters and return result values. For example, a function with point coordinates as parameters can calculate distance between two points and return a variable with a real-number value.

A main static method starts the program by assigning datatype instances to variables and running their datatype methods.

2.1. Spatial methods

Spatial methods store, retrieve, update, or query:

Dimension

Shape/Type

Text description

Coordinate geometry

Vector-graphic representation

Boundary

Constructor

Index

3. Packages

Classes are in a group (package) about database, files, output, input, flow control, or other system part. All the packages together make the program (software system).

4. Example of class, instance, data attribute, and method

Make a class file that can create the Point datatype.

Run the class file to put the Point datatype into the system.

Set a variable and assign it the Point datatype, making a specific instance (instantiation) of the class by using the class like a function (constructor):

pt = Point().

Note: The class and constructor have the same name, but the constructor has parentheses to contain the three placeholders for the three data attributes, in this case the three point coordinates.

To provide initial data for the instance, call (or let the system automatically call) an initializer method:

define __init__(self,n,m,p). For pt = Point(n,m,p).

define __init__(self,n=0,m=0,p=0). For pt = Point(), which defaults to Point(0,0,0).

The variable is automatically set equal to "self":

pt = self.

The initializer method creates the x, y, and z coordinate values:

self.x = n, self.y = m, self.z = p.

After initialization, the variables have attribute values:

pt.x = n, pt.y = m, pt.z = p.

Note that all instances have the current settings (state) of their data attributes.

Some system methods are:

set method sets an attribute value: set pt.x = 0.

get method gets an attribute value: get a = pt.x.

print() method sends a string to the printer or display: print(pt.x).

Define methods in the class file:

define CalculateDistanceFromOrigin(self).

Call methods only from instances:

pt.CalculateDistanceFromOrigin().

Parameters have datatypes. Methods can take parameter values:

define CalculateDistanceBetweenPoints(self,n,m,p).

Methods can return a value:

return ((n - pt.x)^2 + (m - pt.y)^2 + (p - pt.z)^2)^0.5.

Methods can return a value for the datatype:

return Point(n,m,p).
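Collecting the example above into runnable Python (Python spells the initializer __init__; the midpoint method is an invented illustration of a method returning a value for the datatype):

```python
import math

class Point:
    """The Point datatype from the example above."""

    def __init__(self, n=0, m=0, p=0):
        # The initializer creates the x, y, and z coordinate values.
        self.x = n
        self.y = m
        self.z = p

    def calculate_distance_from_origin(self):
        return math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)

    def calculate_distance_between_points(self, n, m, p):
        # Distance from this instance to the point (n, m, p).
        return ((n - self.x) ** 2 + (m - self.y) ** 2 + (p - self.z) ** 2) ** 0.5

    def midpoint_to(self, n, m, p):
        # A method can also return a value for the datatype itself.
        return Point((self.x + n) / 2, (self.y + m) / 2, (self.z + p) / 2)

pt = Point(3, 4, 0)          # instantiation: the class used as a constructor
print(pt.calculate_distance_from_origin())   # 5.0
```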

5. Object-oriented-programming language

An object-oriented-programming language has an operating system, datatypes, compiler, machine, memory, instruction set, and run-time environment.

The operating system runs the compiler, machine, and run-time environment using memory, datatypes, and instruction set.

5.1. Structured files for datatypes

Memory subdirectories (like packages in Java) have structured files (like classes in Java) that describe datatypes. Classes are collections of fields, methods, and attributes.

A structured file (like the Object class in Java) describes datatypes in general, and all classes are subclasses of that datatype.

Classes can have superclasses and subclasses. Some classes (like final classes in Java) cannot have subclasses.

An interface/protocol/abstract class (abstraction of a class), such as the Serializable interface in Java, is a set of method signatures (names, parameters, and return types) that a class (or set of classes) promises to provide. Classes that supply those methods implement the interface or adopt the protocol, which allows reading and writing datatypes from/to those classes.
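A sketch of how an interface constrains classes, using Python's abc module (the Serializable name and its write/read methods here are illustrative, not Java's actual interface):

```python
from abc import ABC, abstractmethod

class Serializable(ABC):
    """Interface: method signatures only (names, parameters, return types)."""

    @abstractmethod
    def write(self) -> bytes: ...

    @abstractmethod
    def read(self, data: bytes) -> None: ...

class PointRecord(Serializable):
    """Implements the interface (adopts the protocol) by supplying the methods."""

    def __init__(self, x=0):
        self.x = x

    def write(self) -> bytes:
        return str(self.x).encode()

    def read(self, data: bytes) -> None:
        self.x = int(data)

record = PointRecord()
record.read(b"7")        # reading the datatype into the class
data = record.write()    # writing the datatype out of the class
```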

Datatypes have links to other datatypes.

Constants and variables

Classes define constants and variables (fields in Java) and then assign them a datatype. Some constants and variables (attributes in Java) belong to named objects of the class.

Classes may give constants and variables a value.

Some variables (like final variables in Java) must be initialized and then cannot be changed.

Procedures and functions

Classes define procedures and functions (methods in Java), such as set, get, print, and display. Procedures do not return values (void method in Java). Functions return a variable value (for example, int method in Java returns an integer).

Procedures and functions have instructions, such as FOR...NEXT loops, IF...THEN conditionals, comparisons, and all arithmetic, algebraic, calculus, linguistic (string), and logical operations.

Procedures and functions can have arguments (parameters).

Procedures and functions can call procedures, functions, datatypes, and datatype instances.

If a datatype has a "main" procedure, which can have arguments, the program runs that procedure first.

Other procedures open, write to, read from, and close files. The files are the arguments.

A procedure (like the Constructor method in Java) uses the class to create datatype instances.

Functions return a value. Some functions write and read bytes from files.

Some procedures and functions can be called by any method (public method in Java). Some procedures and functions can be called only by instance methods in the same class (private method in Java). Some procedures and functions can be called only by instance methods in the same class, package, or subclass (protected method in Java).

Some procedures and functions are only in a class, not its instances (static method in Java).

Control mechanisms start, stop, and synchronize processes. For example, procedural loops have reverberations/resonances that extend over space and time to maintain signals, allowing space and time for parsing.

5.2. Compiling

The operating system has a compiler to convert source-code text-like formatted files/information (like java files), in memory, into machine-language-like formatted files/information (like bytecode Java class files), in working registers. Compiled files are streams of bytes about one datatype and have a structure:

References to already-compiled classes/datatypes, interfaces, fields, methods, attributes, and constants

Permissions and access

Class/interface information

Fields with datatypes

Methods in the class

Attributes (constants or variables) of named objects of the class

5.3. Virtual machine

A virtual machine (like the Java Virtual Machine) is software that is an abstract computing machine. It can be hardware-, operating-system-, host-, and network-independent (like HTML and Java). Source-code programs can embed inside other programs (such as HTML pages). Virtual machines implement a programming language (such as Java).

Virtual machines have a specification, implementation, and instance:

The specification document (such as the Java Virtual Machine Specification) describes virtual-machine implementation requirements.

Virtual machines implement the specification (the Java Development Kit has several implementations of the Java Virtual Machine).

The run-time environment/system (such as the Java run-time environment) makes an instance of the virtual-machine implementation on the hardware/software (including microcode) platform.

5.4. Instructions and run-time

Virtual machines have an instruction set (such as the Java Virtual Machine instruction set). An instruction is an operation code (one-byte opcode in Java) followed by zero or more arguments/data (operands in Java). Instructions are specific to datatypes (in Java, iadd and fadd add integers or floating-point numbers).

Virtual machines know only a binary-file format (not the source-code programming language). A binary file (such as a Java class file) contains virtual-machine instructions, a symbol table that defines the placeholders used, and some information about the binary file itself.

Virtual machines dynamically load, link, and initialize classes and interfaces:

Loading reads a named class binary file and creates a class or interface datatype.

Linking combines the class or interface datatype with the run-time state of the virtual machine, ready for execution.

Initializing executes an initialization method to make a class instance, interface, or array.

The virtual-machine interpreter runs a loop:

Calculate program-counter value.

Fetch that value's operation code, and fetch that operation code's operands (if any).

Execute the operation-code action.
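The loop above can be sketched as a toy stack machine (the program layout and dispatch here are illustrative, not the actual Java Virtual Machine encoding, though iadd echoes the JVM's integer-add opcode):

```python
def run(program):
    """Minimal fetch-decode-execute interpreter loop for a toy stack machine."""
    stack, pc = [], 0
    while pc < len(program):
        opcode = program[pc]                 # fetch the operation code at pc
        if opcode == "push":
            stack.append(program[pc + 1])    # fetch the operand
            pc += 2                          # advance past opcode and operand
        elif opcode == "iadd":
            b, a = stack.pop(), stack.pop()  # execute the operation-code action
            stack.append(a + b)
            pc += 1
        elif opcode == "halt":
            break
    return stack

print(run(["push", 2, "push", 3, "iadd", "halt"]))  # [5]
```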

Virtual machines can support many execution threads simultaneously, using separate program-counter registers (such as pc registers in Java).

5.5. Datatypes and run-time

Virtual machines work with datatypes:

They have system datatypes (primitive types in Java): numeric, Boolean, and address-return (returnAddress in Java). Primitive types can be declared as local variables, object-instance variables, or class (static) variables.

They create class and interface datatypes (reference types in Java).

They make class instances and create arrays (objects in Java). Array elements are primitive-type values or object references. All array elements have the same primitive type or refer to objects of the same type. Object references can be declared as local variables, object-instance variables, or class (static) variables. After declaring an object reference, it must be initialized by referencing an existing object or calling "new". Multidimensional arrays are arrays of arrays, of any dimension, and have more than one index. Arrays themselves are objects; only array references can be declared as variables.

Variables, constants, method arguments, method returns, and operands have values (primitive-type values or reference-type values in Java).

Note that the compiler does all datatype checking, so virtual machines do not check datatypes.

5.6. Memory areas and run-time

Virtual machines manipulate memory areas at run time.

At run time, virtual machines set up a stack or queue. Queues are linear collections that add local variables at one end and remove them at the other (first-in-first-out). Stack memory is a vector of local variables, which can be primitive types or object references. It works last-in-first-out:

The Push command puts a local variable on top of the stack.

The Pop command takes the local variable off the top of the stack.

The Peek command tells which local variable is at the top.

The Search command tells the position of a local variable in the stack.

The Empty command tests whether the stack is empty.
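These commands mirror java.util.Stack; a minimal Python sketch:

```python
class Stack:
    """Last-in-first-out vector of values, modeled on java.util.Stack."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)      # put the item on top
        return item

    def pop(self):
        return self._items.pop()      # take the item off the top

    def peek(self):
        return self._items[-1]        # look at the top without removing it

    def search(self, item):
        # 1-based distance from the top, or -1 if absent (Java's convention).
        for depth, value in enumerate(reversed(self._items), start=1):
            if value == item:
                return depth
        return -1

    def empty(self):
        return not self._items
```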

At run time, virtual machines set up a heap memory to hold all objects (class instances and arrays), with their object-instance variables. If the heap is full, virtual machines remove unused objects (garbage collection).

At run time, virtual machines set up a memory area for methods, with their class variables.

Note: At run time, virtual machines also set up memories for thread stacks, native handles, and virtual-machine internal data structures.

5.7. Decompiling

Decompiling inputs binary code and outputs high-level code. It can recover lost source code or help make the code compatible with other applications.

Decompiling first converts the binary code into assembly-language code (disassembly), which typically has 1:1 correspondence with the binary code, ready for analysis and synthesis into high-level code.

Decompiling finds structures and semantics, possibly using known "metadata" about the code:

Identifiers, symbols, constants, procedures, and functions and their declarations (or locations) (symbol tables)

Variables and methods of classes (class structure)

Variable names

Number, order, and type of inputs and outputs (arguments) of methods (method signature), functions, and subroutines (type signatures)

Program architecture and the "main" function and its entry point

Conditionals, loops, and other code structures

Execution semantics

Libraries and their interfaces

Descriptive and debugging information (comments).

Decompiling traces where register contents are defined and calculated (data flow analysis), to assign names to variables and constants.

Decompiling finds the data types (integer, real number, string, date, Boolean, pointer, array) associated with register and memory locations (type analysis).

Decompiling analyzes the program for semantics and syntax (program analysis). Binary code may combine some instructions (an idiom) for execution semantics, such as for subroutine calls, exception handling, switch statements, and if/then, for/next, and while/do statements.

Decompiling combines semantics of instructions to make complex expressions (expression propagation). Decompiling makes if/then/else conditional statements, while loops, and for/next loops (structuring).

Decompiling then outputs the high-level code (code generation).

6. Object-oriented database-management principles

Objects have constants, variables, locations, times, descriptions, attributes, and parameters. Objects have contexts as spatial, temporal, and functional relations (for example, located at the same place).

Object-oriented database-management and programming uses encapsulation, polymorphism, classes and instances, and subclasses and subclass inheritance.

6.1. Encapsulation

Objects have both data elements, such as identifiers, attributes, and coordinates, and procedures/methods, such as calculate length, area, or angle, for data-element processing. Datatypes/classes encapsulate data elements, attributes, parameters, variables, constants, and methods. Accessing an object executes the method on the data elements.

6.2. Polymorphism

For the same object, output can differ because the method first identifies the data element and then uses the method that applies to that data element. Polymorphism can make classes have categories.

6.3. Class and instance

Objects can have the same formal structure of data elements and methods. For example, curved lines formally model many things in nature and engineering. Classes have instances. The curved-line class has instances with specific shapes, such as a circumference, a perimeter, or a power-function graph.

6.4. Subclass and subclass inheritance

Child classes have the same data elements and methods as a parent class, plus more-specific data elements and their methods. Subclasses are consistent with class data elements and methods. (It is possible for a subclass to have more than one parent class.) From basic datatypes and operations, subclasses and inheritance organize data and programs.
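A minimal sketch of class, instance, subclass inheritance, and polymorphism, using the curved-line example above (class names are illustrative):

```python
import math

class CurvedLine:
    """Parent class: any curve (including straight lines) with a length."""
    def length(self):
        raise NotImplementedError

class Circle(CurvedLine):
    """Subclass: inherits the parent's interface, adds a radius."""
    def __init__(self, radius):
        self.radius = radius
    def length(self):
        return 2 * math.pi * self.radius   # circumference

class Segment(CurvedLine):
    """Subclass: a straight line of a given span."""
    def __init__(self, span):
        self.span = span
    def length(self):
        return self.span

# Polymorphism: the same call dispatches to each instance's own method.
lengths = [round(c.length(), 2) for c in [Circle(1), Segment(5)]]
```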

7. Object-oriented user interfaces

Object-oriented user interfaces, such as graphical user interfaces, have a "desktop", dialog boxes, and glyphs.

7.1. Desktop

Desktop/screen icons/objects include file folders, shortcuts to programs, recycle bin, and taskbar icons, that the user can point to, click, double-click, drag, and drop.

7.2. Dialog boxes

Dialog boxes have checkboxes, command buttons, radio/option buttons, sliders, spinners, and drop-down lists.

7.3. Glyphs

Glyphs are graphics displays and include screen savers, blinking cursors/arrows, and dot types that indicate object type and GPS position on a map.

7.4. Graphical user interfaces

Computer graphical user interfaces have a:

Display, typically made to look like a desktop

Movable and shapeable icons and windows, for files and programs

Menus, to select a command to operate on selected item

Standard text and graphics formats, such as photos, tables, and spreadsheets

Center of attention/point of action, typically a pointer

Pointer mover/dragger, such as a mouse, trackball, finger pad, or joystick

Item activator/selector/releaser, such as a mouse button or keyboard key

Processes running in the background

Vector graphics and rasterization work with screen two-dimensional-page coordinates to make the display.

8. Object-oriented programming systems

Object-oriented programming systems use flowcharts to design code. Arranging flowchart objects (widgets) models the program's logic. Widgets have dialog boxes. Compiling writes actual code.

Examples are Visual Basic and Visual C. Compiling writes actual Basic or C code.

Many applications, such as Microsoft Word, include the ability to make short programs (macros).

People can design data displays in Microsoft Excel.

9. Spatial and temporal programming

Spatial programming takes account of register, processor, input, processing, and output space locations, topology, and shapes. For example, it represents three-dimensional arrays of computing elements, such as cellular automata, which can have regions of connected elements. Regions can attract or repel, or otherwise link, through channels. Regions and channels can move, make copies, appear, and disappear. Regions and channels can send signals about property values to, and receive signals about property values from, the same or other regions or channels.

Temporal programming includes sequential programming, parallel programming, distributed programming, and multiprocessor programming.

10. Digital twinning

Computer models describe a physical or abstract object and its processes. Computer models can run one simulation of a physical or abstract process.

Computer "twinning" (digital twinning) [Grieves, 2014] has a dynamic/kinetic model of a physical object, receives real-time data about the object, analyzes the data and updates the model based on the data using software instructions and/or control signals, and runs multiple, interconnected, and/or related simulations of the object's physical processes, possibly analyzing the data and updating the model.

Digital twinning uses sensors at object boundaries, and representations/memories of space information, to define object boundaries, model parts/components, construct virtual space, and simulate motions, flows, changes, transformations, accelerations, forces, functions, and processes. Digital twinning simulates component/process/system interactions and timings. Digital twinning can simulate behaviors, test hypotheses, optimize performance, and develop new processes.

Vector Graphics

Computer vector graphics [Foley et al., 1994] uses curves (including straight lines) to connect points (at coordinate locations) to make polygons, polyhedrons, and all three-dimensional shapes. For example, computer fonts are outlines using quadratic and cubic functions ("fills" fill in the outlines).

Vector graphics is object-oriented. Objects include lines, polylines, splines, polygons, Bézier curves, bezigons, circles, ellipses, polyhedrons, and ellipsoids. Objects have an order of overlap from observer. Illumination sources and reflecting, refracting, and/or diffracting surfaces connect by ray casts and ray traces.

Scripts perform operations on objects: translation, rotation, reflection, inversion, vibration, bending, scaling, stretching, skewing, all affine transformations, order of overlap from observer, embedding, nesting, and combining objects (such as with set operations) to make new objects. The same pattern can change position, brightness, color, size, scale, and shape.

To display objects requires converting to the output-display pixel array, which has a coordinate system. A page-drawing program assigns object points to spatial coordinates.

1. Geometric-figure representations

Vector graphics represents geometric figures for use in drawing programs. As examples, Scalable Vector Graphics (SVG) is part of both XML and HTML page-drawing programs [www.w3schools.com/graphics/svg_intro.asp], and HTML scalable fonts use vector graphics.

Vector graphics represents images with text, which allows editing, search, indexing, and scripts.

2. Geometric figures

Vector graphics represents planar geometric figures:

Point and pixel

Line

Circle and ellipse

Polyline has three or more points, connected by a series of lines, and includes angles.

Polygon has three or more vertices, connected by a series of lines, and includes triangles and rectangles.

Path has three or more points, connected by a series of absolute-or-relative-position commands: moveto, lineto, horizontal lineto, vertical lineto, curveto, smooth curveto, quadratic Bézier curve, smooth quadratic Bézier curveto, elliptical arc, and closepath.

Text can have different sizes, fonts, boldness, italics, transparencies, shadows, and other effects. Text can rotate and otherwise transform. Text can be hyperlinks.

Spline descriptors can represent any point, line, or curve. Linear, quadratic, and cubic Bézier curves (Bézier spline) (Bézier polygon) (bezigon); Catmull-Rom splines; and non-uniform rational basis splines (NURBS) can represent region boundaries.

Vector graphics represents surfaces in different ways:

Parallel splines or spline grids

Polygons and their vertices

Non-Uniform Rational B-Spline areas (NURBS), with knot parameters and weighted control points

Constructive Solid Geometry Boolean operations on shapes

Vector graphics models surface location, orientation, curvature (convexity or concavity), shape, pattern, fill, and filters. Object surfaces have consistent indices for direction and distance [Glassner, 1989].

Splines can generate surfaces that represent voxels, boxes, parallelepipeds, spheres, ellipsoids, cylinders, cones, and other volumes.

3. Geometric-figure parameters

Geometric figures have parameters:

Center, vertex, endpoint, and control-point coordinates

Circle radius, ellipse radii, rectangle height and width, and other geometric-figure lengths

Line color, width, transparency, dashing, and end shape (square, round, arrow, flare)

Fill color, gradient, shading, transparency, pattern (foreground and background), and texture (smoothness-roughness and spacing, which is sparse or dense, random or non-random, clustered or dispersed spatial distribution)

Filters to diffuse, fade, blur, blend, flood, merge, offset, blacken or whiten, make linear or radial gradient, and change lighting direction and quantity

For example, the circle descriptor has parameter values for radius, center point, line style, line color, fill style, and fill color.
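The circle descriptor above might be stored as a parameter dictionary (a hypothetical layout; key names are illustrative only):

```python
# Hypothetical circle descriptor: parameter names are illustrative, not SVG's.
circle = {
    "type": "circle",
    "center": (50, 50),          # center-point coordinates
    "radius": 40,
    "line": {"style": "solid", "color": "black", "width": 2},
    "fill": {"style": "solid", "color": "red"},
}
```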

4. Vector-graphics file

To describe a three-dimensional scene, a text file has independent plane polygons (closed curves with Bézier splines), most commonly triangles, at spatial coordinates, marked with distance from viewer (z-buffer).

Geometric figures have locations, adjacencies, invariants, and symmetries.

4.1. Structural files

Vector-graphics files are like structural files. Structural files describe spatial relations among objects. For example, chemical MOL, XYZ, and TGF files describe molecules using atom relative positions and atom connections, in three dimensions.

5. Output

Rendering (image synthesis) starts with a two-dimensional or three-dimensional model (in a scene file), which has a viewpoint, illumination sources, and reflecting, refracting, and/or diffracting objects, with shapes, textures, and refractive indices, with shadows, at depths and horizontal and vertical coordinates, and specifies the red, blue, and green levels for all pixels of a two-dimensional display.

Rendering techniques are:

Rasterization includes scanline rendering (described below).

Ray casting uses rays from viewpoint to display pixel to reflecting, refracting, and/or diffracting object with shape, texture, and refractive index, taking account of illumination sources and reflections, refractions, and/or diffractions.

Ray tracing uses multiple rays going all the way back to illumination sources or a specific number of reflecting, refracting, and/or diffracting objects. It includes path tracing.

5.1. Rasterization

The graphical processing unit (GPU) transforms file information to give intensities to pixels of two-dimensional coordinate systems for dot-matrix printers, laser printers, and display monitors.

The GPU performs rendering operations:

Using the Accelerated Graphics Port, the CPU uploads the vertex lists (of each polygon's vertices and their adjacent faces and vertices), texture maps (of color, surface texture, and surface details), and state information to the GPU. State information includes the projection matrix, which shows how to project the three-dimensional image onto the two-dimensional screen. The view matrix describes camera position and orientation. The near and far clipping planes define allowable distances from camera. The lighting model shows how light reflects from objects onto other objects (global illumination) to make indirect light.

Vertex shaders get vertex triplets (from vertex memory), use the projection and view matrices to transform their coordinates, use the clipping plane to include or exclude them from view, and send screen-coordinate vertex triplets to rasterizers.

Rasterizers make triangles from screen-coordinate vertex triplets and send triangle pixel positions, depth levels, and (interpolated) texture coordinates to fragment shaders.

Fragment shaders get texture maps (from texture memory), get lighting-model information, calculate pixel colors, and send shaded pixels to the screen.

Vertex shaders, rasterizers, and fragment shaders are specialized for vector-processing types. Each polygon type has a rasterization algorithm. For example, the line-segment rasterization algorithm inputs beginning and ending points, finds line direction, starts at beginning point, finds points along line at the specified x and y interval, and sends pixels that are on the line (within the margin of error) to the output device.
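The line-segment algorithm described above can be sketched as a DDA-style routine (a simplification assuming nonnegative integer endpoints; GPUs use refined variants such as Bresenham's algorithm):

```python
def rasterize_line(x0, y0, x1, y1):
    """Step along the longer axis, emitting the nearest pixel at each step."""
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [(x0, y0)]
    dx = (x1 - x0) / steps      # x interval per step (line direction)
    dy = (y1 - y0) / steps      # y interval per step
    # Round each sample to the pixel nearest the true line.
    return [(int(x0 + i * dx + 0.5), int(y0 + i * dy + 0.5))
            for i in range(steps + 1)]

print(rasterize_line(0, 0, 4, 2))
```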

A polygon rasterization algorithm can work like this:

Start at a point and go around the closed curve counter-clockwise.

Transform polygon coordinate system into output coordinate system (projection step), so points and pixels correlate.

Segment polygon splines to make only upward or downward splines (monotonic step).

Segment monotonic splines into straight lines from pixel to pixel, marking starting pixels and ending pixels (run-limit step).

Use the line-segment rasterization algorithm.

Scan the pixels left to right, row by row, and mark pixels as either in or out of the polygon, to make runs of in or out pixels (scan step).

Send pixel runs to drawing program to output on the printer or screen (print step).

The GPU rasterizes all scene polygons to make the final pixel display.
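The scan step above can be approximated by a simple even-odd scanline fill over straight-edged polygons (a sketch; real rasterizers also segment splines, handle antialiasing, and emit runs to the output device):

```python
def scanline_fill(vertices, width, height):
    """Even-odd scanline fill of a straight-edged polygon (a sketch)."""
    inside = [[False] * width for _ in range(height)]
    n = len(vertices)
    for row in range(height):
        y = row + 0.5                                   # sample at pixel center
        crossings = []
        for i in range(n):
            (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
            if (y0 <= y < y1) or (y1 <= y < y0):        # edge crosses this scanline
                crossings.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        crossings.sort()
        for j in range(0, len(crossings) - 1, 2):       # pair crossings into runs
            for col in range(width):
                if crossings[j] <= col + 0.5 <= crossings[j + 1]:
                    inside[row][col] = True             # pixel is in the polygon
    return inside

grid = scanline_fill([(1, 1), (7, 1), (7, 5), (1, 5)], 8, 6)
```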

Computer and Robotic Vision

Computer and robotic-vision algorithms are examples of vision programs. Algorithms are for image capture, distance and location calculation, region and boundary detection, feature detection, object recognition, and image display [Aleksander, 1983] [Bülthoff et al., 2002] [Horn, 1986].

1. Image capture

Cameras capture images as arrays of opaque, translucent, or transparent pixels with red, green, and blue intensities.

Algorithms remove noise and crosstalk and correct errors such as low signal-to-noise ratio, high variance, high background, bright neighbors, rejected pixels, and saturated pixels. Averaging removes noise. Feedback reduces errors.

For image comparison, image analysis aligns images by rotation and translation, resizes and skews images by changing image dimensions, and normalizes intensities by adjusting brightness and contrast.

2. Locations and distances

Algorithms can order objects from front to rear.

Network mapping, and/or representing graphs as numbers (topological indexes), can describe topology.

Simultaneous localization and mapping (SLAM) can find locations in environments (self-localization).

Nearest-neighbor algorithms measure the Euclidean (or similar) metric to make clusters of neighboring points and group features.
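The nearest-neighbor step can be sketched with a plain Euclidean metric (a minimal illustration, not a production k-d-tree implementation):

```python
import math

def euclidean(p, q):
    """Euclidean distance between two points of any dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def nearest_neighbor(point, candidates):
    """Return the candidate closest to point under the Euclidean metric."""
    return min(candidates, key=lambda c: euclidean(point, c))

print(nearest_neighbor((0, 0), [(3, 4), (1, 1), (5, 0)]))  # (1, 1)
```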

Two cameras can find camera position (self-calibration). Each first-camera pixel corresponds to a curve (epipolar line) in the second-camera image, and the projection of the other camera's focal point is the epipole. The fundamental matrix describes this epipolar geometry for uncalibrated cameras. Given the camera intrinsics (focal length, chip size, and optical-center coordinates), the essential matrix gives the cameras' relative orientation and position. After finding camera position, algorithms can use the epipolar transform, absolute conic image, and Kruppa equations to calculate the distance metric.

Using a distance metric, algorithms can calculate object distances.

3. Regions and boundaries

Image-processing algorithms find regions and boundaries:

Determine background as a large surface with small variation.

Find fiducials as marks or shapes with known coordinates.

Isolate intensity peaks by shrinking image, aligning the grid with fiducials, and aligning the centroid by intensity weighting.

Derivatives with respect to a non-constant variable calculate invariants.

Lateral inhibition defines boundaries. Spreading activation maximizes regions.

Constraint satisfaction defines boundaries and regions.

Association connects points. Gap-filling links separated lines or regions.

Best fit and other statistics and probabilities help identify points, lines, and surfaces.

4. Feature detection

Spatial relations of points, lines, angles, and surfaces make patterns and features. Features have parameters, such as length, angle, and color, so they are vectors.

Feature detection finds invariants, such as angles and orientations.

4.1. Descriptors

Structural files, templates, and conditional rules can describe features as spatial relations among points, lines, angles, and surfaces.

Features have distinctive points, and point patterns, that can have text descriptions (descriptor), with factors and parameters.

4.2. Methods

Image segmentation uses color, distance, and shape to split images into smaller regions.

Factor analysis finds feature factors.

Finding principal components removes unimportant parameters.

Hill-climbing methods find parameter-space local-maxima peaks and points.

Least-squares regression finds lines.

Heuristic searches can find boundaries and features.

Similarity and dissimilarity methods calculate disparities and disparity rates to find boundaries and features.

Analyzing motions can find structures and features.

4.3. Algorithms

Specific-feature detection algorithms perform these steps:

Find distinctive points, taking account of scale and orientation.

Estimate orientations.

Calculate gradients and make gradient histograms.

May make a feature vector and then reduce its dimensions.

Normalize scale and orientation of points, orientations, gradients, and features to a standard scale and orientation.

For example, corner detectors have large gradient changes, or increased densities, in two directions from a distinctive point, accounting for scale and orientation.
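The gradient-histogram step above can be sketched for a list of gradient components (an illustrative HOG-style cell, ignoring block normalization):

```python
import math

def gradient_histogram(gx, gy, bins=8):
    """Histogram of gradient orientations, weighted by gradient magnitude."""
    hist = [0.0] * bins
    for dx, dy in zip(gx, gy):
        angle = math.atan2(dy, dx) % (2 * math.pi)   # orientation in [0, 2*pi)
        magnitude = math.hypot(dx, dy)
        hist[int(angle / (2 * math.pi) * bins) % bins] += magnitude
    return hist

# Three unit gradients pointing right, up, and left.
h = gradient_histogram([1, 0, -1], [0, 1, 0])
```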

Some algorithm types are:

Speeded-up robust features (SURF)

Scale-invariant feature transform (SIFT)

Histograms of oriented gradients (HOG)

Gradient location and orientation histogram (GLOH)

The system can compare the detected feature to all feature descriptors stored in a library, to try to find a match. Note: Stored features are invariant to translation, rotation, scale, and other transformations. Noise, blur, occlusion, clutter, and illumination change have little effect.

5. Object recognition

Objects have specific features and are combinations of features.

Similar objects have similar feature vectors.

Clustering methods find feature groups and so objects.

Templates describe feature-placeholder spatial relations and so objects. For example, line, circle, and ellipse test sets use accumulator-space coordinates. Structure spaces describe feature-placeholder spatial relations using principal components.

Conditional rules describe feature spatial-relation information and so objects.

Sketching methods find contrasts to establish boundaries and so objects.

Separating scenes into multiple dimensions, looking at sub-patterns, and then eliminating dimensions can generalize to objects.

Using multiple inputs from different viewpoints can generalize to objects.

6. Image display

Image displays put pixel, voxel, and/or vector-graphics information onto a screen or printed page.

Volumetric displays can have multiple parallel display planes or a rotating plane to sweep out volume.

Neural Networks, Connectionism, and Deep Learning

Neural networks [Arbib, 2003] [Hinton, 1992] have a set of information channels with layers of distinctive connections with weights.

Neural networks have an input layer, middle layer, and output layer. Each input-layer element sends to all middle-layer elements, which weight the inputs. Each middle-layer element sends to all output-layer elements, which weight the inputs.
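The three-layer arrangement above can be sketched with plain lists (the weights here are made-up values for illustration; a sigmoid is one common choice of unit):

```python
import math

def dense(inputs, weights, biases):
    """One layer: each element weights all inputs from the previous layer."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def sigmoid(v):
    """Squash each element into (0, 1)."""
    return [1 / (1 + math.exp(-x)) for x in v]

# Made-up middle-layer and output-layer weights and biases.
W1, b1 = [[0.4, -0.2], [0.3, 0.8]], [0.0, 0.0]
W2, b2 = [[1.0, -1.0]], [0.0]

x = [1.0, 0.5]                            # input-layer state (input vector)
hidden = sigmoid(dense(x, W1, b1))        # middle-layer state
output = sigmoid(dense(hidden, W2, b2))   # output-layer state (output vector)
```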

Neural networks can use feedback from the next layer to the previous layer (recurrent processing). (Neural networks do not use same-layer cross-connections, because they add no processing power.)

Neural networks have many units, and so still work if units fail.

Neural networks find output from input. An output can represent a variable, such as a quantity of green, and/or a set of outputs can represent a vector/category, such as a color. The set of weights can represent a state.

Learning requires mechanisms to make internal changes. Learning has feedback and feedforward. Multiple examples and self-learning train the network.

1. Input layer

Input-layer states represent input values. Inputs are vectors.

2. Output layer

Output-layer states represent output values. For example, each output-layer element can have value 0 or 1. To distinguish alphabet letters, the output layer can have 26 elements. For each letter, one element has value 1, and the other 25 elements have value 0.

Alternatively, each output-layer element can have an integer or real-number value. For example, to calculate a variable, an output layer can have one element with a numerical value.

Outputs are vectors. In the alphabet example, for letter A, the 26-coordinate vector is (1, 0, 0, ..., 0, 0, 0). If output-layer elements have real-number values, output states are vectors in vector space.
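The 26-element letter encoding above can be sketched as:

```python
def one_hot(letter):
    """26-element output vector: value 1 at the letter's element, 0 elsewhere."""
    vec = [0] * 26
    vec[ord(letter.upper()) - ord('A')] = 1
    return vec

print(one_hot('A')[:3])  # [1, 0, 0]
```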

3. Processing

Neural networks transform input vector into output vector and so are tensor processors. Multiple-nerve-bundle and multiple-neural-network input or output is a vector field, and a tensor transforms input into output. Neural networks use excitation and inhibition of flows started by an input state to make a flow pattern that represents an output state.

Neural networks allow input values to have negative or positive weights, processing to go forward or backward, and output values to decrease or increase.

4. Connectionism

Computers can use models of neural networks for automatic (or human-aided) machine learning (connectionism). Connectionist learning uses both top-down and bottom-up processing and resets connection strengths, based on correct or incorrect responses, to find, recognize, and label patterns.

5. Deep learning

Neural networks can have many hidden layers (deep learning [Goodfellow et al., 2016]). The layers might be an input layer, filter layer (that uses convolution to find local features), rectified linear units (that block signals below a threshold level and so transmit only signals above threshold), pooling layer (that transmits only significant features), further set of input-filter-rectified linear unit-pooling modules (that look for feature configurations), and classification layer (that labels objects).

Deep learning uses large neural networks with multiple layers and trains with large quantities of text, photos, and/or sounds.

6. Capabilities

Neural networks can perform algorithms. They can represent any serial or parallel program and all procedures and objects. Neural networks have the capabilities of general-purpose computers, but with serial and parallel, and analog and digital, processing. They can perform mathematical, linguistic, logical, and input/output operations. They can store and recall memories.

6.1. Patterns

Neural networks can recognize one or more patterns, distinguish between patterns, detect pattern unions and intersections (such as words), and recognize patterns similar to an original pattern (and so make categories).

6.2. Analysis and synthesis

Neural networks can analyze and discriminate, using weighted connections to find an output. They can define boundaries in space and time. They make discriminations among locations, times, sensations, features, objects, and scenes.

Neural networks can synthesize and find categories. They find associations and relations among locations, times, sensations, features, objects, and scenes. They use inference. They can construct objects and events.

6.3. Operations

Neural networks have comparing, contrasting, associating, correlating, analyzing, connecting, synthesizing, inferring, modeling, decoding, interpreting, inverse-rendering, inverse-mapping, inverting, and internalizing methods/operations on present, past, local, and global perceptual information.

6.4. Images and maps

Neural networks can describe and manipulate images and maps and their space locations. They can find rates and gradients.

6.5. Language

Neural networks can represent language vocabulary and syntax.

They can use descriptions/labels/names for colors, locations, and objects.

They can have a system of meaning (meaning trees) based on references and associations.

7. Holographic processing

Holography [Gabor, 1946] can act like a neural network.

Making a holographic image has two steps:

Illuminate the scene with a coherent-light reference beam and record the two-dimensional interference pattern on a photographic plate (or the three-dimensional interference pattern in a digital array).

Illuminate the interference pattern with the coherent-light reference beam and observe the three-dimensional scene in space.

For a holographic neural network, intensities from three-dimensional space points are the three-dimensional input layer. For example, the layer has n^3 points, where n is the number of points along one dimension. Those points have different distances to the elements of the two-dimensional (or three-dimensional) middle layer. For example, the layer has m^2 points, where m is the number of points along one dimension.

Middle-layer elements have the same distances to the n^3 points of the three-dimensional output layer, because those points, as the image, coincide with the n^3 three-dimensional space points.

A holographic neural network has two steps that match the two steps for holographic-image making. The image is the input layer, the retina is the middle layer, and the display is the output layer, which is in the same location as the image.

7.1. Step 1: From input array to middle array

For each two-dimensional middle-layer point (i2, j2, 0), add the contributions from all three-dimensional input-array points (i1, j1, k1), a total of n^3 contributions:

Contribution to A(i2, j2, 0) from (i1, j1, k1) is A(i1, j1, k1) * sin(2 * pi * x / l).

A(i1, j1, k1) is intensity at space point.

x is distance from space point to middle-layer element: ((i2 - i1)^2 + (j2 - j1)^2 + (0 - k1)^2)^0.5

l is wavelength, shorter than unit distance but long enough not to blur interferences. For unit distance, a good wavelength is 0.01.

7.2. Step 2: From middle array to output array

For each three-dimensional output-layer point (i3, j3, k3), add the contributions from all middle-layer elements (i2, j2, 0), a total of m^2 contributions:

Contribution to A(i3, j3, k3) from (i2, j2, 0) is A(i2, j2, 0) * sin(2 * pi * x / l).

A(i2, j2, 0) is intensity at middle-layer element.

x is distance from middle-layer element to image point: ((i3 - i2)^2 + (j3 - j2)^2 + (k3 - 0)^2)^0.5. Because it is the image, distance from middle layer to output layer equals distance from input layer to middle layer.

l is wavelength. Wavelength is the same as in the first step (like using the same reference beam).

Normalize output-layer-point values to make the output pattern the same as the input pattern. Normalization typically requires dividing the sum of all contributions to each element by m^2, the number of contributing elements.

7.3. Notes

Both steps use the same sine (or cosine) function, with the same frequency and wavelength.

The second step is like the reverse of the first.

The mathematical function already has a wavelength, so there is no need for an actual reference beam.
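
The two steps can be sketched numerically. The toy sizes `n` and `m` and the uniform input intensities are assumptions; this shows the bookkeeping of the contributions, not faithful optics:

```python
import math

n, m, wavelength = 3, 4, 0.01   # toy sizes; wavelength value from the text

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Step 1: every input point (i1, j1, k1) contributes A * sin(2*pi*x/l)
# to every middle-layer element (i2, j2, 0).
inp = {(i, j, k): 1.0 for i in range(n) for j in range(n) for k in range(1, n + 1)}
mid = {}
for i2 in range(m):
    for j2 in range(m):
        mid[(i2, j2)] = sum(
            a * math.sin(2 * math.pi * dist(p, (i2, j2, 0)) / wavelength)
            for p, a in inp.items())

# Step 2: every middle-layer element contributes back to every output
# point; normalize by m^2, the number of contributing elements.
out = {}
for p in inp:
    total = sum(
        a * math.sin(2 * math.pi * dist((i2, j2, 0), p) / wavelength)
        for (i2, j2), a in mid.items())
    out[p] = total / (m * m)
```

How closely `out` reproduces `inp` depends on sampling density relative to the wavelength; at these toy sizes the sketch only demonstrates the two-step structure.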

Section about Brain Information Processing

Neuron Assemblies Can Process Information and Run Programs

Nerve neurochemical transmissions represent quantities. Neuron assemblies have excitatory, quiet, and inhibiting signals and can amplify and filter signals. Neuron-assembly electrochemical activities have spatiotemporal properties and patterns that can manipulate symbols to represent and process information. Neuron-assembly electrochemical-activity spatiotemporal patterns have large enough size, complexity, and plasticity to represent any data or algorithm [Wolfram, 1994].

Neuron-assembly computation uses neural networks, registers, processors, memories, and information channels.

1. Bits, bytes, words, and datatypes

Neuron-assembly electrochemical-activity spatiotemporal patterns and signal flows can represent bits, bytes, words, and datatypes.

1.1. Neuron circuits can store bits

Four neurons arranged in a circuit (hold-on relay) can store bits:

For AND hold-on relays, if both input switches change, and a fourth neuron sends WRITE input, a third neuron switches output. If either or both input switches do not change, the third neuron continues to have the same output.

For OR hold-on relays, if either or both input switches change, and a fourth neuron sends WRITE input, a third neuron switches output. If neither input switch changes, the third neuron continues to have the same output.

For both hold-on-relay types, if two neurons continually send the same input, and a fourth neuron does or does not send a WRITE input, a third neuron continues to have the same output.
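
A minimal simulation of the AND hold-on relay, treating the third neuron's output as one stored bit. The class and its toggle rule are one interpretation of the text, not a neural model:

```python
class HoldOnRelay:
    """Toy 1-bit store, AND variant: the stored bit switches only when
    both input switches change AND the fourth neuron sends WRITE."""
    def __init__(self):
        self.bit = 0
        self.last = (0, 0)

    def step(self, a, b, write):
        changed_a = a != self.last[0]
        changed_b = b != self.last[1]
        if write and changed_a and changed_b:   # AND of the two changes
            self.bit ^= 1                       # third neuron switches output
        self.last = (a, b)
        return self.bit
```

The OR variant would test `changed_a or changed_b` instead.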

1.2. Neuron output can represent bits

Each millisecond, neuron output is a spike (on) or no spike (off), like one bit of information for binary coding.

Over longer time intervals, neuron-output firing rate can be at baseline level or at burst level, like one bit of information for binary coding.

Multiple neurons can represent hexadecimal and any other multibit coding.

1.3. Neuron-series output can represent bytes, words, and records

Neuron series can represent bit series and so bytes (as in a Java BitSet class). Neuron-series neuron positions can represent positional notation for bits of number or string bytes.

Neuron-series series can represent byte series and so words for data, instructions, and addresses in registers, processors, and information channels. Neuron-series neuron positions can represent the sequence of bytes in words.

Neuron-series-series series can represent fields, records, sentences, and arrays. Alternatively, arrays (bitmap, bitset, bit string, bit vector) can be words split into parts.

Neuron-assembly electrochemical-activity patterns can represent data, addresses, and instructions.
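
Positional bit and byte coding can be sketched as follows (most-significant-first and big-endian orders are assumptions):

```python
def bits_to_byte(bits):
    """Positional notation: eight on/off neuron outputs read as one byte,
    most significant bit first."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

def bytes_to_word(byte_values):
    """A series of byte values read as one big-endian word."""
    return int.from_bytes(bytes(byte_values), "big")

b = bits_to_byte([0, 1, 0, 0, 0, 0, 0, 1])   # the byte 0x41
```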

1.4. Datatypes

Neuron-assembly coding makes datatypes.

Datatypes can represent integer and real numbers.

Datatypes can represent symbols, such as alphabet characters and punctuation. Datatypes can represent character strings, words, and phrases.

Datatypes can represent logical values. Examples include datatypes that represent on or off, true or false, low or high, middle or ends, or low or high importance/significance.

Datatypes can represent dates/times and time intervals.

Datatypes can represent angles and other geometric features.

Datatypes can represent events and event sequences.

Datatypes have properties, attributes, methods with parameters, and instances.

1.5. Variables and their values

Neuron-assembly electrochemical-activity patterns can have resonances that represent constants and variables. For example, neuron assemblies can combine inputs to make microphonic neuron signals [Saul and Davis, 1932]. Variables represent, store, transfer, and process data, addresses, instructions, inputs, processes, and outputs. Variable values have a datatype, such as for numbers, strings, dates/times, and points.

Variables can have multiple parameters. For example, color variables can have brightness, hue, and saturation parameters, and location variables can have three spherical or rectangular coordinates.

Computation can track variables by space location and/or time interval.

To track variables better during processes, computation declares and names/labels variables. Labels are an internal information code, not symbols, numbers, or the like.

Note: Neuron assemblies may use dynamic variables, which can vary and so have different expressions.

1.6. Data structures

From datatypes, neuron-assembly computation builds (and parses) data structures: bitmaps, pointers/links, records, tables/files, tablespaces, and databases.

As an example, a MOL file describes a molecule. Its Counts line lists the number of atoms, the number of bonds, whether the molecule is chiral, and the file format type. Its Atoms block lists the atoms and their x,y,z coordinates. Its Bonds block lists the bonds: bond multiplicity, bond stereochemistry, first atom, and second atom. Its Properties block can list aromaticity, delocalization, tautomerism, coordination, net charge, stereocenter configuration, bond orientation, and other overall structure information. Computation can parse MOL file information.
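
A simplified parse of a toy molfile-like block. Real V2000 MOL files are fixed-width with header lines and more fields; the whitespace splitting and the bond-column order (first atom, second atom, multiplicity) here are simplifications for illustration:

```python
# A toy molfile-like block for water: counts line, atoms block, bonds block.
mol_text = """\
  3  2  0  0  0
O   0.0000  0.0000  0.0000
H   0.9572  0.0000  0.0000
H  -0.2400  0.9270  0.0000
1 2 1
1 3 1
"""

lines = mol_text.splitlines()
n_atoms, n_bonds = (int(x) for x in lines[0].split()[:2])
atoms = [(p[0], tuple(float(c) for c in p[1:4]))
         for p in (ln.split() for ln in lines[1:1 + n_atoms])]
bonds = [tuple(int(x) for x in ln.split())
         for ln in lines[1 + n_atoms:1 + n_atoms + n_bonds]]
```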

2. Procedures

Neuron-assembly electrochemical-activity spatiotemporal patterns and signal flows can run procedures, which declare variables and run processes on variables using statements. Processing functions include input and output operations, if/then/else (selection) operations, for/while (repetition) operations, and instruction-sequence groups. Signal flows among neuron assemblies run programs with arithmetic, algebraic, calculus, logical, and linguistic steps.

2.1. Sums and differences

Neurons can find sums and differences and so can store, transfer, and process magnitudes.

Integrator neurons (accumulator neurons), for example in both parietal cortex and prefrontal cortex, increase output to behavioral neurons when input has a greater number of items. (When output reaches the threshold, the behavior occurs. Both threshold and accumulator processes can regulate each other. Starting time, starting output level, and rate of increase of output modulate accumulator neurons to set behavioral response time and accuracy.)
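
The accumulator model can be sketched as follows: rate of increase, starting level, and threshold are the modulating parameters described above, and all numbers are illustrative:

```python
def response_time(rate, start=0.0, threshold=1.0, dt=0.001):
    """Step the accumulator until output crosses threshold and return the
    time taken: higher rate or higher starting level means a faster
    behavioral response."""
    level, t = start, 0.0
    while level < threshold:
        level += rate * dt
        t += dt
    return t
```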

2.2. Byte manipulation

Neuron assemblies can manipulate the bits in bytes:

Starting with one byte, shift all bits left or right, with wrap around.

Starting with one byte, make all bits their opposite (unary complement).

Starting with two bytes, perform bitwise AND, EXCLUSIVE OR, or INCLUSIVE OR.

2.3. Logical operations

Each millisecond, neurons can represent True by potential above threshold and False by potential below threshold, or neuron output can be no-spike for False or spike for True. Over longer time intervals, neuron outputs can be baseline spiking for False or burst spiking for True.

Two neurons can work together to represent NOT (negation). When the output neuron is in the False state, the input neuron can excite it. When the output neuron is in the True state, the input neuron can inhibit it. Note: If statement is true, negation is false. If statement is false, negation is true.

Two neurons can send either baseline input or burst input to a third neuron, which has only baseline or burst output:

For AND, third neuron outputs baseline unless both inputs are burst. The third neuron has high threshold, so it requires both inputs to pass threshold, and neither input alone can make it pass. Note: The only true conjunction is when both statements are true.

For OR, third neuron outputs burst unless both inputs are baseline. The third neuron has low threshold, so either input can make it pass threshold, and only no input can make it not pass. Note: The only false disjunction is when both statements are false.

Conditional statement IF p THEN q is equivalent to NOT(p AND NOT q), that is, p NAND (NOT q). A third neuron can compute NAND (Sheffer stroke element): it outputs burst unless both inputs are burst. The third neuron bursts by default, and each input neuron inhibits it; either input alone is too weak to pull it below threshold, but both inputs together are strong enough. Note: The only false conditional is when the first statement is true and the second is false.

Combining Sheffer stroke elements can make OR element, AND element, and NOT element, so multiple Sheffer stroke elements can make all logic circuits.
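
Sheffer-stroke universality can be sketched with boolean functions standing in for baseline/burst outputs:

```python
def nand(p, q):
    """Sheffer stroke: False only when both inputs are True
    (burst output unless both inputs are burst)."""
    return not (p and q)

def NOT(p):        return nand(p, p)
def AND(p, q):     return NOT(nand(p, q))
def OR(p, q):      return nand(NOT(p), NOT(q))
def IMPLIES(p, q): return nand(p, NOT(q))   # NOT(p AND NOT q)
```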

2.4. Timing

Neuron-assembly electrochemical-activity patterns can have oscillations that represent clocks and timing frequencies. They can step through address reading, data input, instruction running, and data output to run programs.

2.5. Computation steps

Neuron-assembly computation reads input into a variable; performs an input, output, mathematical, logical, or string operation/calculation; and writes output to a variable.

Algorithms may have conditionals (if-then), loops (for, while), routing (of inputs to processors, for output), and chaining (sending output to be input to another process). Computation uses reverberating neural networks.

Computing analyzes, splits, and differentiates to discriminate details, find new properties, and distinguish categories. It synthesizes, joins, and integrates to construct categories, associate details, and build data and programming structures. It performs statistical processes, finds principal components, and transforms coordinates.

2.6. Programs

Neuron-assembly electrochemical-activity patterns can function as operating systems, input-output operations, central processor units, and memory management.

3. Machine, assembly, and high-level languages

At first, neuron assemblies use binary or n-ary code to represent data/input/output, instructions, and addresses (compare to computer machine language). Neuron assemblies transform input symbol strings into output symbol strings that lead to actions.

Later, neuron assemblies group binary or n-ary code sequences into words for constants, variables, instructions, and addresses (compare to computer assembly language). Neuron-assembly virtual machines transform input number, string, array, date, and logical datatypes into output number, string, array, date, and logical datatypes, for perception and thinking.

Later, neuron assemblies group datatypes into statements using syntax (compare to computer high-level language, such as structured procedure-oriented programming). Neuron-assembly virtual machines transform input instruction, address, and statement collections into output instruction, address, and statement collections, for language.

Later, neuron assemblies group statements into objects and methods for features, objects, scenes, and space (compare to computer object-oriented programming). Neuron-assembly virtual machines transform input objects and methods into output objects and methods, for meaning.

Finally, neuron-assembly virtual machines transform objects into three-dimensional structures and space (compare to computer vector graphics).

4. Visual-information-processing levels

Neuron assemblies transform visual information at three levels:

At the computational level [Marr, 1982], input is frequency-intensity-distribution information from space locations, and output is surfaces, with brightness and color, at distances along radial directions of a coordinate system around body.

At the algorithmic level, procedures transform visual information into brightness, color, and spatial information.

At the physical level, many neuron-assembly parallel, nested, and interlocking electrochemical processes represent conditionals and other computation.

5. Object-oriented programming

Neuron-assembly computation can run object-oriented programming, which declares objects that form a hierarchy. Objects have the same methods/functions as in procedure-oriented programming. For perception, object-oriented programming represents space, with a viewpoint, and represents brightnesses with color.

Section about Human Vision Physiology for Color and Brightness

Vision Reception and Vision Physiology up to Opponent Processes

The visual system has retinal rods, cones, and neurons, and lateral-geniculate-nucleus and visual-cortex neuron assemblies and topographic maps. Neurons for the same spatial direction have the same relative position in the retina, thalamus, occipital lobe, and association cortex.

Vision physiology calculates brightnesses and colors for all directions from eye and assigns them a distance.

1. Visible light and its properties

Vision can see electromagnetic waves (visible light) whose frequency is in a narrow range, above infrared light and below ultraviolet light.

Electromagnetic radiation is a flow that has power (energy per second) and intensity (power per unit cross-sectional area).

Radiant intensity, such as from infrared waves, is power per solid angle. Visible-light waves have radiant intensity called luminous intensity.

Electromagnetic radiation has a frequency-intensity distribution, in which each frequency has intensity. A frequency-intensity distribution has weighted average frequency. Over visible light waves, weighted average frequency is the basis of hue, weighted frequency standard deviation is the basis of saturation, and total intensity is the square of color brightness.
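
The three statistics can be computed from a frequency-intensity distribution, following the text's definitions (frequencies are in arbitrary units here):

```python
import math

def hue_sat_brightness(spectrum):
    """Per the text's model: weighted mean frequency is the basis of hue,
    weighted frequency standard deviation is the basis of saturation, and
    the square root of total intensity is the color brightness."""
    total = sum(spectrum.values())
    mean = sum(f * i for f, i in spectrum.items()) / total
    var = sum(i * (f - mean) ** 2 for f, i in spectrum.items()) / total
    return mean, math.sqrt(var), math.sqrt(total)
```

A single-frequency spectrum gives zero spread (fully saturated); a mix of two frequencies gives the same mean with nonzero spread.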

Two light sources with different frequency-intensity distributions combine to make a new frequency-intensity distribution, so vision is a synthetic sense, not an analytic sense. Two brightnesses/colors mix to make an intermediate color.

Note: All sight-line points between light source/surface and retina have no net intensity, because electromagnetic waves interfere and cancel, so no information comes from those points.

2. Light sources

Light has sources. Illumination can come from seen and unseen surfaces. Light, heat, electric, electronic, and/or mechanical energy make atoms, molecules, crystals, and fluids stretch, compress, twist, vibrate, and rotate, and so move charges and change electric-charge structure. For example, dyes, pigments, transition-metal complexes, and compounds with bond conjugation have free charges, charges in chemical bonds, ions, electric dipoles with polarities, and electron orbitals.

Absorbed energy moves electrons and other charges to higher energy states (electron-orbital transitions, ionization, and charge transfer). When charges fall back to lower energy states, visible light emits.

Electromagnetic waves come from light sources or surface reflections.

2.1. Emission/radiation

Surface emission/radiation is light intensity coming from a light source. Processes include incandescence, fluorescence, phosphorescence, neon and argon light, light amplification by stimulated emission (laser), diode light emission, and crystal color centers.

Physical surfaces can be luminous and emit visible light.

Light sources have no scattering and no absorbance.

2.2. Reflection, pigment scattering, and absorption

Light has surface reflections/radiations, with scattering and absorbance. Reflections can have diffraction, interference, and/or polarization.

Physical surfaces can reflect visible light. Spectra depend on both illumination spectrum and surface reflectivity (reflectances at wavelengths).

Pigments have scattering and absorbance. Reflectance increases with more scattering and decreases with more absorbance:

Black pigment scatters least and absorbs most. As value falls, black-to-paint ratio increases as a power, not linearly.

Blue, green, and cyan pigments have low scattering (and so higher tinting strength) and variable absorbance.

Yellow, red, and orange pigments have high scattering (and so lower tinting strength) and variable absorbance.

White pigment scatters most and absorbs least. As value rises, white-to-paint ratio increases as a power, not linearly.

2.3. Diffuse and specular reflection

Surfaces can have diffuse or specular reflection.

Diffuse reflection means that a surface reflects light in all directions. Such surfaces are matte.

Specular reflection means that a surface reflects light in one direction. Such surfaces are glossy/shiny/glittering. For example, light at a large angle to vertical has specular reflection.

3. Light paths

Light has reflection from opaque surfaces, with diffraction (as in liquid crystals), interference, and/or polarization. Surface reflectance is ratio of outgoing light intensity to incoming light intensity.

Light has transmission through transparent or translucent materials, with scattering and/or polarization.

Light has refraction through transparent or translucent materials, with chromatic (and other) aberration and/or polarization.

Scenes have shadows and occlusions.

3.1. Illumination angle and viewing angle

Illumination angle and viewing angle affect perception of light, brightness, and color.

Surface illumination is light intensity coming into the surface. Visual scenes have illumination from seen and unseen surfaces.

Colors differ with illumination angle and viewing angle.

3.2. Transparency

Surfaces and materials can be opaque, translucent, or transparent. Opaque means that no light can come through from behind. Translucent means that some light can come through from behind. Transparent means that most light can come through from behind.

Visible materials have opaque (or translucent) surfaces. (Because they have no surfaces, clear air and flawless glass are transparent.)

Through transparent or translucent materials, light has transmission, with scattering and/or polarization, and refraction, with chromatic aberration and/or polarization. For example, applying a thin or low-density green paint over white paint allows white to show through, because green has only a narrow range of light wavelengths. Surface black, grays, and white must be opaque, because they have all wavelengths. For example, applying white, gray, or black paint over green paint blocks green from coming through.

Opaque surfaces have no surface depth. Surfaces can have some depth, so that they can have some translucency, such as dark and deep surfaces.

Colorless materials allow all light wavelengths to go through. Surfaces cannot be colorless, because then people cannot see them.

3.3. Depth

Transparent and translucent surfaces and materials can have depth.

Note: Materials have depth and can have interior reflection. For example, air can be clear, hazy, or foggy.

3.4. Physical state

Objects have phase/state (solid, liquid, gas) and surface textures.

3.5. Color associations

Objects have color associations. For example, blue comes from sky and water. Yellow comes from sunlight, skin, and sand. Red comes from fruit, blood, and meat. Green comes from leaves and grass. Black comes from night. White comes from daylight.

4. Cone and rod light reception

Rod and cone retinal-receptor chemical processes have transition states whose energies are the same as those of visible-light-wavelength photons.

After photon absorption, rods and cones change membrane potential. Rods and cones integrate intensities over a frequency range. Rod and cone outputs vary directly with logarithm of light intensity.

4.1. Cone processing

Cones absorb photons of a specific wavelength with highest probability, and absorb photons at lower and higher wavelengths with lower probabilities:

Short-wavelength cone: Maximum sensitivity is at 419 nm. Peak runs from 420 nm to 440 nm. Curve is approximately symmetric around 419 nm. The short-wavelength cone type has non-zero output at every light wavelength from 380 nm to 530 nm.

Middle-wavelength cone: Maximum sensitivity is at 531 nm. Peak runs from 534 nm to 545 nm. Curve is approximately symmetric around 531 nm. The middle-wavelength cone type has non-zero output at every light wavelength from 400 nm to 660 nm.

Long-wavelength cone: Maximum sensitivity is at 558 nm. Peak runs from 564 nm to 580 nm. Curve is approximately symmetric around 558 nm. The long-wavelength cone type has non-zero output at every light wavelength from 400 nm to 700 nm.

Three cones are necessary, to cover the light spectrum. Cones require brighter light than rods.

Cones take a logarithm of the integration/sum over the emitted or reflected frequency-intensity spectrum, with each frequency having a weight (sensitivity curve). Because cone output depends on a logarithm, output is not linear. For example, ten times the input intensity may only increase output by 1.5 times.
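
The log-of-weighted-sum model can be sketched as follows; the sensitivity values are invented for illustration:

```python
import math

def cone_output(spectrum, sensitivity):
    """Per the text's model: the logarithm of the sensitivity-weighted
    sum of intensity over the frequency-intensity spectrum."""
    total = sum(sensitivity.get(f, 0.0) * i for f, i in spectrum.items())
    return math.log(total)

# Invented sensitivity curve; tenfold intensity adds only log(10) to output.
sens = {500: 0.2, 550: 1.0, 600: 0.4}
dim = cone_output({550: 10.0}, sens)
bright = cone_output({550: 100.0}, sens)
```

Because of the logarithm, ten times the intensity raises the output by a fixed additive amount, far less than ten times.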

4.2. Color and cone outputs

For blues, short-wavelength cone has high output, middle-wavelength cone has low output, and long-wavelength cone has no output.

For cyans, short-wavelength cone has output, middle-wavelength cone has output, and long-wavelength cone has low output.

For greens, short-wavelength cone has low output, middle-wavelength cone has high output, and long-wavelength cone has low output.

For chartreuses, short-wavelength cone has low output, middle-wavelength cone has output, and long-wavelength cone has output.

For yellows, short-wavelength cone has low output, middle-wavelength cone has output, and long-wavelength cone has output.

For oranges, short-wavelength cone has low output, middle-wavelength cone has output, and long-wavelength cone has output.

For reds, short-wavelength cone has low output, middle-wavelength cone has low output, and long-wavelength cone has high output.

For black, short-wavelength cone has low output, middle-wavelength cone has low output, and long-wavelength cone has low output.

For middle gray, short-wavelength cone has half output, middle-wavelength cone has half output, and long-wavelength cone has half output.

For white, short-wavelength cone has high output, middle-wavelength cone has high output, and long-wavelength cone has high output.

4.3. Univariance and light spectrum

Photoreceptors have a wavelength at which they have highest probability of energy absorption and lower and higher wavelengths at which they have lower probability of energy absorption. A higher-intensity low-probability wavelength can make same total absorption as a lower-intensity high-probability wavelength (univariance problem). For example, if frequency A has probability 1% and intensity 2, and frequency B has probability 2% and intensity 1, total photoreceptor absorption is the same.

There are three cones, and every wavelength affects all three cones. Three wavelengths can have intensities at which all three photoreceptors have the same absorption as one or more wavelengths at other intensities. (Using three wavelengths always makes unsaturated color.)

Note: Two different wavelengths can never have intensities at which all three cones have the same absorption, so that case has no univariance problem. Using two wavelengths always makes saturated color.
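
The text's numeric univariance example, computed directly:

```python
def absorption(probability, intensity):
    """Total photon catch for one wavelength: absorption probability
    times light intensity. One receptor reports only this one number."""
    return probability * intensity

# The text's example: the photoreceptor cannot tell these two apart.
catch_a = absorption(0.01, 2.0)   # frequency A: probability 1%, intensity 2
catch_b = absorption(0.02, 1.0)   # frequency B: probability 2%, intensity 1
```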

4.4. Rod processing

Retinal rod light receptors have highest sensitivity at wavelength 498 nm (blue-green). The wavelength range is 440 nm to 660 nm. Rods take a logarithm of the integration/sum over the emitted or reflected frequency-intensity spectrum, with each frequency having a weight (sensitivity curve), because rods absorb photons of middle wavelengths with highest probability, and absorb photons at lower and higher wavelengths with lower probabilities.

Rod cells evolved after cone cells (and opponent processes). Rods are more sensitive than cones. For light intensity below cone threshold, only rod cells measure light intensity. Rods have no information about hue or saturation, only brightness.

Rods send to rod bipolar cells and amacrine cells, which then send to ganglion cells.

4.5. Alternative receptor types and numbers

Vision has three receptor types, which must cover the spectrum with even distribution. Therefore, because the spectrum is narrow, alternative receptor types can only differ somewhat in relative spectrum peak and so must be almost the same.

Note: Because receptor peaks are too narrow, only one or two receptor types cannot cover the spectrum. Vision can have four receptor types, but, because the spectrum is narrow, two must have close spectrum peaks and so be almost the same.

5. Adaptation

Visual receptors adapt within a few seconds. Cortical processes also adapt.

Vision adapts to illumination intensity changes, darker and lighter (simultaneous brightness contrast).

Afterimages are temporal effects of adaptation and seem to not be on surfaces [Wilson and Brocklebank, 1955] [Brown, 1965].

6. Accommodation, focus, and resolution

Occipital-lobe peristriate area 19 sends to Edinger-Westphal nucleus, which sends, along third cranial nerve, to:

Ciliary muscle to control lens accommodation

Sphincter pupillae muscle to control pupil constriction

Medial rectus muscle to converge eyes

Eye can focus on objects between 6 cm and 6 meters away. (At distances greater than 6 meters, light rays are essentially parallel.)

Rods and cones have diameters 4 to 100 micrometers. Dim light excites only rods and has less acuity. Bright light excites cones and has more acuity. Eye has resolution of 10 seconds of arc at best. Normal vision has visual acuity one minute of arc.

Eye properties reduce resolution. Eye fluid scatters light. Eye lens has chromatic aberration. Retina has blood vessels, macula, fovea, and blind spot. Eye vibrates, drifts, and has saccades.

7. Melanopsin-containing retinal ganglion cells

Retina inner part has intrinsically photosensitive retinal ganglion cells (melanopsin-containing retinal ganglion cells) that measure overall light intensity, approximately as intensity square root. They are 1% of retinal ganglion cells. They have melanopsin, whose absorption peak is at 480 nm, so they directly detect (with slow response) light intensity centered on that wavelength. They also receive from rods, which they use at low to medium light levels for light intensity centered on wavelength 500 nm.

They have three main functions:

Pupillary light reflex: Their axons go to upper-midbrain olivary pretectal nucleus, which sends to Edinger-Westphal nucleus, which sends, along oculomotor nerve, parasympathetically to ciliary ganglion, which sends to iris sphincter.

Melatonin and sleep: Their axons go to hypothalamus ventrolateral preoptic nucleus and regulate melatonin release and sleep.

Circadian rhythms: Their axons go to hypothalamus suprachiasmatic nuclei and thalamus intergeniculate leaflet and regulate circadian rhythms.

Attention, awareness, and perception affect these activities.

Amacrine cells inhibit them.

They contribute little to location, color, or object perception.

8. Constancies

Colors, sizes, and shapes stay relatively constant.

Surface hues stay constant over illuminance changes (simultaneous chromatic contrast), because vision adapts to illumination chromaticity changes by changing relative cone sensitivities in retina and visual cortex [Kries, 1895] [Lamb, 1985].

9. Motion detection

Retina outer part sends to occipital lobe to detect motion.

10. Blinking

Blinking occurs automatically to moisten cornea.

Reflex eyelid blinking occurs upon cornea stimulation (corneal reflex) (blink reflex). Cornea sends to pons, which sends along facial nerve to eyelid muscle.

Reflex eyelid blinking occurs to bright light (optical reflex) or to approaching objects. Occipital lobe mediates optical reflex.

Vision-Physiology Opponent Processes

Retinal opponent processes compare cone outputs (which have only positive values) to find differences [DeValois and DeValois, 1975] [Hurvich, 1981] [Hurvich and Jameson, 1955] [Hurvich and Jameson, 1956] [Jameson and Hurvich, 1955] [Jameson and Hurvich, 1956] [Svaetichin and MacNichol, 1958]. Opponent processes contrast two states.

Retinal-ganglion-cell electrochemical outputs travel along trackable "labeled lines" to specific lateral-geniculate-nucleus and vision-cortex areas, which have opponent and non-opponent processes. Ganglion-cell output is excitatory (never inhibitory).

Each retinal location has three pairs of opponent-process ganglion-cell outputs: spot luminance relative to surround luminance, short-wavelength intensity relative to long-and-middle-wavelength intensity, short-and-long-wavelength intensity relative to middle-wavelength intensity, and their inverses. The opposing (but not necessarily opposite) states name the opponency, so the opponencies are white-black/black-white, yellow-blue/blue-yellow, and red-green/green-red. The two opponencies of an opponency pair are linear transformations of each other. Pairs are necessary because output is positive. Onset and offset have equal representation. The three opponency pairs are orthogonal/independent.
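
A minimal numeric sketch of the three pairs, using only the "oversimplified" input functions given in the sections below ((L + M) / 2, L - M, and ((L + M) / 2) - S); real retinal weights differ:

```python
def opponencies(L: float, M: float, S: float) -> dict:
    """Compute the three oversimplified opponency values from cone outputs.
    Cone outputs are always positive; each opponency value can be
    negative, zero, or positive."""
    return {
        "white-black": (L + M) / 2,      # achromatic (luminance) signal
        "red-green":   L - M,            # + toward red, - toward green
        "yellow-blue": (L + M) / 2 - S,  # + toward yellow, - toward blue
    }

# A stimulus exciting mostly the long-wavelength cone leans red and yellow:
print(opponencies(L=0.9, M=0.3, S=0.1))
```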

Every visible-light stimulus typically affects all three cones, so opponent processes are necessary to differentiate colors.

Cone cells and opponent processes evolved at the same time (before rod cells). Opponent processes aid hue discrimination and boundary demarcation, so vision favors opponent processes.

Note: For each and all surfaces, cortical processes put the three opponency pairs together to make surface color, brightness, hue, and saturation. Each different color has a unique set of values of the three opponencies.

1. Retina and cortex physiology

Cones (and rods) connect to bipolar cells. All vertebrate retinas have bipolar cells, which have ON-center/OFF-surround or OFF-center/ON-surround receptive fields. ON-center/OFF-surround receptive fields have positive input from center cones and negative input from annulus cones, and so measure onset. OFF-center/ON-surround receptive fields have positive input from annulus cones and negative input from center cones, and so measure offset. Onset and offset have equal representation. Bipolar cells use graded potentials to excite or inhibit ganglion cells.

Cones also connect to horizontal cells, which inhibit ON-center/OFF-surround and/or OFF-center/ON-surround bipolar cells. (Rods also connect to amacrine cells, which inhibit ganglion cells.)

Ganglion cells have ON and OFF opponent-process receptive-field pairs. Ganglion-cell-center size defines the smallest perceptible visual angle. Each cone affects several ganglion cells (related to "filling-in"), and each ganglion cell gets information from several cones. Ganglion cells send action potentials (with 200-fold firing range), with high baseline rate, to lateral geniculate nucleus.

Lateral-geniculate-nucleus neurons have ON and OFF opponent-process pairs and send to visual cortex, whose cells also have ON and OFF opponent-process pairs. The two types are approximately equal in number, so onset and offset have equal representation.

1.1. Regulation

Opponent-process inputs have weights, which can regulate color, region, and boundary detection.

1.2. Lateral inhibition

Cells can inhibit neighboring cells. Lateral inhibition increases contrast, suppresses noise, sharpens boundaries, and contracts (and distinguishes) regions.
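
A toy one-dimensional sketch of how lateral inhibition sharpens a boundary (the inhibition strength k = 0.4 is an arbitrary illustrative choice):

```python
def lateral_inhibit(signal, k=0.4):
    """Each cell's output is its input minus k times the mean of its two
    neighbors (edge cells reuse themselves as the missing neighbor) --
    a toy model of lateral inhibition."""
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        out.append(signal[i] - k * (left + right) / 2)
    return out

# A step edge: contrast at the boundary is exaggerated (Mach-band-like):
# the cell just inside the dark side dips below its neighbors, and the
# cell just inside the bright side peaks above its neighbors.
step = [1, 1, 1, 1, 5, 5, 5, 5]
print([round(v, 2) for v in lateral_inhibit(step)])
```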

1.3. Spreading activation

Cells can excite neighboring cells. Spreading activation reduces contrast, fills in, blurs boundaries, and expands (and unifies) regions.

1.4. Ganglion-cell input and output

Opponencies have a positive input and a negative input. Opponency-ganglion-cell input is a function of a difference, and so varies continuously from maximum negative to zero to maximum positive.

Opponency-ganglion-cell-output action-potential rate varies continuously from baseline (which means no positive output) to maximum:

If the negative input is maximum and the positive input is zero, so opponent-process input difference is maximum negative, then ganglion-cell output is baseline.

If the negative input is maximum and the positive input is half, the negative input is three-quarters and the positive input is one-quarter, or the negative input is half and the positive input is zero, so opponent-process input difference is negative, then ganglion-cell output is baseline.

If positive input is maximum and negative input is maximum, positive input is half and negative input is half, or positive input is zero and negative input is zero, so opponent-process-input difference is zero, then ganglion-cell output is baseline.

If positive input is maximum and negative input is half, positive input is three-quarters and negative input is one-quarter, or positive input is half and negative input is zero, so opponent-process-input difference is positive, ganglion-cell output is above baseline.

If positive input is maximum and negative input is zero, so opponent-process input difference is maximum positive, then ganglion-cell output is maximum.

Ganglion-cell output can have low baseline rate and so large range up, medium baseline rate and so medium range up, or high baseline rate and so small range up. Opponency-ganglion-cell outputs typically have low baseline rate.
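
The output rules above can be sketched as a rectified firing-rate function (the baseline and maximum rates are illustrative numbers, not measurements):

```python
BASELINE = 5.0    # illustrative spikes/s (low baseline, large range up)
MAXIMUM = 200.0   # illustrative maximum firing rate

def ganglion_rate(pos_input: float, neg_input: float) -> float:
    """Firing rate for an opponency ganglion cell: baseline when the input
    difference is zero or negative, rising linearly to maximum when the
    difference is maximum positive (inputs in [0, 1])."""
    diff = pos_input - neg_input
    if diff <= 0:
        return BASELINE
    return BASELINE + diff * (MAXIMUM - BASELINE)

print(ganglion_rate(1.0, 1.0))  # zero difference -> baseline
print(ganglion_rate(0.0, 1.0))  # maximum negative difference -> baseline
print(ganglion_rate(1.0, 0.0))  # maximum positive difference -> maximum
```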

2. White-black/black-white opponent-process pair

For these ganglion cells, both the center and the surround have the same input function, whose general form is C * (l*L + m*M + s*S + K), where C, l, m, s, and K are constants. An example is C * (2*L + M + 0.5*S + K). An oversimplified function is (L + M) / 2.

Input uses mostly long-wavelength and middle-wavelength cones, because ambient light has wavelengths near the middle of the visible spectrum. Input uses more than one cone to be more accurate. Therefore, each frequency has a different weight, with higher weights for long and medium wavelengths and lower weights for short wavelengths:

380 nm: 0.

415 nm to 425 nm: one-eighth of highest.

440 nm to 450 nm: one-quarter of highest.

505 nm to 515 nm: half of highest.

535 nm to 545 nm: three-quarters of highest.

550 nm to 560 nm: highest (photopic peak).

570 nm to 580 nm: seven-eighths of highest.

605 nm to 615 nm: half of highest.

630 nm to 640 nm: three-eighths of highest.

700 nm to 740 nm: 0.

Maximum sensitivity is at yellow wavelengths, followed by green wavelengths, red wavelengths, and blue wavelengths. The sensitivity curve is approximately symmetric from 490 nm to 620 nm.

Because this opponency pair depends on cone output, which depends on a logarithm, ganglion-cell output is not linear.

This opponency pair is a function of the difference between input values from a surface and from its surrounding surface [Jameson, 1985] and measures relative luminosity between center and surround. For ON-center/OFF-surround:

If center has lower luminosity than surround, difference is negative, and ganglion-cell output is baseline.

If center has higher luminosity than surround, difference is positive, and ganglion-cell output is above baseline.

If center has same luminosity as surround, difference is zero, and ganglion-cell output is baseline.

For OFF-center/ON-surround:

If center has lower luminosity than surround, difference is positive, and ganglion-cell output is above baseline.

If center has higher luminosity than surround, difference is negative, and ganglion-cell output is baseline.

If center has same luminosity as surround, difference is zero, and ganglion-cell output is baseline.

If ON-center/OFF-surround has low output and OFF-center/ON-surround has high output, higher-level vision makes surface dark. If ON-center/OFF-surround has high output and OFF-center/ON-surround has low output, higher-level vision makes surface light.

If center and surround have the same weight, doubling illumination doubles the difference, and halving illumination halves the difference. Higher illumination allows better contrast. (The quantity subtracted is never more than 20% of the center quantity, so only large differences in illumination allow significantly better contrast.) Scenes with high contrast have high overall illumination. Scenes with low contrast have low overall illumination.

White-black opponency measures sharpness of luminance contrast, gradient, and boundary between a spot and an adjacent surface, and black-white opponency measures the inverse. These opponencies distinguish one surface from another to make discriminations or categories.

This opponency pair has spatial information, contrasting center and surround. Higher-level vision uses this information to find boundaries between space regions.

This opponency pair (achromatic opponency) does not differentiate by different wavelengths, has no information about light frequency or frequency distribution, and does not compare hues or saturations. (Lightness and darkness have no hue information.)

Note: Similar ganglion cells have positive input from current time and negative input from a previous time and detect changes in intensity over time and so help detect motions through space.

3. Red-green/green-red opponent-process pair

Ganglion-cell-input general function is C * (l*L + m*M + s*S + K), where C, l, m, s, and K are constants. An example is L - 1.09*M + 0.09*S. An oversimplified function is L - M. Therefore, these ganglion cells have positive input from one cone type/bipolar cell receiving from surface, and have negative input from a different cone type/bipolar cell receiving from same surface [Jameson, 1985].

Each frequency has a different weight, with positive weights for long wavelengths and negative weights for medium wavelengths:

380 nm: 0.

440 nm to 445 nm: smaller local maximum. Curve is symmetric between 425 nm and 460 nm.

470 nm to 475 nm: 0.

520 nm to 530 nm: local minimum. Curve is symmetric between 475 nm and 575 nm.

570 nm to 575 nm: 0.

605 nm to 615 nm: larger local maximum. Curve is symmetric between 580 nm and 640 nm.

700 nm to 740 nm: 0.

Because this opponency pair depends on cone output, which depends on a logarithm, output is not linear.

This opponency pair measures the difference between surface long+short-wavelength and middle-wavelength inputs (total high+low-frequency intensity and total middle-frequency intensity). For ON:

If middle-wavelength light is more than long+short-wavelength light, difference is negative, and ganglion-cell output is baseline.

If long+short-wavelength light is more than middle-wavelength light, difference is positive, and ganglion-cell output is above baseline.

If long+short-wavelength light equals middle-wavelength light, difference is zero, and ganglion-cell output is baseline.

For OFF:

If middle-wavelength light is less than long+short-wavelength light, difference is negative, and ganglion-cell output is baseline.

If long+short-wavelength light is less than middle-wavelength light, difference is positive, and ganglion-cell output is above baseline.

If long+short-wavelength light equals middle-wavelength light, difference is zero, and ganglion-cell output is baseline.

Chromatic opponencies find difference between hue intensities/saturations at a spot (simultaneous color contrast). Red-green and green-red opponencies measure hue intensity relative to its complementary/opposite hue intensity. If ON has low output and OFF has high output, higher-level vision makes surface green. If ON has high output and OFF has low output, higher-level vision makes surface red. Equal opposite inputs cancel each other to make no hue. There is no reddish-green or greenish-red. Net hue appears either red or green, with no mixing. The hues cannot mix, because they are opposites about different things. Net hue (saturation) and no-hue (unsaturation) always add to 100%. Note: Red plus green appears yellow because both excite yellow, and red and green cancel.

This opponency pair does not compare two surfaces and has no spatial information. It has no information about brightness, only about spot hue and saturation. It is independent of the yellow-blue opponency.

4. Yellow-blue/blue-yellow opponent-process pair

Ganglion-cell-input general function is C * (l*L + m*M + s*S + K), where C, l, m, s, and K are constants. An example is 0.11 * (L + M - 2*S). An oversimplified function is ((L + M) / 2) - S. Therefore, these ganglion cells have positive input from two cone types/bipolar cells receiving from surface, and have negative input from a different cone type/bipolar cell receiving from same surface [Jameson, 1985].

Each frequency has a different weight, with positive weights for long and medium wavelengths and negative weights for short wavelengths:

380 nm: 0.

440 nm to 450 nm: local minimum. Curve is symmetric between 420 nm and 470 nm.

490 nm to 495 nm: 0.

550 nm to 560 nm: local maximum. Curve is symmetric between 530 nm and 580 nm.

675 nm to 700 nm: 0.

Because this opponency pair depends on cone output, which depends on a logarithm, output is not linear.

This opponency pair measures the difference between surface short-wavelength and long+middle-wavelength inputs (total low-frequency intensity and total high-frequency intensity). For ON:

If short-wavelength light is more than long+middle-wavelength light, difference is negative, and ganglion-cell output is baseline.

If long+middle-wavelength light is more than short-wavelength light, difference is positive, and ganglion-cell output is above baseline.

If long+middle-wavelength light equals short-wavelength light, difference is zero, and ganglion-cell output is baseline.

For OFF:

If short-wavelength light is less than long+middle-wavelength light, difference is negative, and ganglion-cell output is baseline.

If long+middle-wavelength light is less than short-wavelength light, difference is positive, and ganglion-cell output is above baseline.

If long+middle-wavelength light equals short-wavelength light, difference is zero, and ganglion-cell output is baseline.

Chromatic opponencies find difference between hue intensities/saturations at a spot (simultaneous color contrast). Yellow-blue and blue-yellow opponencies measure hue intensity relative to its complementary/opposite hue intensity. If ON has low output and OFF has high output, higher-level vision makes surface blue. If ON has high output and OFF has low output, higher-level vision makes surface yellow. Equal opposite inputs cancel each other to make no hue. Net hue appears either blue or yellow, with no mixing. The hues cannot mix, because they are opposites about different things. There is no bluish-yellow or yellowish-blue, because blue and yellow light mix to make white (and so are complementary colors). The percentages for no hue (unsaturation) and net hue (saturation) add to 100%.

This opponency pair does not compare center and surround and has no spatial information. It has no information about brightness, only about spot hue and saturation. It is independent of the red-green opponency.

5. Light wavelengths have red-green, yellow-blue, and white-black opponency values

Light wavelengths have yellow-blue, red-green, and white-black opponency values [Jameson, 1985]:

380 nm: Red-green is zero, and yellow-blue is zero.

440 nm to 450 nm (violet): Red-green is smaller maximum, and yellow-blue is minimum.

470 nm to 475 nm (unique blue): Red-green is zero, and yellow-blue is half minimum.

485 nm to 490 nm (cyan): Red-green is one-quarter minimum, and yellow-blue is one-quarter minimum, so approximately equal.

490 nm to 495 nm (unique green): Red-green is half minimum, and yellow-blue is zero.

520 nm to 530 nm (green): Red-green is minimum, and yellow-blue is seven-eighths maximum.

550 nm to 560 nm (chartreuse): Red-green is three-quarters minimum, and yellow-blue is maximum.

560 nm to 565 nm (yellow): Red-green is half minimum, and yellow-blue is maximum.

570 nm to 575 nm (unique yellow): Red-green is zero, and yellow-blue is three-quarters maximum.

590 nm to 595 nm (orange): Red-green is five-eighths maximum, and yellow-blue is two-thirds maximum, so approximately equal.

605 nm to 615 nm (red beginning): Red-green is higher maximum, and yellow-blue is three-eighths maximum. There is no unique red because, at wavelength 600 nm, red-green difference is less than maximum, and yellow-blue difference is higher than at wavelength 610 nm; at wavelength 610 nm, red-green difference is maximum, but yellow-blue difference is not zero; and, at wavelength 620 nm, red-green difference is less than maximum, and yellow-blue difference is still not zero.

670 nm to 680 nm (red to maroon): Red-green is one-eighth maximum, and yellow-blue is zero.

700 nm to 740 nm: Red-green is zero, and yellow-blue is zero.

Note: The three cone sensitivity curves cross at 465 nm, 475 nm, and 545 nm.

A hue at which either the red-green or the yellow-blue opponency value is zero is called a "unique hue". Note: Unique hues are for maximum saturation and brightness. At lower saturation and/or lower brightness, unique hues make a line through color space. The unique-hue red line could come from a cortical process that combines weighted L - M and differently weighted S - (L + M). The unique-hue green line could come from a different cortical process that combines weighted L - M and differently weighted S - (L + M). The unique-hue blue line and yellow line could come from a third cortical process that combines weighted L - M and similarly weighted S - (L + M).

Note: The color categories violet, blue, azure, cyan, spring green, green, chartreuse, yellow, orange, and red do not have even spacing over the spectrum. Blue, green, and yellow have even spacing over the spectrum, but violets and reds have wide ranges at spectrum ends.

5.1. Colors and opponency values

Colors depend on all three opponent processes:

For greens, the yellow-blue opponency crosses in the middle, with low yellow and low blue, with narrow wavelength range. Red-green input is negative. White-black output is high positive.

For reds, the yellow-blue opponency tapers off at the ends, with low yellow and low blue, with broad wavelength range. Red-green input is positive. White-black output is medium positive.

For blues, the red-green opponency crosses in the middle, with low red and low green, with narrow wavelength range. Yellow-blue input is negative. White-black output is low positive.

For yellows, the red-green opponency crosses in the middle, with low red and low green, with narrow wavelength range. Yellow-blue input is positive. White-black output is very high positive.

For blacks, the red-green opponency has no red and no green, and yellow-blue opponency has no blue and no yellow. White-black output is very negative.

For grays, the red-green opponency has equal red and green, and yellow-blue opponency has equal blue and yellow. White-black output is near zero.

For whites, the red-green opponency has equally high red and green, and yellow-blue opponency has equally high blue and yellow. White-black output is very positive.
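
A toy classifier reflecting the sign patterns in the list above (the thresholds 0.1 and 0.5 are arbitrary illustrative choices, and hue names are joined crudely rather than mapped to mixture names like orange or cyan):

```python
def rough_color(rg: float, yb: float, wb: float) -> str:
    """Crude color naming from three opponency values: rg (+ red, - green),
    yb (+ yellow, - blue), wb (achromatic output, + light, - dark)."""
    if abs(rg) < 0.1 and abs(yb) < 0.1:  # no net hue: achromatic
        if wb < -0.5:
            return "black"
        if wb > 0.5:
            return "white"
        return "gray"
    hue = []
    if yb > 0.1:
        hue.append("yellow")
    elif yb < -0.1:
        hue.append("blue")
    if rg > 0.1:
        hue.append("red")
    elif rg < -0.1:
        hue.append("green")
    return "-".join(hue)

print(rough_color(0.0, 0.0, 0.9))    # white
print(rough_color(0.6, 0.4, 0.4))    # yellow-red (orange-like)
print(rough_color(-0.5, -0.3, 0.2))  # blue-green (cyan-like)
```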

5.2. Red-green vs. yellow-blue

Red-green opponency is more important at low intensity, and yellow-blue opponency is more important at high intensity. At higher intensity, vision boosts blue less and yellow more, and green less and red more.

On the steep slopes of the opponency curves, small wavelength changes produce large changes in difference. At curve peaks and tails, wavelength changes produce only small changes in difference. Balance points (zero crossings) lie on steep slopes, so wavelength discrimination is best near opponent-process balance points, because differentials are larger there.

5.3. Color mixtures

Mixing hues typically moves both yellow-blue and red-green opponent-process inputs toward zero and increases white-black opponent-process input, so the mixture adds white and decreases saturation.

Adding gray pigment to spot hue pigment may make white-black opponent-process input decrease, stay the same, or increase. Also, both red-green and yellow-blue opponent-process inputs are closer to zero, so hue has lower saturation.

Yellow wavelengths, and equal-intensity red+green wavelengths, make red-green opponent-process input zero and make yellow-blue opponent-process input yellow. The two opponencies together have no region where green overlaps red, so red+green light does not make reddish-green or greenish-red.

Cyan wavelengths, and equal-intensity blue+green wavelengths, make red-green opponent-process input green and make yellow-blue opponent-process input blue. The two opponencies together have a region where green overlaps blue, so blue+green light makes bluish-green or greenish-blue.

Violet wavelengths, and equal-intensity blue+red wavelengths, make red-green opponent-process input red and make yellow-blue opponent-process input blue. The two opponencies together have a region where red overlaps blue, so blue+red light makes bluish-red or reddish-blue.

Blue wavelengths make red-green opponent-process input zero and make yellow-blue opponent-process input blue. Red wavelengths make red-green opponent-process input red and make yellow-blue opponent-process input zero. Equal-intensity, but separate, blue+red wavelengths make magentas. Note that, at the other end of the spectrum from violets, the two opponencies together have no region where blue overlaps red. For pigments, blue mixed with red makes purples.

Equal-intensity blue+yellow wavelengths make red-green opponent-process input zero and make yellow-blue opponent-process input zero, so color is gray or white. The two opponencies together have no region where yellow overlaps blue, so blue+yellow light does not make bluish-yellow or yellowish-blue. For pigments, blue mixed with yellow makes green, because blue has some green, and yellow has some green, so together green is more than blue or yellow. For emitted light, blue has little or no green, and yellow has little or no green.

6. Only three opponency pairs needed

The three opponency pairs provide three independent parameters/coordinates, enough for all colors.

6.1. Two opponency pairs are not enough to distinguish colors

If colors had just two coordinates, all colors would lie on a plane. Then one primary color would be on the line from pure hue to black, another primary color would be on the line from pure hue to white, and the two lines would have to intersect, so different intensities of blue and red, red and green, and/or blue and green could make the same two parameter/coordinate values. Therefore, two opponency pairs are not enough to distinguish colors, so colors cannot have just two parameters/coordinates.

When colors have three parameters/coordinates, they are all in a solid, and so those two lines do not have to intersect, and different intensities of blue and red, red and green, and/or blue and green make different colors.

6.2. Alternative opponencies for hue

If hue opponencies were yellow-red and green-blue: For orange, yellow-red would be zero, and green-blue would be zero. For cyan, green-blue would be zero, and yellow-red would be zero. Orange and cyan would have the same (zero, zero) values, so these opponencies cannot represent the spectrum.

If hue opponencies were yellow-green and red-blue: For chartreuse, yellow-green would be zero, and red-blue would be zero. For magenta, red-blue would be zero, and yellow-green would be zero. Chartreuse and magenta would have the same (zero, zero) values, so these opponencies cannot represent the spectrum.

Only the yellow-blue and red-green opponencies can represent the spectrum of hues and the purples.

6.3. Alternative functions for white-black opponency

Short-wavelength-receptor output is small for sunlight, so white-black opponency cannot be a function of middle and short wavelengths, or a function of short and long wavelengths. Only a function of middle and long wavelengths represents brightness efficiently and accurately.

7. Brain opponency variations

Brain may modify the red-green opponency to be red-cyan opponency, such as L - (S+M)/2, and/or green-magenta opponency, such as M - (S+L)/2.

Brain distinguishes between blue and green and so may use an opponency such as S - M.

Brain distinguishes between blue and red and so may use an opponency such as S - L.

Brain also distinguishes between green and yellow and between yellow and red and so may use opponencies such as M - (L+M)/2 and L - (L+M)/2.

8. Double-opponent cells

Double-opponent cells measure both adjacent spatial regions and two different wavelength intensities. Positive input comes to dendritic-tree central region from one or two cone types receiving from surface center, and negative input comes to dendritic-tree central region from a different cone type receiving from surface center, while positive input comes to dendritic-tree surround region from the different cone type receiving from surface annulus, and negative input comes to dendritic-tree surround region from the one or two cone types from surface annulus.

Double-opponent cells differentiate two regions' wavelength-intensity differences, to detect boundaries between regions.

Double-opponent cells start the process for cortical processes to maintain color constancy under low, medium, or high illumination and/or changes in illumination frequency-intensity spectrum.

9. Opponent processes start vision pathways for vision properties

Vision opponent processes start vision pathways that calculate brightness intensities, hue (yellow, blue, red, green) intensities, and no-hue (black, white) intensities:

Spot opponent-process output for intensity relatively lower than surround, and spot opponent-process output for intensity relatively higher than surround, make a pair that goes to two pathways/channels that calculate relative dimness and brightness, respectively.

Spot opponent processes for what will be blue, yellow, red, and green go to pathways/channels that calculate what will be the four hue intensities.

Spot opponent processes for what will be relative brightness, relative dimness, blue, yellow, red, and green go to a pathway/channel that calculates what will be color saturation.

All spots together make a topological array that defines all directions, so each spot has a calculated direction. Spot-opponent-process output for intensity relatively higher than surround goes to a pathway/channel that calculates distance for that direction.

10. Evolution/development

White-black opponency opposes the lightest and darkest colors. Yellow-blue opponency opposes the second-lightest and second-darkest main colors. Red-green opponency opposes the third-lightest and third-darkest main colors. Perhaps colors split into black and white, then blue and yellow, and then red and green, to make lightness categories.

Vision processing evolves/develops to combine and process inputs to make color categories: black, white, yellow, blue, red, green, chartreuse, orange, magenta, and cyan.

Opponent-Process Input and Output Properties

Opponent processes measure a difference between two quantities, which are like opposite states.

Opponencies can vary in input values, input ranges, input weights, difference range, difference change rate, baseline output, and output range. Opponencies can vary in spatial distribution.

1. Input is negative, zero, or positive

Opponent-process input is a function (e) of the difference (an extensive quantity) between two functions (f and g), each of which has one variable (x or y): e(f(x) - g(y)). For vision, receptor outputs are positive, so f(x) and g(y) are always positive. The difference can be negative, zero, or positive. (The smaller function may negate/cancel some of the larger function.)

Note: f(x) and g(y) can change with inhibition or excitation. Vision adjusts weights based on experience of different visual contexts during development.

2. Possible values of input components

A difference is minuend minus subtrahend. A difference, by itself, says nothing about the minuend or subtrahend. For example, for the difference x - y, if both x and y range from 0 to 1, the same difference has different possible x and y pairs:

Difference of -0.5 could be x = 0 and y = 0.5, ..., x = 0.25 and y = 0.75, ..., or x = 0.5 and y = 1.0.

Difference of 0.0 could be x = 0 and y = 0, ..., x = 0.5 and y = 0.5, ..., or x = 1 and y = 1.

Difference of +0.5 could be x = 0.5 and y = 0, ..., x = 0.75 and y = 0.25, ..., or x = 1.0 and y = 0.5.
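
The ambiguity above can be checked directly by enumerating pairs on a coarse grid (the 0.25 step is an arbitrary illustrative choice):

```python
def pairs_with_difference(diff, step=0.25):
    """Enumerate (x, y) pairs on a coarse grid in [0, 1] whose difference
    x - y equals diff -- showing that a difference alone does not fix
    the minuend x and subtrahend y."""
    grid = [i * step for i in range(int(1 / step) + 1)]
    return [(x, y) for x in grid for y in grid if abs((x - y) - diff) < 1e-9]

print(pairs_with_difference(-0.5))  # (0.0, 0.5), (0.25, 0.75), (0.5, 1.0)
print(pairs_with_difference(0.0))   # every x = y pair on the grid
print(pairs_with_difference(1.0))   # only (1.0, 0.0)
```

Only the extreme differences (+1 and -1) determine both components uniquely, matching the point made in the later subsection on maximum and minimum difference.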

For the white-black opponent process:

If difference is half-maximum negative, center absolute brightness could be medium and surround high, or low and surround medium.

If difference is zero, center is same as surround. Center absolute brightness could be low and surround low, medium and surround medium, or high and surround high.

If difference is half-maximum positive, center absolute brightness could be medium and surround low, or high and surround medium.

For the yellow-blue opponent process:

If difference is half-maximum negative, receptors could have high blue and medium yellow, or half blue and no yellow.

If difference is zero, receptors could have low blue and low yellow, medium blue and medium yellow, or high blue and high yellow.

If difference is half-maximum positive, receptors could have high yellow and medium blue, or medium yellow and low blue.

For the red-green opponent process:

If difference is half-maximum negative, receptors could have high green and medium red, or medium green and low red.

If difference is zero, receptors could have low green and low red, medium green and medium red, or high green and high red.

If difference is half-maximum positive, receptors could have high red and medium green, or medium red and low green.

Note: Perhaps cortical-opponency ganglion-cell outputs can also measure input differential (intensive quantity). Input differentials differ for the three cases, and so can distinguish between cases with equal input differences. Knowing both difference and differential allows calculating opponency minuend and subtrahend.

3. Values of input components at maximum and minimum difference

All differences have two extremes, the opposite system states, when one quantity is maximum and one minimum. When the opponent process has f(x) at maximum value and g(y) at minimum value, the difference is maximum positive. When the opponent process has g(y) at maximum value and f(x) at minimum value, the difference is maximum negative, the opposite state.

Only when the difference is maximum positive or negative is there a definite minuend and a definite subtrahend. For example:

Difference of -1.0 requires x = 0 and y = 1.

Difference of +1.0 requires x = 1 and y = 0.

For the white-black opponent process:

If difference is maximum negative, center absolute brightness must be lowest (appearing black) and surround must be highest (appearing white).

If difference is maximum positive, center absolute brightness must be highest (appearing white) and surround must be lowest (appearing black).

For the yellow-blue opponent process:

If difference is maximum negative, center has highest blue, with no no-hue and no yellow.

If difference is maximum positive, center has highest yellow, with no no-hue and no blue.

For the red-green opponent process:

If difference is maximum negative, center has highest green, with no no-hue and no red.

If difference is maximum positive, center has highest red, with no no-hue and no green.

4. Difference range

Depending on f(x) and g(y), the difference range can be small, medium, or large. If both f(x) and g(y) range from 0 to 1, the difference ranges from -1 to +1. If f(x) and g(y) double, the difference doubles. If f(x) and g(y) halve, the difference halves.

Opponencies can have small, medium, or wide range.
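The range arithmetic above can be sketched minimally; the function name is illustrative.

```python
def difference_range(f_min, f_max, g_min, g_max):
    """Range of f(x) - g(y): most negative when f is lowest and g is
    highest, most positive when f is highest and g is lowest."""
    return (f_min - g_max, f_max - g_min)

# Both inputs on [0, 1]: the difference spans [-1, +1].
print(difference_range(0, 1, 0, 1))   # -> (-1, 1)
# Doubling both input ranges doubles the difference range.
print(difference_range(0, 2, 0, 2))   # -> (-2, 2)
```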

5. Zero difference point can be lower or higher than range midpoint

Depending on f(x) and g(y), the state where difference equals zero can be in the middle of the difference range, or lower or higher.

Therefore, the range from most negative to zero can be smaller than, the same as, or larger than the range from zero to most positive.

Opponencies can have a low zero point (smaller range down, larger range up), a middle zero point (equal ranges down and up), or a high zero point (larger range down, smaller range up).

6. Difference change rate

A linear function has constant change rate. For example, the difference x - y has constant slope.

A non-linear function has decreasing, increasing, or increasing-and-decreasing change rate. For example, (x - y)^2 has increasing slope, and a sigmoid curve has increasing and then decreasing slope.

Vision opponent processes can have linear or non-linear difference-change rates.
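The linear and non-linear cases can be checked with finite differences. The logistic curve stands in for "a sigmoid curve"; the source names no specific function, so it is an assumption here.

```python
import math

def slope(f, x, h=1e-6):
    """Numerical derivative of f at x (central difference)."""
    return (f(x + h) - f(x - h)) / (2 * h)

linear   = lambda d: d                      # difference itself: constant slope
square   = lambda d: d ** 2                 # increasing slope for d > 0
logistic = lambda d: 1 / (1 + math.exp(-d)) # slope rises, then falls

print(slope(linear, 0.2), slope(linear, 0.8))   # equal: constant change rate
print(slope(square, 0.2), slope(square, 0.8))   # increasing change rate
# Logistic slope is largest near the midpoint and smaller far from it.
print(slope(logistic, 0.0), slope(logistic, 2.0))
```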

Vision Physiology after Opponent Processes: Analysis and Synthesis

Successive cortical processes compare and combine ganglion-cell outputs to make new color, brightness, and light variables and parameters. Vision physiology acquires information about brightness, color lightness, and other color properties, and color categories, relations, and associations.

From neuron-input quantities, nervous-system anatomy and physiology can calculate neuron and neuron-assembly output quantities, using arithmetic, algebra, and statistical processes.

Vision physiology calculates spatial direction, distance, location, spatial extension, temporal extension, and motion.

Vision physiology integrates "what" and "where", for cognition of features, objects, scenes, space, and concepts.

Vision physiology learns from experience and development.

Nervous-system anatomy and physiology keep vision, hearing, touch, smell, taste, and pain separate.

1. Vision higher systems

Visual processing proceeds through lateral geniculate nucleus, visual cortices, and association cortices.

1.1. Lateral geniculate nucleus

Correlating with retina, lateral-geniculate-nucleus ganglion cells are for white-black, yellow-blue, and red-green opponencies [Lennie, 1984]. The lateral geniculate nucleus has alternating layers for each eye.

1.2. Visual cortex

Visual-cortex topographic maps have hypercolumns in a spatial array [LeVay and Nelson, 1991] [Wandell, 1995]. Hypercolumns have minicolumns that calculate color, brightness, location (direction and distance), and other information for one spatial direction [Dow, 2002].

Visual cortex has simple cells (for line orientation and width) and complex cells (for line motion). Cell assemblies detect features [Teller, 1984].

Some visual-cortex cells respond to spatial frequency.

Some cells respond to a narrow frequency band, shape, and/or orientation [Zeki, 1973] [Zeki, 1980] [Zeki, 1985]. Some cells respond to figure-ground hue differences and shapes [DeValois and DeValois, 1975].

Some chromatically opponent visual-cortex cells are also spatially opponent ON-center/OFF-surround cells.

Some chromatically opponent visual-cortex cells are double-opponent cells, involving lateral inhibition.

Some visual-cortex cells combine right-eye and left-eye information.

1.3. After visual cortex

Using occipital and posterior-parietal cortices, the "where" system directs attention to locations in space, at directions and distances, to gather more information about shape and size ("what"). Location and motion use only brightness.

Using occipital and ventral/inferior temporal cortices, the "what" system processes shape and size and directs memory to associated locations, to gather more information about relative location ("where") [Ungerleider and Mishkin, 1982].

Inferior parietal lobe, prefrontal-cortex working-memory neurons, and whole-visual-field topographic neurons integrate the where and what systems.

1.4. Neural networks

Neural networks find pattern categories. Vision cortex uses the three opponencies as input vectors to neural networks that use coordinate transformations to find output vectors as color and brightness categories (such as pale yellow-green).
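A toy sketch of this idea, with a nearest-centroid classifier standing in for the neural network. The category centroids in (white-black, yellow-blue, red-green) opponency coordinates are illustrative assumptions, not measured values.

```python
# Illustrative centroids in (white-black, yellow-blue, red-green) space.
CATEGORIES = {
    "white":  ( 1.0,  0.0,  0.0),
    "black":  (-1.0,  0.0,  0.0),
    "yellow": ( 0.5,  1.0,  0.0),
    "blue":   (-0.2, -1.0,  0.0),
    "red":    ( 0.0,  0.0,  1.0),
    "green":  ( 0.3,  0.0, -1.0),
}

def classify(opponency):
    """Map a 3-component opponency vector to the nearest color category."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(opponency, centroid))
    return min(CATEGORIES, key=lambda name: dist2(CATEGORIES[name]))

print(classify((0.9, 0.1, 0.0)))   # -> white
print(classify((0.4, 0.9, 0.1)))   # -> yellow
```

A trained network would learn such category regions from data; the fixed centroids here only illustrate the input-vector-to-category mapping.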

2. Signal flows

Cortex has electrochemical flows through neuron assemblies and nerve bundles. Local flows directly relate to stimulus intensity. Coordinated brain pathways/circuits, in neuron bundles, have coordinated signal flows. Visual-system massively parallel signal flow goes from retina to visual cortex to perceptual cortex and location cortex to frontal lobes and spatial brain regions to motor and gland systems.

Neuron-assembly output signals flow longitudinally through retina, thalamus, occipital lobe, and association cortex, making linear (terminal or pass-through) processing. Processing excites or inhibits neurons.

Neuron-assembly output signals have lateral flows and circuits, for excitation or inhibition, making transverse (cross-sectional) processing.

Electrochemical flows have excitations and inhibitions that are like fluid pressure; longitudinal speeds that are like fluid velocity; transverse motions that are like fluid viscosity; and intensities that are like fluid density. Tensors model longitudinal and transverse fluid motions and pressure fields, so tensors can describe neuron-assembly-output-signal flows.

Locally and globally, different senses have different brain anatomy and physiology, so different senses have different flow patterns, and sense subcategories (such as red and blue) are variations on sense flow pattern.

Electrochemical flows can model information processing and have input, processing, and output. Such information processing is the neural correlate of perceptions/sensations/experiences.

Signal flows have nerve-signal spatial and temporal aggregations, configurations, and structures, with densities, gradients, velocities, and complex flow patterns. Signal flows can have smooth, abrupt, increasing, decreasing, and combined patterns of velocity and acceleration directions. Signal flows can be linear or have loops.

Flow cross-sections have up-down and right-left and maintain spatial relations among space directions. The signal stream codes for intensities, colors, features and objects, and locations for all space directions.

Signal flows can have analog and/or digital information.

Feedback, feedforward, and reverberations/resonances can maintain data structures and processes and continuously update them.

Signal flows have multiple threads.

Current computers do not have flows.

2.1. Discrete-to-continuous and local-to-global signal flow

Flow analysis and synthesis merge and transform discrete flow points into continuous surface areas with intensities, colors, features and objects, and locations, for all space directions. This higher processing makes digital analog, discrete continuous, local global, and information mental. Senses can go from digital/discrete to analog/continuous by adding filters, current or voltage sources, and/or resistances.

With no carrier signal, digital to analog can use a binary counter, a decoder or demultiplexer, a set of operational amplifiers, and a resistive network. Examples are an R-2R Ladder Network or an integrated circuit with operational amplifiers, resistors, and capacitors. Filters can remove high-frequency "steps" in the smoothed signals.

Using an analog carrier signal, digital to analog can use amplitude modulation, frequency modulation, or phase shift:

Amplitude Shift Keying has constant amplitude for bit 1 and zero amplitude for bit 0 while keeping phase and frequency constant. Amplitude modulation can have increasing amplitudes for bits 1, 2, 3, and so on.

Frequency Shift Keying uses high frequency for bit 1 and low frequency for bit 0 while keeping phase and amplitude constant. Frequency modulation can have increasing frequencies for bits 1, 2, 3, and so on.

Phase Shift Keying shifts phase by pi radians (180 degrees) for bit 1 and has no shift for bit 0 while keeping frequency and amplitude constant. Phase shift can have a series of increasing shifts for bits 1, 2, 3, and so on.

Quadrature amplitude modulation combines amplitude modulation and phase shift keying.

Perceiving is a statistical process, and the perceptions that form are those with the highest probability.
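The three keying schemes above can be sketched as sampled carrier waveforms; the carrier frequencies and sampling rate are illustrative parameters.

```python
import math

def modulate(bits, scheme, samples_per_bit=8, f0=1.0, f1=2.0):
    """Sample a sinusoidal carrier, one period per bit, keyed by ASK,
    FSK, or PSK as described in the text."""
    out = []
    for bit in bits:
        for n in range(samples_per_bit):
            t = n / samples_per_bit
            if scheme == "ASK":      # amplitude keyed; phase, frequency fixed
                amp = 1.0 if bit else 0.0
                out.append(amp * math.sin(2 * math.pi * f0 * t))
            elif scheme == "FSK":    # frequency keyed; phase, amplitude fixed
                f = f1 if bit else f0
                out.append(math.sin(2 * math.pi * f * t))
            elif scheme == "PSK":    # phase keyed by pi; frequency, amplitude fixed
                phase = math.pi if bit else 0.0
                out.append(math.sin(2 * math.pi * f0 * t + phase))
    return out

wave = modulate([1, 0, 1], "PSK")
print(len(wave))   # -> 24: eight samples per bit
```

Under ASK a 0 bit is silence; under PSK a 1 bit is the sign-inverted carrier.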

2.2. Coarse and fine tuning

Coarse tuning has only intensity and type. For example, it can only distinguish loudness and frequency, plus computed space location.

Fine tuning is about harmonics, relations, and detail. For example, people can distinguish whose voice they hear, and the voice has a location in three-dimensional space, which emerges through the detailed space-relation network.

3. Neurons, neuron circuits, and quantities

Physical/chemical intensities (energy per unit time per unit area) impact sense receptors. Sense receptors change electric potential when impacted by force that causes energy transfer (energy results from force/interactions, and energy causes force). Photons impact vision sensors. Eardrum vibrations impact hearing sensors. Pressure impacts touch sensors. Temperature change impacts hot and cold sensors. Chemical reactions impact smell and taste sensors. Chemical reactions impact pain sensors. Sensors measure quantity of change from baseline potential.

Sense sensor electric potentials excite or inhibit neurons. Input carries no information about the sense, only about quantity of change.

Neurons have energy flows that represent power from unit-surfaces (and intensity from larger surfaces, since power is intensity times surface area).

Neuron signals represent quantities, with no information about the sense.

Neuron signals excite or inhibit the next neurons, adding or subtracting quantities, respectively.

Neural circuits can model all functions of electrical/electronic circuits.

3.1. Measuring differences and contrasts

Some neurons measure quantity of difference between two input signals from different neurons. They are in pairs, to measure the difference in both directions. For example, one could be for center vs. surround, and the other for surround vs. center. Alternatively, one could be for a category vs. the opposite category, and the other for opposite category vs. category. Each neuron of the pair sends to a separate neuron circuit. Because the pair measures opposites, only one output can be above resting-signal quantity, so only one of the two neuron circuits has signals above resting. If both inputs are equal, both outputs are at resting level, and both neuron circuits are at resting level.
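The paired arrangement can be sketched as two rectified subtractions; `resting` is an assumed baseline level, and the names are illustrative.

```python
def opponent_pair(a, b, resting=0.0):
    """Two neurons measuring the same difference in opposite directions.
    Each output is the resting level plus the rectified difference, so at
    most one member of the pair can rise above resting."""
    forward  = resting + max(0.0, a - b)   # e.g. center minus surround
    backward = resting + max(0.0, b - a)   # e.g. surround minus center
    return forward, backward

print(opponent_pair(0.75, 0.25))   # -> (0.5, 0.0): only one above resting
print(opponent_pair(0.5, 0.5))     # -> (0.0, 0.0): both at resting
```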

Brain anatomy and physiology emphasize contrasts. The largest contrast gets attention first. When that contrast decreases by attenuation, the next largest contrast gets attention, and so on.

3.2. Measuring above thresholds and filtering

Some neurons require that input be above a threshold quantity to make output above resting quantity. A neuron series can have a series of thresholds, and each output goes to a separate circuit, which detects above lowest, low, medium, high, or highest level.

Pairs of such neurons are for filtering, to detect low, medium, or high range.
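A sketch of the threshold series and of the band filter built from a pair of thresholds; the threshold values are illustrative.

```python
THRESHOLDS = [0.1, 0.3, 0.5, 0.7, 0.9]   # lowest .. highest

def threshold_bank(x):
    """Each neuron fires (1) only if input exceeds its threshold;
    each output feeds a separate circuit."""
    return [1 if x > t else 0 for t in THRESHOLDS]

def band_detector(x, low, high):
    """A pair of threshold neurons detects a range: above the lower
    threshold but not above the upper one."""
    return 1 if (x > low and not x > high) else 0

print(threshold_bank(0.6))            # -> [1, 1, 1, 0, 0]
print(band_detector(0.6, 0.5, 0.7))   # -> 1: in the medium band
```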

3.3. Feedback and feedforward

Feedback measures output and uses it to increase or decrease input, or measures output, compares it to a standard output or a previous output, and uses the difference to increase or decrease input. Feedback control mechanisms use a model of behavior, such as an optimum temperature, position, speed, time, concentration, and so on.

Feedforward measures input and uses it, and a model of behavior, to increase or decrease that input or other inputs, or measures input, compares it to a standard input or a previous input, and uses the difference to increase or decrease that input or other inputs. Feedforward control mechanisms use a model of behavior, such as an additive, subtractive, multiplier, or divider factor to model how input affects output.
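The difference-driven feedback scheme described above can be sketched as a proportional controller holding an output near a standard value; the gain, the setpoint, and the toy system (output equals twice the input) are illustrative assumptions.

```python
def feedback_step(inp, output, standard, gain=0.5):
    """Compare the measured output to a standard output and use the
    difference to increase or decrease the input."""
    return inp + gain * (standard - output)

# Drive a simple system (output = 2 * input) toward a standard output of 1.0.
inp = 0.0
for _ in range(20):
    inp = feedback_step(inp, 2 * inp, standard=1.0)
print(round(2 * inp, 3))   # -> 1.0: output settles at the standard
```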

Neuron-assembly physiology may use deliberate perturbations, to vary things and to reach true equilibrium without sticking in the wrong state.

4. Sensory-organ brain-pathway variables and quantities

Brain pathways from sensory organs are for physiological variables. Brain pathways have coded or noncoded analog and/or digital information about variable quantities. (Pathways/variables are not knowable because they are physiological.)

From retina, each space direction has three pathways/variables: electromagnetic-wave relative intensity (white-black), low to high wavelength (yellow-blue), and middle to high-or-low wavelength (red-green).

At cortex, each space direction has three pathways/variables about color brightness, hue, and saturation. Each space direction also has three pathways/variables for space location: distance, horizontal angle, and vertical angle.

4.1. Brain-pathway physiology

Along pathways, neurons have spikes/second for impulses and mass/second for chemicals:

Impulses have rates, rate changes, spike clusters, intervals between spikes and clusters, and impulse envelopes. Rates range from baseline to 800/second. Time intervals are typically 2 milliseconds to 20 milliseconds.

Brain chemicals control individual and group neuron behavior. Chemicals can be for 2-millisecond to 20-millisecond time intervals, longer intervals, and very long intervals. Chemicals can change digital signals into analog signals.

4.2. Brain-pathway spatial and temporal characteristics

Neuron pathways have spatial and temporal characteristics:

Neurons can send to one or many neurons, and many neurons can converge on one or many neurons.

Neuron pathways can split into different pathways that have different lengths and timings.

Neuron pathways can have loops/recurrences and make circuits.

Neuron pathways have spatial organization, so each is for a space direction and/or distance.

Neuron pathways have temporal organization, so each is for a time position and/or interval.

4.3. Multiple paths, divergence, convergence, amplification, and signal patterns

Senses use multiple pathways. Some paths can carry the original signals and perform original processing, and other paths can evolve/develop to perform new processing and modify/embellish the signal.

Multiple paths can converge and/or diverge.

Multiple paths can cause amplification, using a multiplier circuit.

Some neurons have multiple outputs and can trigger signal patterns in complex or multiple circuits.

5. Clustering/principal component analyses

Principal component analysis and clustering find space axes. Vision cortex uses the three opponencies as input vectors to principal component analyses that use clustering and coordinate transformations to find color and brightness categories (such as pale yellow-green).
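A minimal sketch of principal component analysis over the two hue opponencies, using the closed form for a 2x2 covariance matrix; the sample opponency data are illustrative.

```python
import math

def principal_axis(points):
    """First principal component of 2-D points: the direction of
    greatest variance, from the 2x2 covariance closed form."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return math.cos(theta), math.sin(theta)

# Yellow-blue and red-green outputs that vary together along a diagonal:
pts = [(-1.0, -1.0), (-0.5, -0.5), (0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
ax = principal_axis(pts)
print(ax)   # roughly (0.707, 0.707): the diagonal axis
```

Clustering along such axes would then yield category centers like "pale yellow-green"; that step is omitted here.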

6. Labels

Cortical processing assigns labels/patterns to color categories, so they become variables for further processing.

7. Functions

Brain signal flows can be adders/integrators, subtractors, multipliers/amplifiers, dividers/differentiators, filters, sequencers, and timers. Nerve-signal amplification and modulation depend on the number of neurons and on feedback and feedforward (reverberations).

Signal flows can have feedback and feedforward, and lateral inhibition and spreading excitation.

A third neuron can lower threshold in a pathway.

Input to parallel neurons can multiply net output.

Neurons do not use flow resistance, capacitance, inductance, or impedance (resistances and reactances), and so are not like electrical circuits.

8. Circuits

Brain circuits can have spatial and temporal patterns, including series of levels (such as those for >800 cycles/second hearing).

Brains have many interacting circuits, perhaps with resonating waves.

8.1. Circuit types

Neural circuits can be converging (many to one, for addition), diverging (one to many, for spatial and temporal patterns), reverberating (positive feedback, for rhythms), or parallel after-discharge (one to many parallel pathways with different numbers of synapses, for temporal/spatial extensions) circuits.

Filter circuits can keep low, high, low and high, or middle frequencies.

Switching circuits (relays are electromechanical switches) can do computations.

Amplifying circuits can use recurrent excitation, which inhibitory input can modulate.

8.2. Learning

Using input-stimuli features and predictive coding, vision circuits can self-organize and self-train.

9. Integrating visual information

From input-stimuli features (information context) and predictive coding, for each space direction, vision builds colors/brightnesses, regions and boundaries, features, objects, illumination sources, and scenes at distances in three-dimensional-space directions. Vision uses top-down and bottom-up processes.

Vision uses differentiation and lateral inhibition to increase contrast, suppress noise, sharpen boundaries, and distinguish.

Vision uses integration and spreading activation to reduce contrast, fill in, blur boundaries, and expand and unify.

Vision uses unconscious inductive inference, associations, relaxation and optimization, clustering, constraint satisfaction, signal detection, statistics, and principal component analysis.

Vision physiology becomes vision information processing and computation to analyze and synthesize light, brightness, color, distance, direction, feature, object, scene, and space information.

Vision uses spatial Fourier analysis and distributed information [Harris, 1980] [Levine and Sheffner, 1981] [Robson, 1980] [Weisstein, 1980].

Vision analysis and synthesis includes attention, selection, and association. Vision also depends on memory and recall.

Vision Calculates Color Properties

After opponent processing, vision calculates color properties. Vision knows the concepts of continuous and discrete quantities, low-to-high and high-to-low relative quantities, and quantity categories.

Color has six inputs from three opponencies.

1. Hue and color categories

Vision calculates hue and color categories.

Vision calculates light luminance: short wavelengths have low luminance, middle wavelengths have high luminance, and long wavelengths have middling luminance. Vision finds four hue categories based on luminance: short and middle wavelengths are a hue, and long wavelengths are its opposite hue, and short and long wavelengths are a hue, and middle wavelengths are its opposite hue. The categories and their properties are:

Blue: low luminance, low first hue parameter, low second hue parameter

Red: middle luminance, low first hue parameter, high second hue parameter

Green: medium-high luminance, high first hue parameter, low second hue parameter

Yellow: high luminance, high first hue parameter, high second hue parameter

The yellow-blue opponent-process has yellow-wavelength and blue-wavelength ganglion-cell outputs, of which only one can have output above baseline. The red-green opponent-process has red-wavelength and green-wavelength ganglion-cell outputs, of which only one can have output above baseline. Therefore, hue mixes red and blue, blue and green, green and yellow, or red and yellow.

Vision calculates luminance extremes, which have no net hue and so no first hue parameter and no second hue parameter:

Black: lowest overall luminance

White: highest overall luminance

Colors mix black, white, and/or red and blue, blue and green, green and yellow, or red and yellow.

Vision finds that no other number of hue and color categories can have complete and consistent luminances.

Color categories and properties have associations and relations.

2. Luminance and white intensity

Vision calculates white intensity. (White has no net hue.)

The white-black opponent process has an ON-center ganglion-cell output. ON-center-output quantity is total luminance (from which later comes brightness). White intensity is a function of total luminance minus the two hue luminances.

3. Missing hue and black quantity

Vision calculates black quantity, which equals missing hue intensity. (Black has no hue.)

The white-black opponent process has an OFF-center ganglion-cell output. OFF-center-output quantity is inverse of total luminance. Black quantity is a function of OFF-center-output quantity.

4. Hue quantities and hue saturation

Vision calculates hue quantities.

The blue-yellow and red-green opponent processes have outputs for blue-, yellow-, red-, and green-wavelength intensities. Hue quantity is a function of hue-output quantity.

The highest of the four hue-opponent-process-output quantities determines hue saturation. For example, if output is maximum, hue saturation is 100%. If output is half maximum, hue saturation is 50%. If output is zero, hue saturation is 0%. (Unsaturation equals 100% minus hue saturation. Black quantity plus white quantity equals unsaturation.)
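These relations can be sketched directly, with the four opponent-process outputs normalized to [0, 1]; the function names are illustrative.

```python
def hue_saturation(blue, yellow, red, green):
    """Saturation follows the highest of the four hue-opponent outputs,
    each normalized so that 1.0 is the maximum possible output."""
    return max(blue, yellow, red, green)

def unsaturation(blue, yellow, red, green):
    """Unsaturation = 100% minus saturation; per the text, black
    quantity plus white quantity equals unsaturation."""
    return 1.0 - hue_saturation(blue, yellow, red, green)

print(hue_saturation(0.0, 0.5, 0.25, 0.0))   # -> 0.5: 50% saturation
print(unsaturation(0.0, 0.5, 0.25, 0.0))     # -> 0.5
```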

Vision calculates the extent of hue-saturation ranges. What will be yellow and green have a narrow range when adding what will be white, and a wide range when adding what will be black. What will be red and blue have a wide range when adding what will be white, and a narrow range when adding what will be black.

5. Transforming hue-property coordinates

Vision can transform color-property coordinates.

The two hue opponent processes naturally have two principal components that we label blue-yellow and red-green. The two coordinates can transform to two other principal components that correspond to two hue properties, color temperature and color lightness:

One component runs from cyan, blue, and green through no hue to orange, yellow, and red. Red/yellow and blue/green have opposite values. Later vision processing uses this component for color temperature (cool/warm), vividness (vivid/dull), color vibrancy (low/high activity), attention (attracting/repulsing), and salience (foreground/background). Note: Blue and green have high light frequency, and yellow and red have low light frequency, so hue relative light frequency varies inversely with hue temperature/vividness, which may account for the yellow-blue and red-green opponencies.

The other component runs from chartreuse, yellow, and green through no hue to magenta, blue, and red. Red/blue and yellow/green have opposite values. Later vision processing uses this component for color lightness (dark/light), color strength (weak/strong), chroma range (narrow/wide), and color depth (shallow/deep). Note: Yellow and green have middle light frequencies, and blue and red have low and high light frequencies, so hue relative light frequency varies with chroma range and color strength, which may account for the yellow-blue and red-green opponencies.

Then the three vision opponent processes calculate relative brightness, relative dimness, hue lightness, hue darkness, hue high activity, and hue low activity, which together define a color uniquely.
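A minimal sketch of such a coordinate transformation, assuming a 45-degree rotation of the (yellow-blue, red-green) plane onto the two new components; the angle and axis labels are illustrative assumptions, not values from the source.

```python
import math

def rotate_hue_axes(yb, rg, angle=math.pi / 4):
    """Transform yellow-blue / red-green coordinates into two new
    principal components, here labeled temperature and lightness."""
    temperature = yb * math.cos(angle) + rg * math.sin(angle)
    lightness = -yb * math.sin(angle) + rg * math.cos(angle)
    return temperature, lightness

# Equal yellow and red (a warm mixture) load fully on the first component:
t, l = rotate_hue_axes(1.0, 1.0)
print(round(t, 3), abs(round(l, 3)))   # -> 1.414 0.0
```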

Color and Brightness Experiences

Color and Brightness Properties

Color has brightness, hue, saturation, and other properties.

1. Colors have brightness, hue, and saturation

Every color has a unique set of values of the three opponencies, which are independent properties, and people can distinguish 256^3 colors. From the three opponencies, every color has a unique set of values of brightness, hue, and saturation. Light-stimuli total power relates to brightness. Light-stimuli energy distribution and relative uniformity relate to hue and saturation.

1.1. Brightness

Color brightness (value) is relative light intensity of center compared to surround, so it depends on a contrast. Brightness relates to light-stimuli total power. Perceived intensity {subjective magnitude} is a power function of stimulus intensity.

For the same surround: White has highest brightness. Yellow has very high brightness. Green has high brightness. Red has medium brightness. Blue has low brightness. Black has lowest brightness.
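The power-function relation between stimulus and perceived intensity (Stevens' power law) can be sketched as follows; the exponent of roughly 0.33 for brightness is a commonly cited value, used here as an assumption.

```python
def perceived_intensity(stimulus, exponent=0.33, k=1.0):
    """Perceived magnitude as a power function of stimulus intensity."""
    return k * stimulus ** exponent

# An 8-fold increase in stimulus yields roughly a 2-fold increase in
# perceived brightness, since 8 ** 0.33 is about 2.
print(round(perceived_intensity(8.0) / perceived_intensity(1.0), 2))
```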

1.2. Hue

Hue has colorfulness. Hue is blue, yellow, red, or green, or a mixture of red and yellow, red and blue, blue and green, or green and yellow.

Hue relates to light-stimuli energy distribution and relative uniformity. Hue causes brightness.

1.3. Saturation

Saturation is hue percent. More black, gray, or white reduces saturation. Less black, gray, or white increases saturation.

Saturation relates to light-stimuli energy distribution and relative uniformity. White causes highest brightness, and black contributes very little to brightness.

Color mixtures have higher unsaturation, because more wavelengths make both yellow-blue and red-green opponency inputs move toward zero.

1.4. Evolution

Mammals can perceive brightness and some colors. Old World monkeys have trichromatic vision. Human newborns can perceive brightness levels, and, by four months old, human infants can perceive color (and size and shape).

2. Hue

Hue is surface appearance compared to red, orange, yellow, green, blue, or purple. Hues form a closed system (color space), in which they mix to make only other hues. Colors differ in hue along a circular scale (color circle), and adjacent hues are similar [Armstrong, 1978]. People can distinguish 100 hues.

Hues combine blue and red, blue and green, green and yellow, or red and yellow. Those pairs have one hue each from the yellow-blue and red-green opponencies and so are adjacent hues that mix to make intermediate color. Note: There are two other hue combinations, but they have one hue each from the yellow-blue or red-green opponency and do not make intermediate color: Equal yellow and blue make no-hue white, and equal red and green make yellow.

Just noticeable differences in hue (and saturation) vary with wavelength.

Hue relates to highest peak (average wavelength) height and narrowness (wavelength standard deviation) in the luminous frequency-intensity spectrum. Higher and narrower peaks have more hue. Lower and wider peaks have more unsaturation/no net hue. (However, color is not frequency or wavelength.)

Color colorfulness is perceived intensity of hue. Colorfulness depends on hue. Colorfulness increases with saturation. Low illumination makes less color and lower colorfulness, and high illumination makes more color and higher colorfulness. However, very high illumination makes white and so lowers colorfulness (Hunt effect). Colorfulness may be the height of the highest-intensity frequency band in the luminous frequency-intensity spectrum compared to maximum possible height. Colorfulness may vary inversely with peak width, because wider peaks make lower heights. White, grays, and black have no colorfulness (and they have no highest-intensity frequency band).

For reflected light, color chroma is colorfulness relative to the brightness, at the same illumination, if the surface was white. For emitted light, color chroma is colorfulness relative to the brightness, at the same intensity, if the emission had no color. Chroma is independent of illumination or intensity. Chroma increases with saturation. Chroma depends on hue. Chroma range is color's number of Munsell equal steps from no chroma to maximum chroma. Blue and red have wide chroma range. Green and yellow have narrow chroma range. Chroma range correlates with hue lightness. Black and white have no chroma and no chroma range.

Different hue saturations have different brightness. Different illuminations have different hue saturation and brightness. A surface maintains the same perceived hue (color constancy) over varying illumination, saturation, and brightness, and over surface orientation, waviness, and texture.

Increasing illumination makes hue lighter, weaker, neutral temperature, and less colorful. Decreasing illumination makes hue darker, stronger, warmer or cooler, and more colorful. Increasing illumination increases brightness and bleaches hue, lightening and weakening, to give rise to white. Decreasing illumination increases dimness and unbleaches hue, darkening and strengthening, to give rise to black.

Compared to brightness, dimness reveals less surface texture and fewer boundaries, and makes distances and directions harder to discern.

2.1. Primary, secondary, and tertiary hues

Figure 1 shows primary, secondary, and tertiary hues of the RGB color system.

Table 2 shows percentages of RGB-color-system primary colors for 41 colors.

2.2. Hues, wavelengths, and yellow-blue and red-green opponencies

The combined yellow-blue and red-green opponencies separate the frequency-intensity spectrum into four wavelength regions/categories, from short wavelengths to long wavelengths:

Blue and red

Blue and green

Green and yellow

Yellow and red

All hues are one of those four mixtures.

Hues correlate with wavelength:

The shortest wavelengths appear violet, with small blue and smaller red.

The next wavelengths appear blue, with blue and small red.

The next wavelengths appear blue, with blue and no red or green.

The next wavelengths appear blue, with blue and small green.

The next wavelengths appear cyan, with equal blue and green.

The next wavelengths appear green, with green and small blue.

The next wavelengths appear green, with green and no blue or yellow.

The next wavelengths appear green, with green and small yellow.

The middle wavelengths appear chartreuse, with equal yellow and green.

The next wavelengths appear yellow, with yellow and small green.

The next wavelengths appear yellow, with yellow and no green or red.

The next wavelengths appear yellow, with yellow and small red.

The next wavelengths appear orange, with equal yellow and red.

The next wavelengths appear red, with red and small yellow.

The longest wavelengths appear maroon, with small red and no yellow.

In the sRGB system, hues and wavelengths correlate like this:

Violet: 437 nm (violet: 380 nm to 445 nm)

Blue: 472 nm

Azure: 481 nm

Cyan: 487 nm

Spring green: 500 nm

Green: 524 nm

Chartreuse: 554 nm

Yellow: 573 nm

Orange: 592 nm to 610 nm

Red: 645 nm to 700 nm (maroon: 700 nm to 740 nm)
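The correlations above can be sketched as a nearest-neighbor lookup over representative wavelengths taken from the list; where the list gives a range, a midpoint is assumed, and boundary behavior between neighbors is an assumption of this sketch.

```python
# Representative wavelengths (nm) from the sRGB correlations listed above.
HUE_WAVELENGTHS = [
    (437, "violet"), (472, "blue"), (481, "azure"), (487, "cyan"),
    (500, "spring green"), (524, "green"), (554, "chartreuse"),
    (573, "yellow"), (600, "orange"), (670, "red"), (720, "maroon"),
]

def nearest_hue(nm):
    """Name the hue whose representative wavelength is closest."""
    return min(HUE_WAVELENGTHS, key=lambda wl: abs(wl[0] - nm))[1]

print(nearest_hue(520))   # -> green
print(nearest_hue(590))   # -> orange
```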

Purples mix red and blue:

Red-magenta (rose) mixes highest-intensity red with some blue.

Magenta (fuchsia) equally mixes highest-intensity red and highest-intensity blue.

Blue-magenta (bright violet) mixes highest-intensity blue with some red.

2.3. Primary colors

Three hues (primary colors) mix to make any and all colors. Surfaces reflect subtractive primary colors: red, yellow, and blue. Light sources make additive primary colors: red, green, and blue.

The primary colors blue, red, and green have even spread over the spectrum. Blue anchors a spectrum end. Green is in spectrum middle. Red anchors a spectrum end.

Colors from light sources, and colors from pigment reflections, cannot add to make blue or red, but can add to make green.

Using blue, red, and green as the three additive primary colors best calculates surface depth and figure/ground. Blue, red, and green have the most differentiation of color lightness, temperature, colorfulness/vividness, and strength.

The three primary colors do not cover the whole color gamut.

2.4. Complementary colors

For additive colors, red-cyan, green-magenta, and blue-yellow (complementary colors) mix red, green, and blue equally and so make white.

For subtractive colors, red-green, yellow-violet, and blue-orange mix red, yellow, and blue equally and so make black.

All hues have complementary hues.
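The additive complement relation can be sketched in RGB, with channel values on [0, 1]: a hue plus its complement fills all three channels and so makes white. The helper names are illustrative.

```python
def complement(rgb):
    """Additive complement: the color that fills each channel to 1."""
    return tuple(1.0 - c for c in rgb)

def add(rgb1, rgb2):
    """Additive mixture, clipped to the displayable maximum of 1."""
    return tuple(min(1.0, a + b) for a, b in zip(rgb1, rgb2))

red = (1.0, 0.0, 0.0)
print(complement(red))           # -> (0.0, 1.0, 1.0): cyan
print(add(red, complement(red))) # -> (1.0, 1.0, 1.0): white
```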

2.5. Analogous colors

Three adjacent colors (analogous colors) are similar in wavelength and properties. Examples are blue, azure, and cyan, and red, red-orange, and orange.

Four or five adjacent colors are also analogous colors, such as blue through cyan through green, and red through orange through yellow.

2.6. Afterimages

Afterimages can make imaginary color or chimerical color.

Visual stimuli excite all three cone types. Stimulating just one cone type should make new colors (imaginary color) (unrealizable color).

Staring at bright pure color and then looking at black, white, or opposite-color background makes the afterimage be a new color (chimerical color). Staring at yellow and then looking at black makes very dark very saturated blue (stygian color). Staring at green and then looking at white makes glowing red (self-luminous color). Staring at cyan and then looking at orange makes very saturated orange, and staring at magenta and then looking at green makes very saturated green (hyperbolic color).

2.7. Ambient illumination and coloring

Colors look redder or greener at lower ambient illumination, and yellower or bluer at higher ambient illumination (Bezold-Brücke hue shift) [Bezold, 1873] [Brücke, 1878], perhaps because the red-green and yellow-blue processes differ in activity for different ambient illuminations.

2.8. Hue and saturation

Hue varies with saturation (purity-on-hue effect) (Abney effect) [Abney, 1910]. Adding white light shifts perceived hue.

2.9. Grassmann's laws of color mixing

Grassmann described color-mixing laws that are vector additions and multiplications in wavelength mixture space [Grassmann, 1853]. To illustrate, use 475-nm blue B, 487-nm cyan C, 500-nm green G, 577-nm yellow Y, 630-nm red R, and white W.

Association: Adding a color to a mixture of two colors makes the same color as adding the color to each mixture color. For example, mixing green and red makes yellow: G + R = Y. Adding blue to yellow makes white: Y + B = W. Adding blue to green, and adding blue to red, also makes white: (G + B) + (R + B) = W.

Distribution: Changing the intensity of a mixture of two colors makes the same color as equally changing the intensity of each mixture color. For example, mixing green and red makes yellow: G + R = Y. Scaling yellow intensity by a factor i (0 < i < 1) makes dark yellow: i*Y = dY. Scaling green intensity and red intensity by the same factor also makes dark yellow: i*G + i*R = dY.

Identity: Adding color pairs that both make the same color makes the same color. For example, mixing cyan and red makes white: C + R = W, and mixing blue and yellow makes white: B + Y = W. Adding the pairs makes white: (C + R) + (B + Y) = W.
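As a sketch, Grassmann's laws above can be checked with simple vector arithmetic on (R, G, B) triples. The unit intensities and function names are illustrative assumptions, and in this vector model blue is counted once in the white mixture:

```python
def mix(*colors):
    """Add colors component-wise (Grassmann additivity)."""
    return tuple(sum(c) for c in zip(*colors))

def scale(i, color):
    """Scale every component by intensity factor i."""
    return tuple(i * c for c in color)

R, G, B = (1, 0, 0), (0, 1, 0), (0, 0, 1)
Y = mix(G, R)    # green + red -> yellow
W = mix(Y, B)    # yellow + blue -> white

# Association: adding blue to the green-red mixture also makes white.
assert mix(G, R, B) == W
# Distribution: halving the mixture equals mixing the halved components.
assert scale(0.5, mix(G, R)) == mix(scale(0.5, G), scale(0.5, R))
```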

2.10. Color-mixture appearances

Red and blue mixtures make magentas, which appear to be reddish and/or bluish.

Red and green mixtures make chartreuses, yellows, and oranges. Red and green do not make reddish-green or greenish-red. Chartreuse appears to have green and yellow. Yellow appears to have no green and no red. Orange appears to have no green.

Blue and green mixtures make azures, cyans, and spring greens, which appear to be bluish and/or greenish. Azure appears to have no green. Cyan appears to have no green. Spring green appears to have no blue.

For two 100% RGB colors on a flat screen, compare the equal mixture (no edge contrast), dots and field with equal areas (small edge contrast), and very-small-square checkerboard (medium edge contrast):

Mixtures: Blue/green, blue/red, and red/green make 100% secondary color. Red/yellow, green/yellow, blue/cyan, green/cyan, red/magenta, and blue/magenta make 100% tertiary color. Blue/yellow, red/cyan, and green/magenta make 100% white.

Dots and field, from far away: With a straight-on viewing angle, blue/green, blue/red, and red/green average 67% secondary color (not 50%, because of overlap). Red/yellow, green/yellow, blue/cyan, green/cyan, red/magenta, and blue/magenta average 67% of one primary color and 100% of another primary color (not 50%/100%, because of overlap). Blue/yellow, red/cyan, and green/magenta average 67% white. (At higher viewing angles, colors are darker or lighter.)

Very-small-square checkerboard: Blue/green, blue/red, and red/green average 67% secondary color. Red/yellow, green/yellow, blue/cyan, green/cyan, red/magenta, and blue/magenta average 67% of one primary color and 100% of another primary color. Blue/yellow, red/cyan, and green/magenta average 67% white.

2.11. Light scattering

After white light passes through a length of dusty or moist air, you see yellow, because that air scatters blue, scatters green half, and scatters red little. "Yellow is a light which has been dampened by darkness" (paragraph 502) [Goethe, 1810].

After white light passes through a longer length of the same dusty or moist air, or through very dusty or moist air, you see dark red, because that air scatters blue, scatters green, and scatters red half.

If white light strikes dusty or moist air from the side, and you look through that air at a black surface, you see blue, because that air scatters blue, scatters green half, and scatters red little. "Blue is a darkness weakened by light" (paragraph 502) [Goethe, 1810].

If white light strikes very dusty or moist air from the side, and you look through that air at a black surface, you see lighter and paler blue, because that air scatters blue, scatters green three-quarters, and scatters red half, and blue, green, and red mix to make white.

If white light strikes slightly dusty or moist air from the side, and you look through that air at a black surface, you see darker and deeper blue, because that air scatters blue three-quarters, scatters green one-quarter, and scatters red little.

If white light strikes very slightly dusty or moist air from the side, and you look through that air at a black surface, you see violet, because that air scatters blue half, scatters green little, and scatters red very little.

Light scattering shows that white light is a mixture of different-wavelength light waves.
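The wavelength dependence above follows Rayleigh scattering, whose strength varies as 1/wavelength^4, so short (blue) wavelengths scatter most. A minimal sketch, with representative wavelengths in nanometers rather than the document's exact figures:

```python
def relative_scatter(wavelength_nm, reference_nm=450):
    """Rayleigh scattering strength relative to blue at reference_nm."""
    return (reference_nm / wavelength_nm) ** 4

blue, green, red = 450, 550, 650
# Blue scatters most; green roughly half as much; red least.
assert relative_scatter(blue) == 1.0
assert 0.4 < relative_scatter(green) < 0.5
assert relative_scatter(red) < relative_scatter(green)
```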

2.12. Prism

White light goes through a tiny square slit to make a light beam. A triangular prism rests on its base. The light beam comes from the left heading up and enters the left prism surface at a 45-degree angle. The light beam exits the right prism surface heading down and is wider than the entering beam.

Red bends least in the prism and is at the top of the exiting beam. (Above the red, there is no light, so black appears.) The other colors bend more and so are no longer where red is:

Below the red, there is still red, and green begins, so the overlap makes orange.

Below the orange, there is less red, but more green, so the overlap makes yellow.

Below the yellow, there is no more red, so only green appears. (The total width of the red, orange, yellow, and green is typically the width of the original light beam.)

Below the green, there is still green, and blue begins, so the overlap makes cyan.

Below the cyan, there is no more green, so only blue appears.

Below the blue, blue has low intensity, so violet appears.

Below the violet, there is no light, so black appears.

If the entering light beam first encounters a small square barrier, the barrier makes a shadow in the middle of the light beam. After the beam passes through the prism, the central shadow narrows. All the red and half the green do not bend enough to get out of the top of the light beam, so the top of the light beam still appears white:

Below the light beam, where the shadow was, there is half green, and blue begins, so the overlap makes cyan.

Below the cyan, there is no more green, so only blue appears.

Below the blue, there is still blue, and red from the lower light beam begins, so the overlap makes violet. (The total width of the cyan, blue, and violet is typically the width of the original shadow.)

Below the violet, there is no blue, so only red appears.

Below the red, there is red and some green, so the overlap makes orange.

Below the orange, there is some red and green, so the overlap makes yellow.

Below the yellow, the bottom white light is much brighter than any green or blue, so the bottom light beam appears white.

Prism refraction shows that white light is a mixture of different-wavelength light waves.

3. Brightness and lightness

At each wavelength, sources and surfaces radiate light intensity. Integrating over the frequency-intensity spectrum (spectral power distribution) calculates total intensity.

People sense different light wavelengths with different sensitivities. Color spaces use a standard colorimetric observer, who has specific weights for all visible-light wavelengths. Light intensity must be high enough that rod receptors do not contribute. Center fields must subtend more than four degrees of visual angle.

For the standard colorimetric observer, a luminosity function assigns weights to all visible-light wavelengths, forming an approximately bell-shaped curve. White-black opponency has low sensitivity to blue, medium sensitivity to red, high sensitivity to green, and very high sensitivity to yellow. The CIE luminosity function peaks at 555 nm:

Wavelength 400 nm has weight 0.

Wavelength 450 nm has weight <0.1.

Wavelength 500 nm has weight ~0.3.

Wavelength 555 nm has weight 1.

Wavelength 600 nm has weight ~0.6.

Wavelength 650 nm has weight ~0.1.

Wavelength 700 nm has weight 0.

Luminosity is total light power after weighting by the luminosity function, so it approximates sensed light power.

Luminance is total light intensity (luminous intensity) after weighting by the luminosity function. It is luminous-power density per surface area or solid angle. Luminance sums the weighted wavelength intensities of the frequency-intensity spectrum. Luminance is not about wavelength and carries no hue information. Brightness models luminance.
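As a sketch, luminance as a luminosity-weighted sum can be computed from a sampled spectrum. The weights are the approximate CIE values listed above; a real computation would use the full tabulated luminosity function:

```python
# Approximate CIE photopic weights at the sampled wavelengths above.
LUMINOSITY_WEIGHT = {400: 0.0, 450: 0.04, 500: 0.3, 555: 1.0,
                     600: 0.6, 650: 0.1, 700: 0.0}

def luminance(spectrum):
    """spectrum: {wavelength_nm: intensity}. Weighted total intensity."""
    return sum(LUMINOSITY_WEIGHT.get(wl, 0.0) * i
               for wl, i in spectrum.items())

# Equal-intensity light at 555 nm counts far more than at 450 nm.
assert luminance({555: 1.0}) > luminance({450: 1.0})
```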

Light sources have different frequency-intensity spectra:

CIE standard illuminant D65 closely matches the frequency-intensity spectrum of the average midday sunlight and diffused clear-sky light at European latitudes, with color temperature 6500 K. For white, CIE XYZ (x,y) = (0.31, 0.33). Other D illuminants have different color temperatures.

CIE standard illuminant E has the same intensity at each frequency, making CIE XYZ (x,y) = (1/3, 1/3) for white.

CIE standard illuminants F are for fluorescent sources.

CIE standard illuminants L are for LED sources.

3.1. Brightness

White-black opponency senses center relative brightness. Center color brightness comes from hue and from no-hue gray. People can distinguish more than 100 brightness levels. Brightness is for contrast between adjacent surfaces.

Color brightness depends on light intensity, and so, for reflected light, on illumination. Brightness depends on the illuminant frequency-intensity spectrum and the luminosity function. Brightness b is a root of intensity i: b = i^(1/gamma), where gamma > 1.8. As intensity increases, brightness increases more slowly. (If intensity increases exponentially, brightness increases linearly, so brightness approximates the logarithm of intensity. For example, if i = 2^b, brightness increases by one unit whenever intensity doubles.)
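A minimal sketch of the power-law relation b = i^(1/gamma), assuming the common display value gamma = 2.2 (the document only requires gamma > 1.8):

```python
def brightness(intensity, gamma=2.2):
    """Perceived brightness from 0-1 intensity, b = i^(1/gamma)."""
    return intensity ** (1.0 / gamma)

# Compression: halving intensity lowers brightness by much less than half.
assert brightness(1.0) == 1.0
assert brightness(0.5) > 0.5
```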

Brightness is a perceptual quantity, not potential or kinetic energy, and so is not physical intensity.

In dim light, rods take over from cones, so luminosity function changes. In bright light, irises narrow, so eyes control brightness.

Surface brightness is relative to illumination brightness and surrounding brightness. There is also a feeling of ambient brightness, from light sources and from reflections from all surfaces.

If intensity is too high, people see painful flashes and cannot discriminate color surface.

If intensity is zero, such as the black in the night sky, people still see a surface.

Eye iris opens wide at low intensity. Eye iris closes at high intensity. Receptors also change.

Note: A computer display's gamma and color-temperature settings affect color brightness and appearance.

3.2. White-black opponency

Surrounding central hue with white, or with complementary color, appears to darken hue. Surrounding central hue with black appears to lighten hue. Colors are not constant in different visual contexts.

3.3. Light controls

Brightness control: Brightness is average intensity level. Good brightness control increases all intensities by the same amount. Note: A computer display or television "Brightness" control sets "black level" by shifting the intensity scale so that no input signal gives the lowest (black) intensity.

Contrast control: Contrast is difference between lowest and highest intensity. Good contrast control has black at zero intensity and white at maximum intensity. Note: A computer display or television "Contrast" (or "Picture") control sets "white level" by multiplying intensity (gain), changing the ratio between black and white.

Exposure control: Exposure is being dark or light. Good exposure control integrates over the correct time interval to make lowest intensity black and highest intensity white.

Tint control: Tint is overall color. Good tint control is neutral and makes lowest intensity black and highest intensity white.
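The offset/gain distinction behind the brightness and contrast controls above can be sketched as follows. Function names and the 0-1 intensity values are illustrative assumptions:

```python
def adjust_brightness(intensities, offset):
    """True brightness change: add a constant offset to every level."""
    return [i + offset for i in intensities]

def adjust_contrast(intensities, gain):
    """True contrast change: multiply every level by a gain."""
    return [i * gain for i in intensities]

levels = [0.0, 0.5, 1.0]
shifted = adjust_brightness(levels, 0.1)    # every level rises by 0.1
stretched = adjust_contrast(levels, 1.5)    # black-to-white span grows

# Offset leaves the black-to-white span unchanged; gain stretches it.
assert abs((shifted[-1] - shifted[0]) - 1.0) < 1e-9
assert stretched[-1] - stretched[0] > 1.0
```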

3.4. Color lightness

Surface lightness (color lightness [Goethe, 1810]) is color brightness compared to white brightness. Surface lightness ranges from 0 for black to 1 for white. Lightness is a function of surface reflectance. People can distinguish more than 100 lightness levels. In the CIE L*a*b* color space, middle gray has 50% black and 50% white and has 50% lightness, so achromatic colors have lightness equal to white percent (absolute whiteness). For the same illuminant, blue is dark, red is medium, green is light, and yellow is very light [Goethe, 1810].

Color lightness is relative brightness. For emitted light, it is ratio to white light, and so is independent of light intensity. For reflected light, it is ratio to white, and so is independent of illumination. Black is darkest. Blue is dark. Red has medium lightness. Green is light. Yellow is very light. White is lightest. (For hues, lightness is highest at middle light frequencies and lowest at high and low frequencies.) For mixed colors, black, blue, and red typically add darkness. White, green, and yellow typically add lightness.

Value is color lightness expressed as a range between 0 and 10.

Lightness/value depends on contrast, so different illuminations make similar lightness/value.

Color lightnesses do not add linearly. More white increases lightness. More black decreases lightness. Mixtures of two colors increase color lightness.

3.5. Relative luminance

Relative luminance is surface luminance divided by white-surface luminance. Relative luminance ranges from 0% to 100%. Darkest color appears black, and lightest color appears white. For example, middle gray (50% black and 50% white) has 18% relative luminance in the CIE L*a*b* color space.

In color mixtures, primary-color relative luminances add linearly.

For surfaces, color lightness is a function of luminance. In the CIE L*a*b* color space, surface lightness L* is 116 * (luminance / white-point luminance)^(1/3) - 16, where luminance ranges from 0 to ~100, and white-point luminance is near 100. L* ranges from 0 for black to 100 for white. Note: Relative luminance (luminance / white-point luminance) is ((L* + 16) / 116)^3.

Luminance gradients have gradually varying brightness over long distances. Luminance-gradient causes include gradients of angles of incidence and/or reflection, shading over curved surfaces, shadows, and blur. Later visual processing integrates all luminance gradients to find all light sources and reflectors and uses ray tracing/casting to make scenes and space.

3.6. Reflectance

Surface reflectance is a function of percentage of impinging light reflected by surface. Standardly, reflectance equals relative luminance, so it ranges from 0% to 100%, and middle gray has 18% reflectance. In the CIE L*a*b* color space, surface lightness L* is 116 * (reflectance / white-point reflectance)^(1/3) - 16, where reflectance ranges from 0 to 1, and white-point reflectance is near 1. For example, middle gray has lightness 50 and reflectance 0.18. Note: Surface relative reflectance is ((L* + 16) / 116)^3.
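As a sketch, the CIELAB formula above and its inverse can be coded directly. This uses only the cube-root branch of the formula, which holds for relative luminance above roughly 0.009:

```python
def lab_lightness(y):
    """Relative luminance/reflectance Y in 0..1 -> CIELAB L* in 0..100."""
    return 116.0 * y ** (1.0 / 3.0) - 16.0

def relative_luminance(l_star):
    """Inverse: L* -> relative luminance/reflectance Y."""
    return ((l_star + 16.0) / 116.0) ** 3

# Middle gray: about 18% reflectance gives L* near 50, and round-trips.
assert 49 < lab_lightness(0.18) < 51
assert abs(relative_luminance(lab_lightness(0.18)) - 0.18) < 1e-9
```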

For color mixtures, relative reflectances add linearly:

Blue 0.07 and red 0.21 add to 0.28, and magenta is 0.28.

Blue 0.07 and green 0.71 add to 0.78, and cyan is 0.78.

Red 0.21 and green 0.71 add to 0.92, and yellow is 0.92.

Blue 0.07, red 0.21, and green 0.71 add to 0.99, and white is 1.00.

Blue 0.07 and yellow 0.92 add to 0.99, and white is 1.00.

Red 0.21 and cyan 0.78 add to 0.99, and white is 1.00.

Green 0.71 and magenta 0.28 add to 0.99, and white is 1.00.

0.5*Green 0.71 and red 0.21 add to 0.57, and 0.5*yellow 0.92 and 0.5*red 0.21 add to 0.57, and orange is 0.57.

Green 0.71 and 0.5*red 0.21 add to 0.81, and 0.5*yellow 0.92 and 0.5*green 0.71 add to 0.81, and chartreuse is 0.81.
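The additions above can be verified numerically. The relative reflectances match the Rec. 709 luma coefficients rounded to two places (an observation, not a claim from the document), and mixtures add linearly:

```python
# Rounded primary-color relative reflectances from the list above.
BLUE, RED, GREEN = 0.07, 0.21, 0.71

assert abs((BLUE + RED) - 0.28) < 1e-9            # magenta
assert abs((BLUE + GREEN) - 0.78) < 1e-9          # cyan
assert abs((RED + GREEN) - 0.92) < 1e-9           # yellow
assert abs((BLUE + RED + GREEN) - 0.99) < 1e-9    # white (~1.00)
assert abs((GREEN + 0.5 * RED) - 0.815) < 1e-9    # chartreuse (~0.81)
```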

For achromatic light sources and surfaces, surfaces appear black if reflectance is less than 3% and lightness is 14% or less, medium gray if reflectances are 10% to 25%, and white if reflectance is more than 80% and lightness is 91% or more.

3.7. Lightness of mixed colors

For mixed colors, green, yellow, and white make lighter color. Blue and black make darker color. Red makes color more medium.

The top part of Figure 2 shows color lightness for percentages of RGB-color-system primary, secondary, and tertiary colors mixed with white. RGB 75% is 25% white (and 0% black), RGB 50% is 50% white (and 0% black), and RGB 25% is 75% white (and 0% black). RGB 0% would be 100% white (and 0% black).

The middle part of Figure 2 shows color lightness for percentages of RGB-color-system primary, secondary, and tertiary colors mixed with black (because the screen background is black). RGB 75% is 25% black (and 0% white), RGB 50% is 50% black (and 0% white), and RGB 25% is 75% black (and 0% white). RGB 0% would be 100% black (and 0% white).

The bottom part of Figure 2 shows color lightness for percentages of RGB-color-system primary, secondary, and tertiary colors mixed with gray. RGB 75% is 25% gray, RGB 50% is 50% gray, and RGB 25% is 75% gray. RGB 0% would be 100% gray.

At every percentage level, green is lighter than red, and red is lighter than blue.

Reds change color lightness almost the same as grays do.

Blues change from dark to light only at 100% blue mixed with 75% or more white.

Greens change from light to dark only below 50% green.

RGB hues cannot have percentage less than 0% or greater than 100%.

Figure 3 shows grays from 100% whiteness to 0% whiteness.

Black surrounded by white looks different than black surrounded by light gray and black surrounded by dark gray.

White surrounded by black looks different than white surrounded by dark gray and white surrounded by light gray.

For grays, going from 0% white to 100% white is linear. Gray lightness equals grayscale whiteness. Because the screen background is black, grayscale blackness equals 100% minus grayscale whiteness. Grays cannot have black or white percentage less than 0% or greater than 100%.

3.8. Hue lightness compared to gray lightness

Hues appear brighter than gray of the same luminance (Helmholtz-Kohlrausch effect). The effect is greatest for blue and least for yellow: blue, red, magenta, green, orange, cyan, yellow. The effect varies inversely with hue lightness. The effect varies directly with hue chroma.

3.9. Primary-color lightness range

Because pure blue is dark, blue intensity from zero to maximum has small lightness range. Blue can add more of white than red, green, or yellow.

Because pure red is medium light, red intensity from zero to maximum has medium lightness range. Red can add more of white than green and yellow but less than blue.

Because pure green is light, green intensity from zero to maximum has large lightness range. Green can add less of white than blue and red but more than yellow.

Because pure yellow is very light, yellow intensity from zero to maximum has very large lightness range. Yellow can add less of white than blue, red, or green.

3.10. Primary-color and yellow lightness with different background

Hues appear to have different lightness depending on their surroundings. A white, gray, or black background affects color appearance.

Figure 4 shows primary-color and yellow lightness with different backgrounds. White and complementary-color backgrounds make color darker. Black background makes color lighter. Equally bright foreground and background makes intermediate lightness.

The effect is the same for smaller and larger sizes.

3.11. Intrinsically photosensitive retinal ganglion cells

Intrinsically photosensitive retinal ganglion cells (ipRGCs) contain melanopsin. They help synchronize brain internal clocks with daylight changes. They measure brightness linearly. Image brightness depends on both cones and ipRGCs.

3.12. Surface spatiotemporal pattern

You can directly see surface spatiotemporal patterns. In dim light, do not focus on anything. You may see one-degree-of-arc (in brighter light) to three-degrees-of-arc (in dimmer light) circular regions that flicker between black and white several times each second (variable resolution).

The dots do not move sideways, forward, or backward, but if you move your eyes, the pattern moves.

Variable resolution is due to receptor-signal oscillation, from competitive inhibition and excitation, when low light activates only a few receptors [Hardin, 1988] [Hurvich, 1981].

4. Saturation

Color saturation (color richness) is ratio of colorfulness to brightness. Saturation is independent of hue. Saturation is independent of illumination or intensity. Black, grays, and white have no saturation.

Colors have a saturated part and an unsaturated part. Color saturated part is hue as a mixture of red and blue, yellow and red, green and blue, or yellow and green. Color unsaturated part has no hue as a mixture of black and white (shade of gray). Saturation percent plus unsaturation percent is 100%. Color mixtures typically have lower saturation. Added black and added white unsaturate hues to equal extent.

Just noticeable differences in saturation (and hue) vary with wavelength. People can distinguish up to 30 saturation levels.

4.1. Hue chroma

Chroma is surface colorfulness relative to a white surface. Chroma is resistance to unsaturation by white. Chroma varies directly with colorfulness and strength. Chroma is independent of illumination.

Maximum chroma varies inversely with surface reflectance, so different hues have different maximum chroma values. Yellow (with very high color lightness) has lowest. Green (with high color lightness) has second lowest. Red (with medium color lightness) has second highest. Blue (with low color lightness) has highest. Relative maximum colorfulness is:

Black: 0

Middle gray: 0

White: 0

Cyan: 10 (because coolest and very light)

Yellow: 12 (because warm and very light)

Azure: 12 (because very cool and light)

Orange: 14 (because very warm and light)

Chartreuse: 18 (because neutral and light)

Green: 18 (because cool and light)

Red: 18 (because warmest and medium light)

Red-magenta: 18 (because very warm and medium dark)

Spring green: 18 (because very cool and light)

Magenta: 20 (because warm and dark)

Blue-magenta: 24 (because neutral and very dark)

Blue: 24 (because cool and darkest)

White, grays, and black have no hue, so they have no chroma.

Adding other hues, black, gray, or white to hue reduces chroma (and changes brightness).

Hue saturation is ratio of chroma to lightness. For chromatic surfaces with the same surface reflectance relative to a white surface's reflectance, blue surfaces appear most saturated, red surfaces appear highly saturated, green surfaces appear medium saturated, and yellow surfaces appear least saturated.

4.2. Chromaticity

Color chromaticness is hue saturation. Chromaticness ranges from 0% to 100%. Different hues have different chromaticness characteristics. Also, chromaticness depends on luminance. At different illuminations, a color's chromaticness varies. (Chromaticness is not about reflectance and so is about light, not surface.) Black, grays, and white have no hue and so have 0% chromaticness. Adding black or white decreases chromaticness.

Color purity is hue percent relative to white percent. Purity is the same as chromaticity. All hues have the same purity range, from 0% to 100%, so purity is independent of hue. Also, a color's purity is independent of luminance. At different illuminations, a color's purity stays constant. (Purity is not about reflectance and so is about light, not surface.) White has no hue and so has 0% purity: adding white makes lower purity. Black has no hue and undefined purity: adding black does not change purity.

Adding white (tinting) to a pure hue makes it less saturated, lighter, paler, and brighter. White adds to hues linearly. A hue has a tint at which it appears purest. For example, bright green may appear too light, whereas a darker green may look like pure green.

Subtracting hue, and so adding black (shading) to a pure hue, makes it less saturated, darker, deeper, and duller. Black subtracts hue linearly.

Adding gray to a pure hue (toning) changes its shade and tint. Adding one of the two adjacent colors to a pure hue changes its hue.

4.3. RGB saturation

RGB-color-system saturation is hue percent, equal to 100% minus black+white percent. For example:

Middle gray is 50% white and 50% black, black is 0% white and 100% black, and white is 100% white and 0% black, and they all have saturation 0%.

Gray red has 25% black, 25% white, and 50% pure red (it has 75% red that contributes 25% to white), and so has 50% saturation.

Gray magenta has 25% black, 25% white, and 50% pure magenta (it has 75% magenta that contributes 25% to white), and so has 50% saturation.
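The decomposition used in these examples can be sketched as: white percent is the minimum RGB component, black percent is 100 minus the maximum component, and saturation (hue percent) is the remainder:

```python
def decompose(r, g, b):
    """RGB percentages -> (hue_percent, white_percent, black_percent)."""
    hi, lo = max(r, g, b), min(r, g, b)
    return hi - lo, lo, 100 - hi

# Gray red (75% red, 25% green, 25% blue): 50% hue, 25% white, 25% black.
assert decompose(75, 25, 25) == (50, 25, 25)
# Middle gray: no hue, so 0% saturation.
assert decompose(50, 50, 50) == (0, 50, 50)
```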

Saturation with respect to white

The top third of Figure 2 shows RGB-color-system primary colors at a percent, mixed with white at 100% minus primary-color percent:

100% primary color and 0% white has 100% saturation, such as a mixture of 100% red, 0% green, and 0% blue.

75% primary color and 25% white has 75% saturation, such as a mixture of 100% red, 25% green, and 25% blue.

50% primary color and 50% white has 50% saturation, such as a mixture of 100% red, 50% green, and 50% blue.

25% primary color and 75% white has 25% saturation, such as a mixture of 100% red, 75% green, and 75% blue.

0% primary color and 100% white has 0% saturation, such as a mixture of 100% red, 100% green, and 100% blue.

Mixing two pure primary colors makes a color with no white (or black), so saturation is 100%. Mixing two half-white and half-primary-color colors makes a color with half white (and no black), so saturation is 50%.

Saturation with respect to black

The middle third of Figure 2 shows RGB-color-system primary colors at a percent, with black at 100% minus primary-color percent:

100% primary color and 0% black has 100% saturation.

75% primary color and 25% black has 75% saturation, such as a mixture of 75% red, 0% green, and 0% blue.

50% primary color and 50% black has 50% saturation, such as a mixture of 50% red, 0% green, and 0% blue.

25% primary color and 75% black has 25% saturation, such as a mixture of 25% red, 0% green, and 0% blue.

0% primary color and 100% black has 0% saturation.

Mixing two pure primary colors makes a color with no black (or white), so saturation is 100%. Mixing two 50% primary colors makes a color with half black (and no white), so saturation is 50%.

Saturation with respect to gray

The bottom third of Figure 2 shows RGB-color-system primary colors at a percent, with gray at 100% minus primary-color percent:

100% primary color and 0% gray has 100% saturation.

75% primary color and 25% gray has 75% saturation, such as a mixture of 88% red, 13% green, and 13% blue.

50% primary color and 50% gray has 50% saturation, such as a mixture of 75% red, 25% green, and 25% blue.

25% primary color and 75% gray has 25% saturation, such as a mixture of 63% red, 38% green, and 38% blue.

0% primary color and 100% gray has 0% saturation.

Saturation with respect to both black and white

Figure 5 shows RGB-color-system primary colors mixed with middle gray, light gray, or dark gray.

Middle gray has one part black and one part white:

100% primary color, 0% black, and 0% white has 0% middle gray and 100% saturation, such as a mixture of 100% red, 0% green, and 0% blue.

75% primary color, 12% black, and 12% white has 25% middle gray and 75% saturation, such as a mixture of 88% red, 13% green, and 13% blue.

50% primary color, 25% black, and 25% white has 50% middle gray and 50% saturation, such as a mixture of 75% red, 25% green, and 25% blue.

25% primary color, 37% black, and 37% white has 75% middle gray and 25% saturation, such as a mixture of 62% red, 37% green, and 37% blue.

0% primary color, 50% black, and 50% white has 100% middle gray and 0% saturation, such as a mixture of 50% red, 50% green, and 50% blue.

Light gray has one part black and two parts white:

75% primary color, 8% black, and 17% white has 25% light gray and 75% saturation, such as a mixture of 92% red, 17% green, and 17% blue.

50% primary color, 17% black, and 33% white has 50% light gray and 50% saturation, such as a mixture of 83% red, 33% green, and 33% blue.

25% primary color, 25% black, and 50% white has 75% light gray and 25% saturation, such as a mixture of 75% red, 50% green, and 50% blue.

Dark gray has two parts black and one part white:

75% primary color, 17% black, and 8% white has 25% dark gray and 75% saturation, such as a mixture of 83% red, 8% green, and 8% blue.

50% primary color, 33% black, and 17% white has 50% dark gray and 50% saturation, such as a mixture of 67% red, 17% green, and 17% blue.

25% primary color, 50% black, and 25% white has 75% dark gray and 25% saturation, such as a mixture of 50% red, 25% green, and 25% blue.

Adding middle gray changes blue little, darkens green, and seems to change red's color.

Adding light gray lightens blue, darkens green some, and washes out red.

Adding dark gray darkens blue, darkens green, and makes red brown.
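The gray-mixture examples above follow one rule: the primary channel is 100 minus the black percent, and the other two channels equal the white percent. A minimal sketch, with the black-to-white ratio of the gray as a parameter:

```python
def mix_with_gray(p, k_black=1, k_white=1):
    """Primary at p% mixed with (100-p)% gray of ratio k_black:k_white.
    Returns rounded (primary channel %, other two channels %)."""
    gray = 100 - p
    black = gray * k_black / (k_black + k_white)
    white = gray * k_white / (k_black + k_white)
    return round(100 - black), round(white)

# 50% primary with middle gray: 75% red, 25% green, 25% blue.
assert mix_with_gray(50) == (75, 25)
# 50% primary with dark gray (2 black : 1 white): 67% red, 17% others.
assert mix_with_gray(50, k_black=2, k_white=1) == (67, 17)
```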

4.4. Primary-color saturation range

Primary-color saturation range from pure hue to black differs for each primary color and for yellow, as does the just-perceivable difference in saturation:

Blue: Because lightness difference between all black and pure blue is smaller, saturation range is smaller, with fewer saturation levels. Blue requires a larger difference in saturation to make a perceivable difference in saturation.

Red: Because lightness difference between all black and pure red is medium, saturation range is medium, with a medium number of saturation levels. Red requires a medium difference in saturation to make a perceivable difference in saturation.

Green: Because lightness difference between all black and pure green is larger, saturation range is larger, with more saturation levels. Green requires a smaller difference in saturation to make a perceivable difference in saturation.

Yellow: Because lightness difference between all black and pure yellow is largest, saturation range is largest, with most saturation levels. Yellow requires the smallest difference in saturation to make a perceivable difference in saturation.

Primary-color saturation range from pure hue to white differs for each primary color and for yellow, as does the just-perceivable difference in saturation:

Blue: Because lightness difference between all white and pure blue is larger, saturation range is larger, with more saturation levels. Blue requires a smaller difference in saturation to make a perceivable difference in saturation.

Red: Because lightness difference between all white and pure red is medium, saturation range is medium, with a medium number of saturation levels. Red requires a medium difference in saturation to make a perceivable difference in saturation.

Green: Because lightness difference between all white and pure green is smaller, saturation range is smaller, with fewer saturation levels. Green requires a larger difference in saturation to make a perceivable difference in saturation.

Yellow: Because lightness difference between all white and pure yellow is smallest, saturation range is smallest, with fewest saturation levels. Yellow requires the largest difference in saturation to make a perceivable difference in saturation.

For saturation with respect to both black and white (gray), saturation ranges are similar for blue, red, green, and yellow.

4.5. Saturation with black or white and transparency of foreground color

Figure 2 shows lightness for primary, secondary, and tertiary hues at different saturations with respect to white or black.

In computer graphics, colors can be in foreground or background. Foreground colors can have transparency. 0% transparency (opaqueness) means that no background color comes through. Opaqueness is maximum foreground-color density. 100% transparency means that all background color comes through. Transparency is zero foreground-color density.

With a white background, opacity is the same as saturation with respect to white, and transparency is the same as no saturation. Blue looks most opaque, and green and yellow look least opaque.

With a black background, opacity is the same as saturation with respect to black, and transparency is the same as no saturation. Blue looks most opaque, and green and yellow look least opaque.
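The opacity/transparency description above corresponds to standard source-over alpha blending. A minimal per-channel sketch (the function name and convention are illustrative assumptions, not from the text):

```python
def composite(fg, bg, transparency):
    """Source-over compositing of one foreground channel over one background
    channel, with values in 0..1. transparency = 0.0 is opaque (only the
    foreground shows); transparency = 1.0 lets all background through."""
    opacity = 1.0 - transparency
    return opacity * fg + transparency * bg

# 0% transparency: no background color comes through.
print(composite(0.2, 0.9, 0.0))  # -> 0.2
# 100% transparency: all background color comes through.
print(composite(0.2, 0.9, 1.0))  # -> 0.9
```

At intermediate transparencies the result is a weighted average, which is why a partly transparent hue over white looks desaturated and over black looks darkened.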

4.6. Saturation varies with luminance

Saturation varies with luminance (Hunt effect) [Hunt, 1952].

Decreased light intensity and color brightness and saturation

If center light intensity decreases, cones have lower output, so white-black opponent-process input decreases and brightness decreases. Also, both red-green and yellow-blue opponent-process inputs are closer to zero, so hue has lower saturation.

Adding black pigment to center hue pigment makes white-black opponent-process input decrease, so color is darker. Also, both red-green and yellow-blue opponent-process inputs decrease, so hue has lower saturation.

Increased light intensity and color brightness and saturation

If center light intensity increases, cones have higher output, so white-black opponent-process input increases and brightness increases. Also, both red-green and yellow-blue opponent-process inputs are closer to zero, so hue has lower saturation.

Adding white pigment to center hue pigment makes white-black opponent-process input increase, so color is lighter. Also, both red-green and yellow-blue opponent-process inputs decrease, so hue has lower saturation.

5. Color temperature and vividness

Hue temperature is relative coolness/dullness or warmness/vividness [Goethe, 1810]. Red and yellow are warm and vivid, and blue and green are cool and dull. Because they have no net hue, both black and white have neutral temperature and undefined vividness.

Color mixtures have more neutral color temperature, because more wavelengths make both yellow-blue and red-green opponency inputs move toward zero.

Other names/concepts for color temperature are liveliness/dullness, boldness/quietness, activity/stillness, and salience/background.

Color phenomena relate to color temperature/vividness:

Radial motion is along sight line. Blues and greens recede from observer. Reds, oranges, and yellows approach observer.

Transverse motion is across sight line. Blues and greens contract transverse to observer. Reds, oranges, and yellows expand transverse to observer.

Size is relative surface area. Blues and greens have smaller size. Reds, oranges, and yellows have larger size.

Texture is roughness or smoothness. Blues and greens have smoother surface texture. Reds, oranges, and yellows have rougher surface texture.

Because they have no hue, white and black have neither recede nor approach, neither contract nor expand, are neither in background nor in foreground, are neither smaller nor larger, and are neither smoother nor rougher.

6. Color strength

Hue strength is dominance in mixtures of two hues. Other names/concepts for color strength are density, heaviness, solidity, compactness, coverage, and opaqueness. Figure 2 shows mixtures of two primary colors to make secondary and tertiary colors, and shows that primary colors have different strengths. Red and blue are strong. Green and yellow are weak. Black is strong. White is weak.

For hues, color strength is lowest at middle light frequencies and highest at high and low frequencies, and is lowest at high brightness and highest at low brightness, so strength correlates inversely with lightness.

For black and white, color strength is lowest at high brightness and highest at low brightness, so strength correlates inversely with lightness.

Primary-color strength varies inversely with lightness, and directly with vividness and chroma. Lightness is less important than chroma and vividness.

6.1. Blue, red, green, and yellow hue-canceling strengths

Adding equal intensities of blue, red, and green light, and adding equal intensities of blue and yellow light, make gray. However, the same intensity of blue contributes less to brightness than the same intensity of red, green, or yellow. The same intensity of red contributes less to brightness than the same intensity of green or yellow. The same intensity of green contributes less to brightness than the same intensity of yellow. Therefore, in color mixtures, blue is a stronger canceller of hue than red, green, or yellow; red is a stronger canceller of hue than green or yellow; and green is a stronger canceller of hue than yellow.

6.2. Blue vs. green in color mixtures

The color between cyan and spring green appears half green and half blue, whereas cyan appears more blue than green, spring green appears green with no blue, and azure appears to have no green. Therefore, in color mixtures, blue is stronger than green. Note that pure blue is darker and has more chroma than pure green, while both have similar vividness.

6.3. Blue vs. red in color mixtures

Magenta appears half red and half blue, whereas blue-magenta appears to have more blue than red, and red-magenta appears red with little blue. Therefore, in color mixtures, blue and red have similar strength. Note that red is lighter and more vivid than blue, while both have similar chroma.

6.4. Yellow vs. green in color mixtures

Chartreuse appears to have more green than yellow, whereas chartreuse-yellow appears to have equal green and yellow, and green-chartreuse appears green. Therefore, in color mixtures, green is stronger than yellow. Note that yellow is lighter and more vivid than green, while both have similar chroma.

6.5. Yellow vs. red in color mixtures

Orange appears to have more red than yellow, whereas orange-yellow appears to have equal yellow and red, and red-orange appears to have much more red than yellow. Therefore, in color mixtures, red is stronger than yellow. Note that yellow is lighter, and has less chroma, than red, while both have similar vividness.

6.6. Red vs. green, and blue vs. yellow, in color mixtures

Red and green mixtures, and blue and yellow mixtures, have no net hue. Red is a stronger canceller of hue than green, and blue is a stronger canceller of hue than yellow.

6.7. White in color mixtures and color strength

Light hues appear to have more hue than white. Therefore, white has low color strength. Note that white is light, with no chroma or vividness.

6.8. Black in color mixtures and color strength

Dark hues appear to have more black than hue. Therefore, black has high color strength. Note that black is dark, with no chroma or vividness.

Note: Hue relative strengths stay the same as they get darker (Figure 2). Also, primary, secondary, and tertiary hues maintain hue as they get darker (Figure 2). Dark yellow is olive, which has no green, but yellow-black's black appears greenish, perhaps because green is brighter than red. Dark orange is brown.

7. Color mixtures

Red and yellow, red and blue, blue and green, and green and yellow can mix. Color mixtures cannot make blue or red. Black and white can mix with all hues.

A color mixes no-hue gray (a mixture of black and white) and hue (a mixture of red and yellow, red and blue, blue and green, or green and yellow). Color-mixture property values are weighted averages of the component colors' property values.

The main color mixtures are gray, purple, cyan, orange, chartreuse, pink, brown, tan, and cream:

Black and white light and/or pigments mix to make gray.

Blue and red light and/or pigments mix to make magenta/purple.

Blue and green light and/or pigments mix to make cyan.

Yellow and red light and/or pigments mix to make orange.

Yellow and green light and/or pigments mix to make chartreuse.

White and red (or white, blue, and red) mix to make pink, a pale red (or pale magenta).

Red, yellow, and black mix to make brown, a dark orange.

White, red, and yellow mix to make tan, a pale brown.

White and yellow mix to make cream, a pastel yellow.

Blue and yellow light mix to make low, middle, and high frequencies, with no dominant frequency, so color is white. Blue and yellow pigments have some green, and the blue and yellow add to cancel hue, so the mixture has dark green color.

Green and red light add middle to low frequencies, so color is yellow. Green and red pigments have some yellow, and the red and green add to cancel hue, so the mixture has dark yellow color.

Note: Magenta and cyan light and pigments mix to make blue. Magenta and yellow light and pigments mix to make red. Cyan and yellow light and pigments mix to make green.

Color Coordinates

Every color has a unique set of values of three independent properties, which do not add, interfere, displace, or overlap each other. Therefore, colors have locations in three-dimensional coordinate systems.

1. CIE L*a*b* color space, with lightness and two opponency coordinates

The CIELAB or CIE L*a*b* color space (1976) is for diffuse reflection from surfaces illuminated by a standard illuminant. The non-linear coordinates match the non-linear perceptions, so coordinate differences are linearly proportional to perceived color differences, making perceptual uniformity. The CIE L*a*b* color space covers the human gamut.

L* equals lightness, which ranges from 0 (black) to 100 (brightest white). The L* coordinate is perpendicular to the planes of the two opponency coordinates. The L* coordinate determines relative color brightness, which ranges from darkness to lightness. Lightness L* equals 116 * (reflectance)^(1/3) - 16, for reflectances greater than about 0.009. For example, an 18%-reflectance achromatic gray surface has lightness 50 (0.50 normalized). Reflectance = ((16 + L*) / 116)^3, or, with normalized lightness, ((0.16 + lightness) / 1.16)^3. Note: The best approximation using a single power function is normalized lightness equals (reflectance)^0.42. For example, an 18%-reflectance achromatic gray surface has normalized lightness 0.49. Reflectance = (lightness)^(1 / 0.42).
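The cube-root formula and the power-function approximation can be checked numerically. A sketch (the linear branch for very dark surfaces and the threshold 0.008856 follow the standard CIELAB definition; function names are illustrative):

```python
def lab_lightness(reflectance):
    """CIE L* from relative reflectance (0..1); returns 0..100."""
    if reflectance > 0.008856:              # (6/29)**3, cube-root branch
        return 116.0 * reflectance ** (1.0 / 3.0) - 16.0
    return 903.3 * reflectance              # linear branch near black

def lightness_approx(reflectance):
    """Single power-function approximation, normalized 0..1."""
    return reflectance ** 0.42

# 18%-reflectance achromatic gray lands near the middle of the scale.
print(round(lab_lightness(0.18), 1))     # -> 49.5
print(round(lightness_approx(0.18), 2))  # -> 0.49
```

The two forms agree to within about one lightness unit over most of the reflectance range, which is why the single power function is a usable approximation.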

a* is for red-green opponency, which ranges from -1 to +1. The a* coordinate determines the intensity of red or green that is not in yellow and/or white.

b* is for yellow-blue opponency, which ranges from -1 to +1. The b* coordinate determines the intensity of blue or yellow that is not in white. Hues are mixtures of red or green and yellow or blue.

The a* and b* axes are perpendicular. Each a*b* plane has a square of colors centered on a* = 0, b* = 0. Colors are relative to a* = 0 and b* = 0 white point, typically based on illuminant D50. Hue and saturation depend on linear combinations of a* and b*. The a* and b* coordinates together determine hue and saturation.

In CIE (1976), red has wavelength 650 nm, green has wavelength 546 nm, and blue has wavelength 400 nm.

sRGB: CIE L*a*b* can transform to sRGB. As an example, middle gray has CIE L*a*b* lightness 50% and reflectance 18%, with RGB coordinates (0.47, 0.47, 0.47) and sRGB brightness 47% (with gamma correction 2.44).

Reflectances: Black has lightness 0 and reflectance 0. White has lightness 100 and reflectance 1. Blue has lightness 32 and reflectance 0.07. Red has lightness 53 and reflectance 0.21. Green has lightness 88 and reflectance 0.71. Reflectance is linear, but lightness is not linear. Mixed colors add reflectances, so white is 0.07*B + 0.21*R + 0.71*G. Complementary colors have reflectances that add to 1, so, for example, magenta's 0.28 and green's 0.71 add to about 1.

1.1. CIE XYZ color space, with luminance and two hue coordinates

The CIE XYZ color space (1931) is for diffuse reflection from surfaces illuminated by a standard illuminant. It is not perceptually uniform, so coordinate differences are not proportional to perceived color differences. The CIE XYZ color space covers the human gamut.

Y equals luminance, which tracks the luminosity function (a bell-shaped curve peaking at 555 nm) and approximates medium-wavelength-cone output. For different media, the white point can have different luminances. Luminance can be set to relative luminance (surface reflectance), which ranges from 0 (darkest black) to 1 (brightest white). The luminance coordinate is perpendicular to the XZ plane.

Z approximates short-wavelength-cone output.

X is a linear function of long-, medium-, and short-wavelength-cone outputs.

The XZ plane has the color chromaticities for relative luminance 1. Pure hues make an upside-down U shape. Green is at top left. Blue is at bottom left. Red is at lower right. Purples go from blue to red. Hue is angle around center. Saturation is distance from center. The center has coordinates (0.33, 0.33), where color is white. Color purity is ratio of (color's distance from white point) to (distance of hue's boundary point from white point).
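The purity ratio is a simple distance computation in the chromaticity plane. A sketch (the coordinates in the example are illustrative, not measured chromaticities):

```python
import math

def purity(color_xy, white_xy, boundary_xy):
    """Excitation purity: the color's distance from the white point divided
    by the spectral-boundary point's distance from the white point, where
    all three points are assumed to lie on the same hue direction."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(color_xy, white_xy) / dist(boundary_xy, white_xy)

# A color halfway between the white point and its boundary point has purity 0.5.
print(round(purity((0.40, 0.40), (0.33, 0.33), (0.47, 0.47)), 2))  # -> 0.5
```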

sRGB: CIE XYZ can transform to sRGB. As an example, middle gray has CIE XYZ luminance/reflectance (relative whiteness) 50% and lightness 76%, with RGB coordinates (0.5, 0.5, 0.5) and sRGB brightness 74% (with gamma correction 1.0).

1.2. CIE LUV color space, with luminance and two hue coordinates

The CIELUV or CIE L*u*v* color space (1976) is perceptually uniform, so coordinate differences are linearly proportional to perceived color differences.

L* is the same lightness function of the CIE XYZ space's Y as in CIE L*a*b*.

u* and v* are transformations of the CIE XYZ space's chromaticity coordinates.

1.3. CIE RGB color space, with luminance and two hue coordinates

The CIE XYZ color space (1931) is a linear transformation of the CIE RGB color space (1931), whose three primary colors were red 700 nm, green 546 nm, and blue 436 nm. To match a single-wavelength color, observers added three brightnesses:

From 380 nm (end point) to 436 nm (the red and green starting point and blue maximum point) has increasing blue brightness, zero red brightness, and zero green brightness.

From 436 nm to 490 nm (the blue and green crossing point) has decreasing blue brightness, increasing negative red brightness, and increasing green brightness.

From 490 nm to 515 nm (red minimum point) has decreasing blue brightness, increasing negative red brightness, and increasing green brightness.

From 515 nm to 546 nm (the blue endpoint, blue and red crossing point, and green maximum point) has decreasing blue brightness, decreasing negative red brightness, and increasing green brightness.

From 546 nm to 580 nm (the red and green crossing point) has zero blue brightness, increasing red brightness, and decreasing green brightness.

From 580 nm to 610 nm (red maximum point) has zero blue brightness, increasing red brightness, and decreasing green brightness.

From 610 nm to 630 nm (the green end point) has zero blue brightness, decreasing red brightness, and decreasing green brightness.

From 630 nm to 700 nm (end point) has zero blue brightness, decreasing red brightness, and zero green brightness.

The luminosity function (normal distribution) for blue centers on 440 nm. For green, it centers on 530 nm. For red, it centers on 610 nm and also centers negatively on 515 nm. The three functions have the same area, normalizing the three brightnesses.

The luminance/brightness, equal to Y in CIE XYZ, is 0.18*red + 0.81*green + 0.01*blue, where red, green, and blue range from 0 to 1, and 0.18 + 0.81 + 0.01 = 1.

The CIE RGB color space covers the human gamut.

2. Munsell color space, with value, hue, and chroma

A coordinate system (Munsell color space) is for diffuse reflection from surfaces illuminated by a standard illuminant. The coordinate system indicates hue by angle around a circle, saturation (chroma) by distance from axis along circle radii, and lightness (value) by position along black-white axis along perpendicular through circle centers.

2.1. Value

Munsell value is a function of relative-luminance cube root. Value is on vertical axis and ranges from 0 to 10, for example:

Black: 0

Pure blue: 4

Pure red: 5

Pure green: 8

Pure yellow: 9

White: 10

For mixed colors, adding values is not linear.

2.2. Hue

Hues are in a circle, with equal change in hue for every angle. The five main hues are blue, red, yellow, green, and purple.

Complementary colors are in approximately opposite directions.

Black, white, and grays are not hues, and so are only along vertical axis.

2.3. Chroma

Chroma (saturation) is distance from vertical axis. All chroma levels have equal distance.

Green has fewest levels and so lowest maximum chroma value.

Yellow has second-fewest levels and so second-lowest maximum chroma value.

Red has second-most levels and so second-highest maximum chroma value.

Blue has most levels and so highest maximum chroma value.

3. Natural Color System, with hue, blackness, and chromaticness

A coordinate system (Natural Color System NCS) is for diffuse reflection from surfaces illuminated by a standard illuminant. It uses hue, blackness, and chromaticness coordinates.

There are six elementary colors: black, white, blue, red, green, and yellow. NCS defines the six colors by human-perception experiments. Black and white are the same as in RGB color space, black with 0% of R, G, and B, and white with 100% of R, G, and B. NCS blue mixes B with G. NCS red mixes R with B. NCS green mixes G with B. NCS yellow mixes R and G. Note that there are no primary or secondary colors, only the four unique hues.

3.1. Hue

Hue is the percent (NCS first-hue percent) of NCS blue, red, green, or yellow, depending which is more, and the percent (NCS second-hue percent) of NCS blue, red, green, or yellow, depending which is less. (The remaining two of NCS blue, red, green, and yellow are not present.) For chromatic colors, the two numbers add to 100%:

Pure, dark, and light NCS blues, reds, greens, and yellows have first-hue 100% and second-hue 0%.

Pure, dark, and light NCS cyans and magentas have first-hue 50% and second-hue 50%.

Pure, dark, and light NCS oranges have NCS yellow 50% and NCS red 50%, as an example of tertiary colors.

Pure, dark, and light NCS red-magentas have NCS red 75% and NCS blue 25%, as another example of tertiary colors.

Black, grays, and white have 0% for NCS blue, red, green, and yellow.

3.2. Blackness

Blackness is black percent. For example, pure yellow has 0% blackness. Olive has 50% blackness. Black has 100% blackness.

3.3. Chromaticness

Chromaticness is most-saturated hue percent and is 100% minus black percent minus white percent. For example:

Pure yellow has 100% yellow, 0% black, and 0% white, and has 100% chromaticness.

Olive has 50% yellow, 50% black, and 0% white, and has 50% chromaticness.

Light yellow has 50% yellow, 0% black, and 50% white, and has 50% chromaticness.

Gray yellow has 50% yellow, 25% white, and 25% black, and has 50% chromaticness.

Black, grays, and white have no hue and so 0% chromaticness.

3.4. Whiteness

Whiteness is white percent and is 100% minus blackness minus chromaticness. For example, pure yellow has 100% - 0% - 100% = 0% whiteness. Gray yellow has 100% - 25% - 50% = 25% whiteness.

Saturation is ratio of chromaticness to (chromaticness plus whiteness).
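The blackness/whiteness/chromaticness bookkeeping above is percentage arithmetic. A sketch (function name and guard for achromatic black are illustrative assumptions; real NCS notation differs):

```python
def ncs_components(chromaticness, blackness):
    """From chromaticness and blackness percents (summing to at most 100),
    derive whiteness (the remainder) and saturation (chromaticness relative
    to chromaticness plus whiteness)."""
    whiteness = 100 - blackness - chromaticness
    denominator = chromaticness + whiteness
    saturation = chromaticness / denominator if denominator else 0.0
    return whiteness, saturation

# Gray yellow: 50% chromaticness, 25% blackness -> 25% whiteness.
whiteness, saturation = ncs_components(50, 25)
print(whiteness, round(saturation, 2))  # -> 25 0.67
```

Olive (50% chromaticness, 50% blackness) gets whiteness 0 and saturation 1, matching the definition of saturation as hue content relative to hue-plus-white.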

4. Ostwald color system, with hue, reflectance, and purity

A coordinate system (Ostwald color system) is for diffuse reflection from surfaces illuminated by a standard illuminant. It uses hue, reflectance, and purity coordinates:

A circle has 24 equilateral triangles, for 24 hues at their average wavelengths. Each triangle has one side along the perpendicular (the central gray axis) through the circle center, and its third point on the circle circumference. Each triangle has seven squares along the perpendicular and 28 (7 + 6 + 5 + 4 + 3 + 2 + 1) squares total.

Lower reflectance (surface's reflected light energy per second divided by received light energy per second) means less hue and more black. Higher reflectance means more hue and less black.

Purity is hue percent relative to white percent. Lower purity means less hue and more white. Higher purity means more hue and less white.

Triangle third point has hue maximum purity and hue maximum reflectance.

The top point on the side beside the perpendicular has highest reflectance and lowest purity and so is gray/white.

The bottom point on the side beside the perpendicular has lowest reflectance and lowest purity and so is black.

Along any vertical, hue has the same purity. Reflectance decreases from vertical center point down and increases from vertical center point up.

Along the horizontal, hue decreases in purity from triangle third point to side beside the perpendicular.

5. Color cube, with blue, red, and green coordinates

The color cube has rectangular blue B, red R, and green G positive-only coordinates, for the three additive primary colors. R, B, and G come from three light emitters with specific wavelengths, for example, 436 nm (blue), 546 nm (green), and 610 nm or 700 nm (red). R, G, and B range from 0 to 1 (for percentages), or from 0 to 255 (for 8-bit or hex values). Every color in the color cube has a unique set of B, R, and G percentages.

Different vectors from coordinate origin represent different colors. Vectors from coordinate origin that are in the R, B, or G axis represent colors with one primary color and black. Vectors from coordinate origin that are in the RG, RB, or GB plane represent colors with two primary colors and black. Color-cube diagonal represents black, grays, and white; complementary colors add their two vectors to put resultant vector on the diagonal.

RGB color spaces do not cover the human color gamut. They are about machine color production and color mixing.

The Standard RGB (sRGB) color space (of Microsoft and Hewlett-Packard, 1996) uses the same three primary colors, the same D65 illuminant (and so same white standard), the same standard colorimetric observer, and the same luminosity function as the ITU-R BT.709 specification and the CIE XYZ color space.

Note: Adobe RGB (for example, in Adobe Photoshop) has a wider gamut of colors than sRGB.

5.1. Brightness, intensity, and voltage

Selecting R, G, and B values sends voltages to pixels, whose red, green, and blue intensities are power functions of voltages and so non-linear. Expressing intensities in the range from 0 to 1, light intensity = ((x + 0.055) / 1.055)^2.4 (gamma expansion, with gamma = 2.4), where x = R, G, or B voltage, for x > 0.04. For example, R = 0 has intensity 0, R = 0.5 has intensity 0.21, and R = 1 has intensity 1.

People see those light intensities as brightnesses, which are root (approximately logarithmic) functions of intensity. Expressing brightness in the range from 0 to 1, brightness = 1.055 * intensity^(1/2.4) - 0.055 (gamma compression), for intensity > 0.003. For example, intensity = 0 has brightness 0, intensity = 0.21 has brightness 0.50, and intensity = 1 has brightness 1. Note that brightness then matches the RGB voltages.

As an example, middle gray has RGB coordinates (0.5, 0.5, 0.5) and sRGB brightness 0.5. By comparison, middle gray has CIE L*a*b* lightness 0.53 and has CIE XYZ reflectance 0.21.
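The gamma expansion and compression are inverses. A sketch of the standard sRGB transfer pair, including the small linear segment near zero (the text rounds its thresholds to 0.04 and 0.003):

```python
def srgb_to_linear(v):
    """Gamma expansion: encoded value (voltage) 0..1 -> linear intensity 0..1."""
    if v > 0.04045:
        return ((v + 0.055) / 1.055) ** 2.4
    return v / 12.92          # linear segment near zero

def linear_to_srgb(y):
    """Gamma compression: linear intensity 0..1 -> encoded brightness 0..1."""
    if y > 0.0031308:
        return 1.055 * y ** (1.0 / 2.4) - 0.055
    return 12.92 * y          # linear segment near zero

# Encoded 0.5 is only about 21% linear intensity; the round trip
# recovers the original encoded value.
print(round(srgb_to_linear(0.5), 2))                  # -> 0.21
print(round(linear_to_srgb(srgb_to_linear(0.5)), 2))  # -> 0.5
```

The round trip is why RGB coordinate (0.5, 0.5, 0.5) and sRGB brightness 0.5 coincide even though the underlying light intensity is about 0.21.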

5.2. Hue mixtures

Hue mixes pure primary color, with percent equal to (RGB-highest-percent-primary-color percent minus RGB-middle-percent-primary-color percent), and pure secondary color, with percent equal to (RGB-middle-percent-primary-color percent minus RGB-lowest-percent-primary-color percent).

At highest intensity, representative RGB values for the wavelength spectrum are:

Violet (darkest): 380 nm RGB(97,0,97)

Violet (medium): 387 nm RGB(116,0,128)

Violet (medium): 395 nm RGB(127,0,157)

Violet (bright): 420 nm RGB(106,0,255)

Blue (bright): 440 nm RGB(0,0,255)

Azure (bright): 461 nm RGB(0,127,255)

Cyan (bright): 490 nm RGB(0,255,255)

Spring green (bright): 502 nm RGB(0,255,123)

Green (lime): 510 nm RGB(0,255,0)

Chartreuse (bright): 540 nm RGB(129,255,0)

Yellow (bright): 580 nm RGB(255,255,0)

Orange (bright): 617 nm RGB(255,130,0)

Red (bright): 645 nm to 700 nm RGB(255,0,0)

Red (maroon): 766 nm RGB(128,0,0)

Red (maroon): 780 nm RGB(97,0,0)

5.3. Blue, red, and green percentages and color mixing

The RGB color space has blue-in-all-colors B, red-in-all-colors R, and green-in-all-colors G percentages. In that color space, pure primary color, pure secondary color, black, and white are definite and have percentages. Colors mix the three additive primary colors. For example, gray orange has 25% blue, 75% red, and 50% green.

The RGB highest-percentage primary color contributes to white, secondary color, and/or pure primary color.

The RGB middle-percentage primary color contributes to white and/or secondary color, and never appears as a pure primary color.

The RGB lowest-percentage primary color contributes only and all to white, and never appears as a pure primary color or in a secondary color.

Black percent

The RGB color space has a black screen, so, when B, R, and G are all 0, color is black. In color mixtures, black percent equals 100% minus RGB highest percent, because that primary color is in pure primary color, pure secondary color, and white. For example, gray-orange's 75% red results in 25% black.

Adding or subtracting that primary color is the only way to change black percent. Note: If color has more than one primary color, decreasing that primary color changes hue. Decreasing the primary color with RGB middle percent by the same proportion keeps hue the same. Adding or subtracting the primary color with RGB lowest percent changes white percent, but not black percent.

White percent

In color mixtures, white percent equals RGB lowest percent, because that primary color is all in white. (White is presence of all three primary colors.) For example, gray-orange's 25% blue results in 25% white. Note: Because adding a third primary color adds white, that third primary color is never in a hue.

Adding or subtracting that primary color is the only way to change white percent. Adding or subtracting the same percent of all three primary colors increases white percent (and decreases black percent) by that percent. Note: A hue has two primary colors. Changing intensity of the primary color with RGB lowest percent changes hue if it becomes the primary color with RGB middle percent. Adding or subtracting the same percent of all three primary colors does not change hue.

Gray percent

In color mixtures, gray percent equals RGB lowest percent plus 100% minus RGB highest percent. For example:

Gray orange has red at 75%, green at 50%, and blue at 25%, and its white percent is 25% and black percent is 25%, so it is 50% medium gray.

Dark gray orange has red at 60%, green at 40%, and blue at 20%, and its white percent is 20% and black percent is 40%, so it is 60% dark gray.

Light gray orange has red at 80%, green at 60%, and blue at 40%, and its white percent is 40% and black percent is 20%, so it is 60% light gray.

Black, white, primary color, and secondary color

Pure primary color (red, green, or blue) percent is RGB highest percent minus RGB middle percent. For example, gray-orange's 25% red equals 75% red minus 50% green.

Pure secondary color (yellow, cyan, or magenta) percent is RGB middle percent minus RGB lowest percent (white percent). For example, gray-orange's 25% yellow equals 50% green minus 25% blue.

Note: Yellow mixes equal percentages of red and green, with no blue, so yellow percentage equals the lower of R or G percentage, minus the B percentage (white percent), so colors with yellow have no pure red or pure green. (However, yellow has no red and has no green, but is a main color.)
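The black, white, pure-primary, and pure-secondary arithmetic in this section can be collected into one function. A sketch (the function and key names are illustrative):

```python
def decompose(r, g, b):
    """Split an RGB color (percents 0..100) into black, white,
    pure-primary, and pure-secondary percents, per the rules above."""
    hi, mid, lo = sorted((r, g, b), reverse=True)
    return {
        "black": 100 - hi,      # 100% minus RGB highest percent
        "white": lo,            # RGB lowest percent is all in white
        "primary": hi - mid,    # pure primary color percent
        "secondary": mid - lo,  # pure secondary color percent
    }

# Gray orange: 75% red, 50% green, 25% blue -> 25% each of
# black, white, pure red, and pure yellow.
print(decompose(75, 50, 25))
# -> {'black': 25, 'white': 25, 'primary': 25, 'secondary': 25}
```

The four percents always sum to 100%, since (100 - hi) + lo + (hi - mid) + (mid - lo) = 100.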

5.4. RGB colors to grayscale whiteness

A function can convert RGB percentages to grayscale whiteness, which ranges from 0% to 100%, with middle gray at 50% whiteness. Note: The function is about whiteness, not about brightness/luminance, lightness, reflectance, or luminosity.

ITU-R 709

To convert RGB percentages to grayscale whiteness for television and for LED monitors, International Telecommunications Union Radiocommunication (ITU-R) Sector Recommendation BT.709 uses primary colors such that whiteness equals 0.213 * R + 0.715 * G + 0.072 * B.

The coefficients are primary-color reflectances and add to one. Those reflectances make blue lightness 0.33, red lightness 0.54, and green lightness 0.88, the CIE L*a*b* lightness values.

ITU-R 601

To convert RGB percentages to grayscale whiteness, Windows 10 "Grayscale Effect", Adobe PDF and Photoshop, and other programs use International Telecommunications Union Radiocommunication (ITU-R) Sector Recommendation BT.601, for television, with primary colors such that whiteness equals 0.299 * R + 0.587 * G + 0.114 * B.

The coefficients are primary-color reflectances and add to one. Those reflectances make blue lightness 0.40, red lightness 0.60, and green lightness 0.80, far from CIE L*a*b* lightness values but close to Munsell values.
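The two weighted sums can be compared directly. A sketch using the coefficients as given:

```python
def gray_709(r, g, b):
    """BT.709 grayscale whiteness from RGB fractions 0..1."""
    return 0.213 * r + 0.715 * g + 0.072 * b

def gray_601(r, g, b):
    """BT.601 grayscale whiteness from RGB fractions 0..1."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure green is lighter under BT.709's larger green coefficient.
print(gray_709(0, 1, 0), gray_601(0, 1, 0))  # -> 0.715 0.587
# Both coefficient sets sum to one, so white maps to 1.0.
print(round(gray_709(1, 1, 1), 6), round(gray_601(1, 1, 1), 6))
```

The biggest disagreements are for saturated green and blue; near-gray colors convert almost identically under either recommendation.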

MacOS before version 10.6

In versions before MacOS 10.6, RGB grayscale whiteness percentages in the PDF Viewer and in Microsoft Word 2004 for Mac are 25% for 100% blue, 46% for 100% red, and 80% for 100% green. Monitor primary colors are such that whiteness equals ((0.25 * Blue)^1.8 + (0.46 * Red)^1.8 + (0.80 * Green)^1.8)^0.556, where Blue, Red, and Green are percentages.

The formula uses gamma correction: gamma = 1.8, and 1 / gamma = 0.556. 100% color has gamma-corrected value 1.00^1.8 = 1.00. 50% color has gamma-corrected value 0.50^1.8 = 0.29.

The gamma-corrected coefficients, 0.25^1.8 ≈ 0.08, 0.46^1.8 ≈ 0.25, and 0.80^1.8 ≈ 0.67, are primary-color reflectances and add to one. Those reflectances make blue lightness 0.35, red lightness 0.57, and green lightness 0.86, close to CIE L*a*b* lightness values.
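The gamma-weighted sum reproduces the stated per-primary whiteness values. A sketch (this is a reconstruction from the formula given in the text, not Apple's documented algorithm):

```python
def mac_whiteness(r, g, b):
    """Pre-10.6 MacOS-style grayscale whiteness from RGB fractions 0..1,
    with gamma 1.8 applied inside the sum and 1/1.8 outside."""
    gamma = 1.8
    total = (0.25 * b) ** gamma + (0.46 * r) ** gamma + (0.80 * g) ** gamma
    return total ** (1.0 / gamma)

# Single primaries recover their stated whiteness percentages,
# and white comes out at (approximately) 1.0.
print(round(mac_whiteness(0, 0, 1), 2))  # -> 0.25
print(round(mac_whiteness(1, 0, 0), 2))  # -> 0.46
print(round(mac_whiteness(0, 1, 0), 2))  # -> 0.8
print(round(mac_whiteness(1, 1, 1), 2))  # -> 1.0
```

For a single primary the inner and outer gammas cancel, which is why pure blue maps exactly to its 0.25 coefficient.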

5.5. Transforming coordinates

Rotating the color cube transforms coordinates. Instead of using the red, green, and blue coordinates, color cubes can use the magenta, cyan, and yellow coordinates, or any linear combination of the blue, red, and green coordinates.

Rotating the color cube counterclockwise around the green axis transforms the blue axis to magenta axis. (Rotating the color cube around the green axis clockwise transforms the blue coordinate to negative magenta coordinate, so do not use negative rotation angles.) Rotating the color cube counterclockwise around the perpendicular to both the green and new magenta axes transforms the original blue axis to a mixture of three primary colors (which has some white).

Rotating the color cube counterclockwise around the green axis transforms the blue axis to magenta axis. Rotating farther counterclockwise transforms the magenta axis to red axis. (Rotating farther counterclockwise transforms the red axis to negative values, so do not rotate farther.)

Yaw, pitch, roll

Rotating w counterclockwise around the z axis (yaw, in the xy plane), then rotating v counterclockwise around the y axis (pitch, in the xz plane), and then rotating u counterclockwise around the x axis (roll, in the yz plane) keeps the old and new coordinates in the same relations.

Rotating 45 degrees (0.79 radians) counterclockwise around the green axis (in the blue-red plane), then rotating 45 degrees counterclockwise around the red axis (in the blue-green plane), and then rotating 45 degrees counterclockwise around the blue axis (in the red-green plane) puts the blue axis on the diagonal, and the red and green coordinates along two other old diagonals.

The new axes have the same lengths as the old axes, so the three new axes still indicate blue, red, and green percentages.
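The yaw-pitch-roll composition can be sketched with standard counterclockwise rotation matrices (an illustrative sketch; the function names are ours, and the angle symbols w, v, u follow the text):

```python
import math

def rot_x(a):  # roll: counterclockwise in the yz plane
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # pitch: counterclockwise in the xz plane
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # yaw: counterclockwise in the xy plane
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matvec(m, vec):
    return [sum(m[i][k] * vec[k] for k in range(3)) for i in range(3)]

def yaw_pitch_roll(vec, w, v, u):
    """Apply yaw w, then pitch v, then roll u, to a 3-vector."""
    return matvec(rot_x(u), matvec(rot_y(v), matvec(rot_z(w), vec)))

# A 90-degree yaw carries the x axis onto the y axis.
print([round(c, 6) for c in matvec(rot_z(math.pi / 2), [1.0, 0.0, 0.0])])  # [0.0, 1.0, 0.0]

# Rotations preserve length, so a unit axis stays a unit axis.
a = math.radians(45)
r = yaw_pitch_roll([1.0, 0.0, 0.0], a, a, a)
print(round(sum(c * c for c in r), 6))  # 1.0
```

The length check illustrates the claim that the new axes have the same lengths as the old axes.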

x-convention

Rotating w counterclockwise around the z axis (in the xy plane), then rotating v counterclockwise around the new x' axis (in the y'z plane), and then rotating u counterclockwise around the new z' axis (in the x'y' plane) keeps the old and new coordinates in the same relations.

Rotating 45 degrees counterclockwise around the green axis (in the blue-red plane), then rotating 45 degrees counterclockwise around the new blue axis (in the red'-green plane), and then rotating 45 degrees counterclockwise around the new green axis (in the blue'-red' plane) puts the blue axis on the diagonal, and the red and green coordinates along two other old diagonals.

The new axes have the same lengths as the old axes, so the three new axes still indicate blue, red, and green percentages.

6. Color circle and color cone

The color circle can represent all hues and hue percentages (Figure 6). The color circle is about light sources and color mixing, not human perception. Its RGB blue, green, and red are specific wavelengths, such as 436 nm, 546 nm, and 610 nm or 700 nm.

The yellow-blue vertical line represents the yellow-blue opponency. Blue and yellow do not mix (to yellowish-blue or bluish-yellow), are complements of white, and have no red or green. Blue needs more weight than yellow. Yellow is warm and very bright, and blue is cool and very dark.

The black horizontal line represents the red-green opponency. Red and green do not mix (to reddish-green or greenish-red), are equal components of yellow, and have no yellow or blue. Red and green have equal weight. Red is very warm and medium bright, and green is cool and bright.

Color-circle circumference depicts pure hues, each at a polar angle: red 30, yellow 90, green 150, cyan 210, blue 270, and magenta 330 degrees. Note: People experimentally place their unique hues close to color-circle values: red 26, yellow 92, green 162, cyan 217, blue 272, and magenta 329 degrees.

The color circle approximates a circle through Munsell color space, with pure hues at maximum saturation with their Munsell brightness.

Black has 0% hue, and white has 0% net hue, so circle center represents black, grays, and white.

6.1. Color-circle polar coordinates

Color-circle points have polar coordinates: angle and radius. Hues have origin at circle center.

In the color circle, hue polar angle defines hue. Rank the RGB primary colors by percentage (highest, middle, lowest); then hue angle equals the highest-percent primary's angle, plus or minus 60 degrees times (middle-percent-primary percent minus lowest-percent-primary percent) divided by (highest-percent-primary percent minus lowest-percent-primary percent), with the sign chosen toward the middle-percent primary.

Each hue has a unique angle from polar axis, for example:

Red-magenta (rose): 0 degrees

Red: 30 degrees

Orange: 60 degrees

Yellow: 90 degrees

Chartreuse: 120 degrees

Green: 150 degrees

Spring green: 180 degrees

Cyan: 210 degrees

Azure: 240 degrees

Blue: 270 degrees

Blue-magenta (violet): 300 degrees

Magenta (fuchsia): 330 degrees

Pure hues have unit length from origin to circumference. Hue length represents hue percentage (and saturation percentage).

Complementary colors (red and cyan, blue and yellow, and green and magenta) are in opposite directions.

Adding black and/or white does not change hue angle.
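The verbal angle formula can be sketched as a function (an illustrative sketch: it uses the standard HSV hue computation, shifted by 30 degrees so pure red lands at 30 degrees as on this color circle; the function name is ours):

```python
def hue_angle(r, g, b):
    """Color-circle hue angle (degrees) for RGB fractions in 0.0-1.0.

    Returns None for achromatic colors (black, grays, white), which sit
    at the circle center with no net hue.
    """
    hi, lo = max(r, g, b), min(r, g, b)
    if hi == lo:
        return None  # circle center
    if hi == r:
        h = (60.0 * (g - b) / (hi - lo)) % 360.0
    elif hi == g:
        h = 60.0 * (2.0 + (b - r) / (hi - lo))
    else:
        h = 60.0 * (4.0 + (r - g) / (hi - lo))
    # Shift so pure red sits at 30 degrees, as in this color circle.
    return (h + 30.0) % 360.0

print(hue_angle(1.0, 0.0, 0.0))  # 30.0 (red)
print(hue_angle(1.0, 0.5, 0.0))  # 60.0 (orange)
print(hue_angle(0.0, 0.0, 1.0))  # 270.0 (blue)
```

Adding black (scaling all primaries down) or white (raising the lowest primary) leaves this angle unchanged, matching the text.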

6.2. Color-circle rectangular coordinates

Color-circle horizontal and vertical coordinates for pure RGB hues are:

(+1.00, +0.00) Red-magenta

(+0.87, +0.50) Red

(+0.50, +0.87) Orange

(+0.00, +1.00) Yellow

(-0.50, +0.87) Chartreuse

(-0.87, +0.50) Green

(-1.00, +0.00) Spring green

(-0.87, -0.50) Cyan

(-0.50, -0.87) Azure

(+0.00, -1.00) Blue

(+0.50, -0.87) Blue-magenta

(+0.87, -0.50) Magenta

(+0.00, +0.00) White, grays, and black

6.3. Examples

100% red and 50% green is the same as 50% red and 50% yellow and so bright orange, whose angle is halfway between red and yellow, with unit length.

100% red, 100% green, and 50% blue is the same as 50% yellow and 50% white and so light yellow, whose angle is same as yellow, with half-unit length.

50% red, 100% green, and 50% blue is the same as 50% green and 50% white and so light green, whose angle is same as green, with half-unit length.

75% red, 25% green, and 25% blue is the same as 50% red, 25% black, and 25% white and so gray red, whose angle is same as red, with half-unit length.

50% red and 25% green is the same as 25% yellow, 25% red, and 50% black and so dark orange, whose angle is same as orange, with half-unit length.

6.4. Color cone

Color-circle points indicate hue and saturation (hue percent). Hue percent plus black percent plus white percent equals 100%. White percent is percentage of RGB-lowest-percent primary color, and hue percent equals RGB-highest-percent-primary-color percent minus RGB-lowest-percent-primary-color percent. Perpendiculars to color circle model white percentage:

The perpendicular to color-circle center, where hue percent equals 0%, has 100% black and 0% white at its bottom. It has 50% black and 50% white at its middle. It has 0% black and 100% white at its top.

Every color-circle-circumference point, where hue percent equals 100%, has 0% black and 0% white, and so no perpendicular.

The perpendicular to every other color-circle point, where hue percent is greater than 0% and less than 100%, has (100% minus hue percent) black and 0% white at its bottom. It has ((100% minus hue percent) / 2) white and ((100% minus hue percent) / 2) black at its middle. It has (100% minus hue percent) white and 0% black at its top.

The perpendiculars and the color circle make a color cone. The color cone does not model brightness or lightness.
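The hue/white/black decomposition above can be sketched as a function (an illustrative sketch; the function name is ours):

```python
def hue_white_black(r, g, b):
    """Split RGB fractions into (hue, white, black) percentages.

    White percent is the lowest primary percent, hue percent is the highest
    minus the lowest, and black percent is the remainder; they sum to 100%.
    """
    hi, lo = max(r, g, b), min(r, g, b)
    return (round(100 * (hi - lo)), round(100 * lo), round(100 * (1.0 - hi)))

print(hue_white_black(0.75, 0.25, 0.25))  # (50, 25, 25): gray red
print(hue_white_black(1.0, 1.0, 0.5))     # (50, 50, 0): light yellow
```

Both outputs match the worked examples in section 6.3.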

6.5. Color hexagon

The RGB color circle can have an inscribed hexagon. The color hexagon models RGB primary and secondary colors as vectors and their mixtures as vector additions. Primary colors are at 120-degree angles (Figure 6). Hues have an angle from polar axis. Hue vectors have origin in color-hexagon center. Vectors have unit length from coordinate origin to color-hexagon corner. Hue-vector magnitude compared to maximum magnitude at that angle represents hue percentage (and saturation percentage).

Attempting to use the color hexagon shows that mixed colors are not vector sums of primary colors: the resulting hue percentages and saturations come out incorrect.

7. HSV color space, with hue, saturation, and value

A coordinate system (HSV color space) is about light sources and color mixing, not human perception. It uses hue, saturation, and value (brightness) coordinates. A cone has hue angle around circle, saturation along circle radii, and value along perpendicular downward from circle center. Circle center is 100% white. Cone vertex is 100% black.

8. HSL color space, with hue, saturation, and lightness

A coordinate system (HSL color space) is about light sources and color mixing, not human perception. It uses hue, saturation, and lightness coordinates. A bicone has hue angle around circle, saturation along circle radii, and lightness along perpendicular through circle center. Circle center is 50% white. Bicone vertices are 100% black and 100% white.
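Python's standard colorsys module implements both conversions, which can illustrate the two spaces (note that colorsys places red at hue 0 and scales hue to the 0-1 range):

```python
import colorsys

# Pure orange: halfway between red and yellow.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.5, 0.0)
print(round(h * 360), s, v)   # 30 1.0 1.0

h, l, s = colorsys.rgb_to_hls(1.0, 0.5, 0.0)
print(round(h * 360), l, s)   # 30 0.5 1.0
```

Full saturation in both spaces, full value in HSV, and half lightness in HSL, as expected for a pure hue.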

Section about Human Vision Physiology for Distance, Direction, and Space

Directions, Distances, Locations, and Spatial Relations

Vision, like all the senses, uses trigonometry, vector algebra, and statistical processes to calculate directions, distances, angles, locations, and spatial relations.

1. Directions

Primary-visual-cortex, pre-frontal-lobe, and frontal-lobe topographic maps have arrays of macrocolumns/hypercolumns, which have coordinated microcolumns/minicolumns [Dow, 2002]. Columns are neuron cylinders, perpendicular to cortical neuron layers, with an axis and cross-section. Each column represents one of a million radial directions from eye out into space. The columns have relative vertical elevations and horizontal azimuths for all spatial directions and measure distances.

2. Distances using topographic maps

Just as somatosensory-cortex macrocolumns detect skin or joint-and-muscle stimulation for one body-surface patch [Buxhoeveden and Casanova, 2002] [Mountcastle, 1998], primary-visual-cortex topographic-map columns measure relative distances apart between points and angles, in all spatial directions [Dow, 2002].

Topographic maps have grid markers whose spatial frequencies help establish a distance metric and calculate distances.

Sensory and motor topographic maps have regularly spaced lattices of superficial pyramidal cells. Non-myelinated and non-branched superficial-pyramidal-cell axons travel horizontally 0.4 to 0.9 millimeters to synapse in superficial-pyramidal-cell clusters. The skipping pattern aids macrocolumn neuron-excitation synchronization [Calvin, 1995]. The superficial-pyramidal-cell lattice makes spatial frequencies that help calculate distances and lengths.

Topographic maps use sky, ground, verticals, horizontals, and landmarks to make a system of distances.

3. Distances using calculations

The "where" visual system [Rao et al., 1997] measures distances and directions to objects in space and directs attention so the "what" system can gather more information about object identity.

Distance representations can use actual distance, scaled distance, or logarithm of distance.

Metric depth cues help calculate distances. Closer surfaces have larger average surface-texture size and larger spatial-frequency-change gradient. Closer regions are brighter.

Angle comparisons, and convexity or concavity, help calculate distances. Closer concave angles appear larger, while closer convex angles appear smaller.

Eye accommodation helps calculate distances.

Animals continually track distances and directions to distinctive landmarks and navigate environments using maps with centroid reference points and gradient slopes [O'Keefe, 1991].

Visual feedback builds the correct distance-unit metric.

4. Relative distance away

Vision can use knowledge about visual angles to measure distances.

Using coordination among vision, motor system, and kinesthetics, vision knows rotation angles as eye, head, and/or body rotate to fixate on space locations.

To measure relative distance, vision can fixate on an object point. Then eye, head, and/or body rotate to fixate on a second object point. The rotation angle is the visual angle between the two points. Visual angle varies inversely with relative distance: d = k / A, where k is a constant, A is visual angle, and d is relative distance. If relative visual angle is larger, point is nearer.

To measure absolute distance, eye, head, and/or body then rotate to fixate on a third object point. Vision compares the two visual angles in two equations to calculate the constant k and find the absolute distance.

Using the same angle knowledge and calculations, vision can measure distance of a moving object on a trajectory. Vision fixates on the object at the first, second, and third location.

5. Space locations

Pre-frontal-lobe and frontal-lobe neurons respond to stimuli at specific three-dimensional locations and build egocentric maps of object locations (distances in directions) and spatial relations.

Hippocampus "place cells" respond to stimuli at specific three-dimensional locations [Epstein et al., 1999] [Nadel and Eichenbaum, 1999] [O'Keefe and Dostrovsky, 1971] [Rolls, 1999].

Medial entorhinal cortex has "grid cells" that fire when body is at specific spatial locations of a hexagonal grid [Fyhn et al., 2004] [Hafting et al., 2005] [Sargolini et al., 2006].

Dorsal presubiculum, postsubiculum, entorhinal cortex, anterior dorsal thalamic nucleus, and retrosplenial cortex have "head-direction cells" that fire when organism is facing in an absolute spatial direction [Sargolini et al., 2006] [Taube, 1998] [Taube, 2007] [Taube et al., 1990].

Subiculum, presubiculum, parasubiculum, and entorhinal cortex have "boundary cells" (boundary vector cell, border cell) that fire when organism is at a distance in a direction from a boundary [Barry et al., 2006] [Lever et al., 2009] [Savelli et al., 2008] [Solstad et al., 2008].

Hippocampus has "time cells" that fire upon time delays at locations, distances, and directions [Kraus et al., 2013] [MacDonald et al., 2011].

Coding patterns among place, grid, head-direction, boundary, and time cells build an allocentric/geocentric map of space, with locations, distances, and directions.

6. Space coordinates

Knowing distances, and/or angles, from a point to reference points allows calculating point coordinates.

6.1. Intersection

From three points with known coordinates, calculate the three bearing angles to a space point. The point's location coordinates are where the three lines intersect.

6.2. Resection

From a space point, calculate the three bearing angles to three points with known coordinates. The point's location coordinates are where the three lines intersect.
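Both intersection and resection reduce to intersecting bearing lines. A minimal 2-D sketch (the function name and sign conventions are ours; bearings are counterclockwise angles from the +x axis):

```python
import math

def intersect_bearings(p1, b1, p2, b2):
    """Intersect two bearing rays p + t * (cos b, sin b); returns (x, y).

    Solves p1 + t1*d1 = p2 + t2*d2 by Cramer's rule; returns None when
    the bearings are parallel and give no fix.
    """
    d1x, d1y = math.cos(b1), math.sin(b1)
    d2x, d2y = math.cos(b2), math.sin(b2)
    det = d1x * (-d2y) - (-d2x) * d1y
    if abs(det) < 1e-12:
        return None  # parallel bearings
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2y) - (-d2x) * ry) / det
    return (p1[0] + t1 * d1x, p1[1] + t1 * d1y)

# Bearings of 45 and 135 degrees from (0, 0) and (2, 0) cross at (1, 1).
print(intersect_bearings((0, 0), math.radians(45), (2, 0), math.radians(135)))
```

A third bearing line, as in the text, over-determines the point and can check the fix.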

6.3. Triangulation

The trigonometry tangent rule (equivalent to the sine rule) can find triangle sides and angles: (a - b) / (a + b) = tan(0.5 * (A - B)) / tan(0.5 * (A + B)), where a and b are triangle sides, and A and B are their opposite angles. Two reference points have a distance between them and make a line segment. From a space point, measure the two angles to the line segment. The third angle is 180 degrees minus the sum of the two angles. Use the tangent rule to calculate the other two side lengths, and so find point coordinates.

The trigonometry sine rule can find triangle sides and angles: d = l * (sin(A) * sin(B) / sin(A + B)), where l is side length, angles A and B are the two angles to the side made by lines from the point, and d is distance from point to side. Two reference points have a distance between them and make a line segment. From a space point, measure the two angles to the line segment. Use the sine rule to calculate the distance from point to side, and so find point coordinates.
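The two triangulation formulas can be checked numerically (an illustrative sketch; the function name is ours):

```python
import math

def sine_rule_distance(l, A, B):
    """Distance from a point to a side of length l, given the two angles
    A and B (radians) that lines from the point make with the side."""
    return l * math.sin(A) * math.sin(B) / math.sin(A + B)

# A baseline of length 2 seen under two 45-degree angles: the point is 1 away.
print(round(sine_rule_distance(2.0, math.radians(45), math.radians(45)), 6))  # 1.0

# The tangent rule holds for any triangle; sides are proportional to the
# sines of their opposite angles (law of sines).
A, B = math.radians(60), math.radians(40)
a, b = math.sin(A), math.sin(B)
lhs = (a - b) / (a + b)
rhs = math.tan(0.5 * (A - B)) / math.tan(0.5 * (A + B))
print(abs(lhs - rhs) < 1e-12)  # True
```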

6.4. Trilateration and multilateration

Use the three distances from a point to three reference points with known coordinates to find the point's coordinates (trilateration). The four points form a tetrahedron (with four triangles). Distance from the first reference point defines a sphere. Distance from the second reference point defines a circle on the sphere. Distance from the third reference point defines two points on the circle.

The time differences in signal arrivals from a point to three known points use a similar calculation (multilateration) to find point coordinates.
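The sphere-circle-two-points construction can be sketched directly (a minimal sketch assuming exact, consistent distances; the function name is ours, and the two returned points are the mirror solutions on either side of the reference plane):

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Locate a point from its distances r1, r2, r3 to three known points."""
    sub = lambda a, b: [a[k] - b[k] for k in range(3)]
    dot = lambda a, b: sum(a[k] * b[k] for k in range(3))
    norm = lambda a: math.sqrt(dot(a, a))
    # Local frame: p1 at origin, p2 on the x axis, p3 in the xy plane.
    ex = [c / norm(sub(p2, p1)) for c in sub(p2, p1)]
    i = dot(ex, sub(p3, p1))
    ey_raw = [sub(p3, p1)[k] - i * ex[k] for k in range(3)]
    ey = [c / norm(ey_raw) for c in ey_raw]
    ez = [ex[1] * ey[2] - ex[2] * ey[1],
          ex[2] * ey[0] - ex[0] * ey[2],
          ex[0] * ey[1] - ex[1] * ey[0]]
    d = norm(sub(p2, p1))
    j = dot(ey, sub(p3, p1))
    # Sphere intersections reduce to two linear equations plus a square root.
    x = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    y = (r1 * r1 - r3 * r3 + i * i + j * j - 2 * i * x) / (2 * j)
    z = math.sqrt(max(r1 * r1 - x * x - y * y, 0.0))
    base = [p1[k] + x * ex[k] + y * ey[k] for k in range(3)]
    return ([base[k] + z * ez[k] for k in range(3)],
            [base[k] - z * ez[k] for k in range(3)])

# Known target (3, 4, 5) and its true distances to three reference points:
sols = trilaterate([0, 0, 0], math.sqrt(50), [10, 0, 0], math.sqrt(90),
                   [0, 10, 0], math.sqrt(70))
print([[round(c, 6) for c in s] for s in sols])  # [[3.0, 4.0, 5.0], [3.0, 4.0, -5.0]]
```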

Surfaces, Features, and Objects

Vision physiology finds physical-space points, lines, lengths, rays, angles, surfaces, and regions [Burgess and O'Keefe, 2003] [Moscovitch et al., 1995]. Surfaces have orientations, curvatures, boundaries, fills, and filters. Vision physiology can then find shapes, features, and objects, and their spatial relations.

1. Points

ON-center-neuron dendritic trees have a center region, on which input excites neuron output, and have a surrounding annulus, on which input inhibits neuron output [Hubel and Wiesel, 1959] [Kuffler, 1953]. Light from a point has small diameter and so lands on either center region or surrounding annulus, not both. For its direction, a single neuron can detect whether or not a point is a light source.

2. Lines and rays

An ON-center-neuron series has a dendritic pattern with an ON line and OFF lines on both sides. ON-center-neuron series are for all line lengths, orientations, and directions. ON-center-neuron series can be for straight or curved lines, edges, boundaries, and contours [Livingstone, 1998] [Wilson et al., 1990].

2.1. Line orientation

Lines have orientation compared to horizontal, vertical, or straight-ahead.

Ocular-dominance hypercolumns have minicolumns (orientation columns) that detect one stationary, moving, or flashing line (edge) orientation, for a specific line length, for one spatial direction [LeVay and Nelson, 1991]. Orientations differ by approximately ten degrees of angle, making 18 principal orientations.

Visual-cortex hypercolumns that receive from both eyes detect both orientation and distance for lines for one spatial direction [Wandell, 1995].

Vision assigns convexity and concavity to edges, boundaries, and contours [Horn, 1986]. Angle comparisons, and convexity or concavity, help calculate line orientations.

Vision indexes lines for distance and orientation [Glassner, 1989].

2.2. Boundaries

At a boundary, brightness changes: the brightness gradient can go up (positive) then down (negative), or vice versa, and the sign change is a zero crossing. Vision uses zero-crossing brightness gradients to find boundaries [Horn, 1986].

Boundary-line perpendiculars have highest brightness gradient, while other directions have lower brightness gradients.
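Zero-crossing boundary detection can be sketched in one dimension (an illustrative sketch of the idea, not Horn's algorithm; the function name is ours):

```python
def zero_crossings(brightness):
    """Indices where the second difference (a 1-D Laplacian) changes sign,
    marking boundary candidates between brightness regions."""
    lap = [brightness[i - 1] - 2 * brightness[i] + brightness[i + 1]
           for i in range(1, len(brightness) - 1)]
    # lap[i] corresponds to brightness index i + 1.
    return [i + 1 for i in range(len(lap) - 1) if lap[i] * lap[i + 1] < 0]

# A brightness step between positions 2 and 3 yields one zero crossing.
print(zero_crossings([0, 0, 0, 10, 10, 10]))  # [2]
```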

2.3. Vectors

A line with orientation and direction is a vector.

An ON-center-neuron-series dendritic pattern can have an opponent process with value positive if pointing in a direction and value negative if pointing in the opposite direction.

Macrocolumns/hypercolumns and their microcolumns/minicolumns have a central axis, from layer I through layer VI, and a preferred transverse direction, along the plane of same-orientation columns. The central axis and preferred transverse direction are vector components.

Motor brain regions have coordinated signals that make a muscle move in a direction with a magnitude. Similarly, perceptual brain regions have coordinated signals that represent object motion in a direction with a magnitude, so topographic-map signal patterns can represent vectors.

3. Angles

Because two intersecting lines make angles, ON-center-neuron arrays can detect angles.

4. Surfaces

An ON-center-neuron-array dendritic pattern can have a large ON center and OFF surround to detect flat and curved surfaces.

Because brightness gradient is small over surfaces, vision uses brightness variations to find two-dimensional visual primitives (blobs) [Horn, 1986].

4.1. Convexity and concavity

Vision can use visual primitives to find surface flatness, convexity, or concavity [Horn, 1986].

4.2. Surface orientation

Topographic-map orientation minicolumns detect surface orientation, for a specific surface size, for a specific spatial direction [Blasdel, 1992] [Das and Gilbert, 1997] [Hübener et al., 1997].

Visual-cortex hypercolumns that receive from both eyes detect both orientation and distance for surfaces for one spatial direction [Wandell, 1995].

Angle comparisons and convexity or concavity help calculate surface orientation.

Vision indexes surfaces for distance and orientation [Glassner, 1989].

4.3. Surface texture

Surfaces have fill colors, gradients, shading, transparency, patterns (foreground and background), and textures (smoothness-roughness and spacing, which is sparse or dense, random or non-random, clustered or dispersed spatial distribution).

Surface textures can diffuse, fade, blur, blend, flood, merge, offset, blacken or whiten, make linear or radial gradients, and change lighting direction and intensity.

5. Regions

An ON-center-neuron-assembly dendritic pattern can have an ON center and OFF surround to detect a region.

Generalized cones or cylinders can describe local three-dimensional regions.

6. Features and objects

Vision uses point, line, surface, and region detection to find shapes. Shapes have points/vertices, lines/edges, and surfaces. Shapes have natural axes (such as vertical, horizontal, radial, long axis, and short axis) and orientations. Shapes have adjacent and separated parts, in directions at distances.

Point, line, and surface configurations make features. For example, a corner has two intersecting rays.

Feature configurations make objects. For example, a circle is a figure with constant distance from center. Alternatively, it is a closed figure with constant curvature and no vertices.

6.1. Feature and object spatial relations

Vision uses all point, line, surface, and region information to assign spatial coordinates and relations to points, lines, surfaces, regions, features, and objects.

Spatial relations include adjacency, gradient, right-left, above-below, front-back, in-out, near-far, forward-backward, and up-down.

Vision topographic maps have neuron arrays that detect when adjacent nerve pathways have coincidences, and so know point, line, ray, angle, surface, and region pairs and triples and their spatial relations.

Spatial relations include symmetries and invariants [Burgess and O'Keefe, 2003] [Moscovitch et al., 1995].

Vision indexes features and objects for spatial relations [Glassner, 1989].

Scenes, Space, and Time

Vision physiology represents scenes, space, and time.

1. Scenes

Vision experiences the whole scene (the perceptual field), not just isolated surfaces, features, or objects. The feeling of seeing the whole scene results from maintaining a general scene sense in semantic memory, attending repeatedly to scene objects, and forming object patterns. The perceptual field provides background and context, which can help identify objects and events.

Scenes have different spatial frequencies in different directions and distances. Scenes can have low spatial frequency and seem open. Low-spatial-frequency scenes have more depth, less expansiveness, and less roughness, and are more typical of natural settings. Scenes can have high spatial frequency and seem closed. High-spatial-frequency scenes have less depth, more expansiveness, and more roughness, and are more typical of urban settings.

Scenes have numbers of objects (set size).

Scenes have patterns or structures of object and object-property placeholders (spatial layout), such as smooth texture, rough texture, enclosed space, and open space. In spatial layouts, object and property meanings do not matter, only placeholder pattern. Objects and properties can fill object and object property placeholders to supply meaning. Objects have spatial positions, and relations to other objects, that depend on spacing and order. Spatial relations include object and part separations, feature and part conjunctions, movement and orientation directions, and object resolution.

Scenes have homogeneous color and texture regions (visual unit).

1.1. Sketches and arrays

Vision uses known shapes and orientations, consistent convexities and concavities, surface shading and texture, and motion information to make two-dimensional sketches (intrinsic image) that represent scene local properties [Marr, 1982].

From sketches, vision combines local properties to make scene two-dimensional line arrays.

From line arrays, vision uses depth information to make scene two-and-one-half-dimensional contour arrays. (They are like oblique projections, or like 3/4 views with axonometric projections.)

From contour arrays, vision assigns consistent convexities and concavities to lines and vertices, to make three-dimensional regions with surface-texture arrays.

1.2. Scene files

A vision scene file assigns relative three-dimensional positions, directions, distances (depths), orientations (surface normals), and spatial relations, as well as illumination and reflectance (albedo), to feature and object placeholders. Objects include ground and sky.

2. Space

Vision integrates scene, sketch, and array information, along with information from other senses and motor system, to make three-dimensional space. All use the same space.

2.1. Topographic maps

Midbrain tectum and cuneiform nucleus have multimodal neurons, whose axons envelop reticular thalamic nucleus and other thalamic nuclei to map three-dimensional space [Andersen et al., 1997].

Topographic maps specify body and three-dimensional-space locations [Gross and Graziano, 1995] and represent three-dimensional space [Olson et al., 1999] [Rizzolatti et al., 1997]. Locations include sensory-organ, sensory-receptor, and motor-system spatial layouts.

Topographic maps can hold three-dimensional arrays. For example, 3 x n vertex matrices can represent n-point three-dimensional arrays (or n-vertex polygons/polyhedrons, or n-point curves/surfaces). Array indices correlate with directions, distances, and coordinates.

2.2. Continuity

Because adjacent topographic-map hypercolumns overlap and connect, they can represent continuous (rather than discrete) three-dimensional space.

Vision physiology uses many neurons, which average over time and space, to represent each space point, so output can represent continuous three-dimensional space.

Three-dimensional arrays a(x, y, z) can have elements that are themselves arrays b(i(x, y, z), j(x, y, z), k(x, y, z)). Topographic maps can hold such three-dimensional nested arrays. Nested arrays can represent connections to other space points, and elements can overlap their edges, to represent continuous three-dimensional space.

2.3. Coordinate axes

The vertical gaze center, near midbrain oculomotor nucleus, detects up and down motions [Pelphrey et al., 2003] [Tomasello et al., 1999], helping establish a vertical axis and determine heights.

The horizontal gaze center, near pons abducens nucleus, detects right-to-left motion and left-to-right motion [Löwel and Singer, 1992], helping establish a horizontal axis and plane.

Animals have right and left eyes, and this arrangement helps establish a vertical axis and a horizontal plane.

The superficial-pyramidal-cell lattice can represent vertical and horizontal coordinate axes.

Same-orientation topographic-map orientation columns have connections, helping establish spatial axes.

Vestibular-system saccule, utricle, and semicircular canals make three planes, one horizontal and two vertical, and detect body accelerations, head rotations, and gravity direction. Vestibular system works with visual system to establish a vertical axis and ground plane.

2.4. Distance metric

Vision establishes a distance metric for the spherical-coordinate system. The radial coordinate uses the distance unit to assign numerical distances.

2.5. Angle metric

Vision establishes an angle metric for the spherical-coordinate system. The horizontal and vertical coordinates use the angle unit to assign numerical angles.

2.6. Coordinate origin

Coordinate axes intersect at coordinate origin.

The superficial-pyramidal-cell lattice can represent topographic-map reference points, including coordinate origin.

Two observations make two scenes, whose differences define a projection matrix. Inversion of perspective finds lines of sight and eye location in space. Eye location relates to coordinate origin.

2.7. Coordinate system

Coordinate axes and origin define a coordinate system.

Spherical coordinates have a central coordinate origin, a radial vector from origin, a straight-ahead vector for polar axis, a transverse direction angle right and left from polar-axis, and a transverse direction angle up and down from polar-axis. A metric defines distance along radial direction. For vision, coordinate origin is at eye.

Cartesian coordinates have a coordinate origin, a vector from origin straight-ahead, a vector from origin up, and a vector from origin right. A metric defines distance along vectors. For vision, coordinate origin is at eye.

The visual, auditory, haptic, proprioceptive, and motor systems use the same egocentric coordinate system and egocenter [Bridgeman et al., 1997] [Owens, 1987], so perceptions and motions use the same coordinate system, point locations, and point-pair distances.
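The egocentric spherical coordinates described above can be sketched as a conversion to Cartesian coordinates (the axis names follow the text; the sign conventions and function name are our assumptions):

```python
import math

def spherical_to_cartesian(r, azimuth, elevation):
    """Egocentric spherical to Cartesian: x straight ahead, y right, z up.

    Azimuth is the rightward angle from straight ahead; elevation is the
    upward angle (both in radians); r is distance along the radial direction.
    """
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return (x, y, z)

# A point 10 units away, 90 degrees to the right, at eye level.
print([round(c, 6) for c in spherical_to_cartesian(10.0, math.radians(90), 0.0)])
```

Both coordinate systems share the origin at the eye, so the conversion is a relabeling of the same egocentric space.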

3. Time

Time has intervals, events, and temporal relations (before, during, after). Time intervals establish a clock for timing and flows.

Brain programming uses spatiotemporal events to model time intervals, clocks/counting, sequences, before-after, and motions. Memory realizes before and after, and historical time.

Coordinate Transformations and Stationary Space

The location/direction/distance/where pathway finds space locations as distances in directions, using spherical and rectangular coordinates to model physical-space directions and distances.

Vision works with motor, auditory, haptic, proprioceptive, and kinesthetic systems.

Thinking, dreaming, imagining, and memory recall use the same space.

Attention and memory track features, objects, scenes, and space locations and spatial relations.

Vision can track object motions in space.

Vision can transform spherical-coordinate-system coordinates, to translate, rotate, scale, and zoom.

As eye, head, and/or body move, coordinate transformations compensate for movements to make a stationary coordinate system, aligned with physical space.

1. Motor system

The motor system models muscle and bone positions and motions.

Muscles rotate right or left, flex or extend (bend or straighten), abduct or adduct (go away from or toward spine), elevate or depress (raise or lower), and pronate or supinate (go up-forward or down-backward).

Extremity bones are like connected rotatable vectors, which have origin (at joint), terminus (at joint or tip), length, and direction. Vectors translate and rotate.

For voluntary movements, brain signals mark vector final position and/or speed and direction. The motor system compares the signal to current position, speed, and direction and makes a difference signal. Difference signals go to muscles to trigger and coordinate muscle movements. Bones and muscles have controlled movement direction and speed as they go from original position to final position. (Voluntary processes use will.)

Visual perceptions of muscle and body positions integrate with motor-system muscle commands and reflexes, to develop fine voluntary-muscle control.

2. Touch, proprioception, and kinesthesia

To be surface-exploring devices, senses evolve and develop an experience surface. For example, the touch experience surface is on the outside of the skin. Experience surfaces have a spatial layout, with known sensory-receptor locations, distances, and directions.

Body, head, and sense organs have a spatial layout, with known locations, distances, and directions.

Body, head, and sense organs move so that eye, skin, ear, nose, and tongue sensory receptors can gather and store intensities.

Proprioception and kinesthesia know eye, head, and body positions and position changes, and store positions in memory using series over time.

Vision works with touch, proprioception, and kinesthesia.

Touch knows the positions, and position changes, of objects nearby in space.

3. Hearing

Hearing knows the positions, and position changes, of objects nearby in space. Vision works with hearing.

4. Attention and memory

Attention processes select features, objects, and their spatial and temporal relations.

Memory processes store and re-imagine features, objects, and their spatial and temporal relations.

5. Motion tracking

Vision, hearing, touch, kinesthesia, proprioception, and motor systems use the same perceptual space and work together to track object positions, motions, trajectories, orientations, vibrations, velocities, and accelerations.

The vertical gaze center detects up and down motions, and the horizontal gaze center detects right-to-left motions and left-to-right motions.

Perceptual brain regions have coordinated signals that represent object motion in a direction with a magnitude.

Vision can track trajectories, as objects move or as eye, head, or body move.

6. Coordinate transformations

Coordinate systems can transform (translate, rotate, scale, and zoom) coordinates.

Motor-system commands cause eye, head, and body movements. As eye, head, and/or body move, objects and scenes have translations, rotations, and scalings, and perceptual field has gradients and flows.

Linear transformations and extrapolation reveal how eyes, head, and body, and egocentric coordinate origin, move through space.

Integrated motor and sensory systems learn to track coordinate-system transformations, using tensor operations. (Topographic maps can represent magnitudes and directions and so represent vectors, matrices, and tensors.)

Early in development, as voluntary-muscle control develops, integrated motor and sensory systems correlate coordinate transformations, senses, and muscles, to calculate how actual and intended eye, head, and/or body movements cause egocentric-space coordinate transformations.
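Compensating for a known eye or head movement can be sketched as applying the inverse coordinate transformation (a minimal sketch for a single head yaw about the vertical axis; the function names are ours):

```python
import math

def rot_z(theta):
    """Counterclockwise rotation by theta (radians) about the vertical axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def compensate_yaw(egocentric_point, head_yaw):
    """Undo a known head yaw to recover stationary (allocentric) coordinates.

    A head yaw of +theta rotates the world by -theta in egocentric
    coordinates, so applying rot_z(+theta) cancels it.
    """
    return matvec(rot_z(head_yaw), egocentric_point)

# A landmark straight ahead before a 90-degree leftward head turn appears
# to the side afterward; compensation restores its stationary position.
ego = matvec(rot_z(-math.pi / 2), [1.0, 0.0, 0.0])   # perceived after the turn
world = compensate_yaw(ego, math.pi / 2)
print([round(c, 6) for c in world])  # [1.0, 0.0, 0.0]
```

Full compensation would chain such inverse transforms for eye, head, and body movements, which is the cancellation the next section describes.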

7. Stationary space

As eye, head, and/or body move, computation maintains stationary space by performing coordinate transformations of the spherical-coordinate system to compensate for movements.

Repeated scenes allow vision to learn to transform coordinates to compensate for body, head, and/or eye movements, using tensor operations to reverse egocentric-space coordinate transformations caused by anticipated body, head, and eye movements.

Canceling egocentric-space coordinate transformations establishes stationary allocentric/geocentric space and aligns directions, distances, and positions with physical-space directions, distances, and positions.
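
A minimal sketch of this cancellation, under the simplifying assumption that the whole eye/head/body pose is one 2-D rotation plus translation: inverting the egocentric transform maps any view back to one stationary location.

```python
import math

def to_allocentric(view_point, eye_angle_rad, eye_origin):
    """Undo the egocentric transform: rotate back by the eye angle, then translate."""
    x, y = view_point
    c, s = math.cos(eye_angle_rad), math.sin(eye_angle_rad)
    return (c * x - s * y + eye_origin[0],
            s * x + c * y + eye_origin[1])

# The same world point viewed under two different eye poses maps back to one location:
world = (2.0, 1.0)
for angle, origin in [(0.0, (0.0, 0.0)), (math.radians(45), (1.0, -1.0))]:
    # forward (world -> view): translate to the eye, then rotate by -angle
    dx, dy = world[0] - origin[0], world[1] - origin[1]
    c, s = math.cos(-angle), math.sin(-angle)
    view = (c * dx - s * dy, s * dx + c * dy)
    recovered = to_allocentric(view, angle, origin)
    assert abs(recovered[0] - world[0]) < 1e-9
    assert abs(recovered[1] - world[1]) < 1e-9
```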

Stationary space has invariant space positions for named and indexed features, objects, events, and scenes.

Moving objects have trajectories through stationary space. For example, as body, head, and/or eyes move, the coordinate origin moves through stationary space.

Coordinate transformations provide feedback to visual, auditory, haptic, kinesthetic, and motor systems and help know eye, head, and body-part sizes, shapes, orientations, directions, distances apart, and locations, and object distances, directions, and locations.

All senses and the motor system use the same stationary space, which aligns with physical space.

8. Location concepts and principles

Brains construct distance/direction/coordinate/spatial-relation concepts and principles about:

Direction, distance, and location

Point, line, surface, and solid

Space, coordinates, coordinate origin, and viewpoint

Adjacent, above-below, right-left, near-far, and other spatial relations

Orientation, spatial extent, boundary, angle, overlap, and occlusion

Location categories (such as "distant top right" or "on ground")

Motions, transformations, trajectories, and accelerations

Section about Other Senses

Hearing/Sound, Loudness, and Tone

Hearing refers to object that makes sound, not to accidental or abstract properties nor to concepts about hearing.

Hearing is an analytic sense. Tones are independent, so people can simultaneously hear different frequencies (with different intensities).

Hearing awareness models relatively-large-surface vibrations. Hearing has whisper, tones, and noise, with sound loudness, frequency, and harmonics.

1. Sound properties

Tone/loudness models sound-wave frequency-intensity spectrum. Sounds are a function of a set of intensities over a frequency range.

Sounds have continuity, with no parts and no structure.

Sounds have rate of onset (attack) and rate of offset (decay).

Two frequencies can have harmonic ratios, and tones are in octaves.

At equal physical intensity, low-frequency tones sound quieter than mid-frequency tones (equal-loudness contours).

Warm tones have longer and lower attack and decay, longer tones, and more harmonics. Cool tones have shorter and higher attack and decay, shorter tones, and fewer harmonics.

Clear tones have narrow frequency band. Unclear tones have wide frequency band.

Full tones have many frequency resonances. Shallow tones have few frequency resonances.

Shrill tones have higher frequencies. Dull tones have lower frequencies.

Sounds with many high-frequency components sound sharp or strident. Tones with mostly low-frequency components sound dull or mellow.

2. Space

Hearing analyzes sound-wave phases to locate sound directions and distances in three-dimensional space.
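
A toy model of one such phase cue, the interaural time difference, assuming simple far-field geometry (the constants below are illustrative round numbers, not physiological measurements):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air (assumed)
EAR_SEPARATION = 0.21    # m, rough human head width (assumed)

def azimuth_from_itd(itd_seconds):
    """Invert the far-field model itd = (d / c) * sin(azimuth)."""
    s = itd_seconds * SPEED_OF_SOUND / EAR_SEPARATION
    s = max(-1.0, min(1.0, s))        # clamp against measurement noise
    return math.degrees(math.asin(s))

# A sound arriving 0.3 ms earlier at one ear lies about 29 degrees off center:
azimuth = azimuth_from_itd(0.0003)
```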

3. Language

Tone/loudness properties and categories have a vocabulary, grammar (about relations and properties), and language (with contexts, associations, and meanings).

Touch, Pressure, and Deformation/Vibration

Touch refers to object that makes touch, not to accidental or abstract properties nor to concepts about touch.

Touch finds self and object movements, surface texture, surface curvature, material density, material hardness, and material elasticity.

Touch is a synthetic sense, with some analysis.

Touch awareness models surface vibrations, tensions, extensions, compressions, and torsions. Touch has tickle, touch, and pain, with pressure and vibration.

1. Touch properties

Touch/pressure models body-part strains, vibrations, and motions. Touches are a function of a set of pressures/vibrations over surfaces.

Touches have continuity, with no parts and no structure.

Touches have rate of onset and rate of offset.

2. Space

Touch/feel analyzes topographic maps to locate touch directions and distances in three-dimensional space.

3. Language

Deformation/vibration/pressure properties and categories have a vocabulary, grammar (about relations and properties), and language (with contexts, associations, and meanings).

Temperature, Magnitude, and Warmth/Coolness

Temperature finds material warmth or coolness.

Temperature refers to object that makes temperature, not to accidental or abstract properties nor to concepts about temperature.

Temperature is a synthetic sense, with some analysis.

1. Temperature properties

Temperature/level models skin heat flows. Temperatures are a function of a set of heat flows over surfaces.

Temperatures have continuity, with no parts and no structure.

Temperatures have rate of onset and rate of offset.

2. Space

Temperature/warmth/coolness analyzes topographic maps to locate touch/temperature directions and distances in three-dimensional space.

3. Language

Warm/cool/magnitude properties and categories have a vocabulary, grammar (about relations and properties), and language (with contexts, associations, and meanings).

Smell, Concentration, and Odor

Smell refers to object that makes smell, not to accidental or abstract properties nor to concepts about smell.

Smell is an analytic sense, with some synthetic properties. Smells blend in concordances and discordances, like music harmonics. Pungent and sweet can mix. Pungent and sweaty can mix. Perhaps, smells can cancel other smells, not just mask them.

Smell awareness models region vibrations. Smell has the main odors, with odor strength/concentration, sweetness, pungency, and so on.

1. Smell properties

Odor/strength models volatile-chemical concentration gradient. Smells are a function of a set of concentrations of classes of gas molecules.

Smells have continuity, with no parts and no structure.

Smells have rate of onset and rate of offset.

Odors have volatility, reactivity, and size.

Odors are camphorous, fishy, fruity, malty, minty, musky, spermous, sweaty, or urinous.

Odors can be acidic, acrid or vinegary, alliaceous or garlicy, ambrosial or musky, aromatic, burnt or smoky, camphorous or resinous, ether-like, ethereal or peary, floral or flowery, foul or sulfurous, fragrant, fruity, goaty or hircine or caprylic, halogens or mineral, minty, nauseating, peppermint-like, pungent or spicy, putrid, spearmint-like, sweaty, and sweet.

Aromatic, camphorous, ether, minty, musky, and sweet are similar. Acidic and vinegary are similar. Acidic and fruity are similar. Goaty, nauseating, putrid, and sulfurous are similar. Smoky/burnt and spicy/pungent are similar. Camphor, resin, aromatic, musk, mint, pear, flower, fragrant, pungent, fruit, and sweets are similar. Putrid or nauseating, foul or sulfur, vinegar or acrid, smoke, garlic, and goat are similar. Vegetable smells are similar. Animal smells are similar.

Acidic and sweet smells are opposites. Sweaty and sweet smells are opposites.

Smells can have harshness and be sharp, or they can have dullness and be smooth.

Smells can be cool or hot.

2. Space

Smell/odor analyzes topographic maps to locate smell directions and distances in three-dimensional space.

3. Language

Odor/concentration properties and categories have a vocabulary, grammar (about relations and properties), and language (with contexts, associations, and meanings).

Taste, Concentration, and Flavor

Taste refers to object that makes taste, not to accidental or abstract properties nor to concepts about taste.

Taste is a synthetic sense, with some analytic properties.

Taste awareness models surface and region vibrations. Taste has sourness, saltiness, bitterness, sweetness, and savoriness, with flavor strength/concentration, acidity, saltiness, and so on.

1. Taste properties

Flavor/strength models soluble-chemical concentration gradient. Tastes are a function of a set of concentrations of classes of liquid molecules.

Tastes have continuity, with no parts and no structure.

Tastes have rate of onset and rate of offset.

Flavors have acidity, polarity, and size.

Taste detects salt, sweet, sour, bitter, and savory:

Salt has neutral acidity, is ionic, and has medium size.

Sweet has neutral acidity, is polar, and has large size.

Sour is acidic, is ionic, and has small size.

Bitter is basic, is polar, and has small, medium, or large size.

Savory has neutral acidity, is ionic, and has large size.

Sour acid and salt are similar. Bitter and salt are similar. Sweet and salt are similar.

Sour (acid) and bitter (base) are opposites. Sweet (neutral) and sour (acid) are opposites. Salt and sweet are opposites.

2. Space

Taste/flavor analyzes topographic maps to locate taste directions and distances in three-dimensional space.

3. Language

Flavor/concentration properties and categories have a vocabulary, grammar (about relations and properties), and language (with contexts, associations, and meanings).

Pain, Level, and Pain Type

Pain refers to accidental or abstract properties, not to concepts about pain or objects.

Pain is an analytic sense.

Pain awareness models region high-frequency vibrations and strong torsions. Pain has pressure, aching, squeezing, cramping, gnawing, burning, freezing, numbness, tingling, shooting, stabbing, and electric, with strength/intensity, acute/chronic, and so on.

1. Pain properties

Pain/level models pain-chemical concentration gradient. Pains are a function of chemical concentrations over surfaces.

Pains have continuity, with no parts and no structure.

Pains have no rate of onset and no rate of offset.

2. Space

Pain analyzes topographic maps to locate pain directions and distances in three-dimensional space.

3. Language

Pain type/level properties and categories have a vocabulary, grammar (about relations and properties), and language (with contexts, associations, and meanings).

Section about Perception, Cognition, and Meaning

Perception

Perception acquires information about physical objects and events using unconscious inductive inference. Perception/recognition depends on alertness and attention and on memory and recall. Perception requires sensation but not awareness or consciousness.

The what pathway finds features, objects, scenes, and space. Vision uses fast multisensory processes and slow single-sense processes. Brain processes object recognition and color from area V1, to area V2, to area V4, to inferotemporal cortex. Cortical area V1, V2, and V3 damage impairs shape perception and pattern recognition. Perception involves amygdala, septum, hypothalamus, insula, and cingulate gyrus.

Vision can perceive/recognize patterns, shapes, and objects. To survive and reproduce, organisms need to recognize food/prey, dangerous situation/predator, and related organism: mate, child, relative, and self. They can recognize different levels, such as food that is easier to get or more nutritious.

Perception involves sensation, feature, object, and scene distinction, identification/recognition, organization/categorization, indexing, association, and narrative.

Vision requires time to gather information from separated locations. Vision requires space to gather information from separated times.

Perception uses statistical methods, data clustering, principal-component analysis, hypothesis testing, trial and error, constraint satisfaction, optimization, gestalt principles, rules, and geometry to perceive features, objects, scenes, and space and make categories. Perception integrates local and global information, takes account of context, and uses associations and memories. It changes discrete to continuous.

1. Boundaries and regions

Perception first finds boundaries and regions.

1.1. Boundaries

Vision uses edge information to make object boundaries and adds information about boundary positions, shapes, directions, and noise. Neuron assemblies have different spatial scales to detect different-size edges and lines. Tracking and linking connect detected edges.

Differentiation subtracts second derivative from intensity and emphasizes high frequencies.

Sharp brightness (or hue) difference indicates edge or line (edge detection).

Point clustering indicates edges.

Vision uses contrast for boundary making (sketching). Lateral inhibition distinguishes and sharpens boundaries.
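
A one-dimensional sketch of lateral inhibition, assuming each unit subtracts a fixed fraction of its neighbors' inputs: uniform regions are suppressed equally, while responses undershoot and overshoot at the boundary, as in Mach bands.

```python
def lateral_inhibition(intensities, inhibition=0.4):
    """Each unit's response is its input minus a fraction of its neighbors' inputs."""
    n = len(intensities)
    out = []
    for i in range(n):
        left = intensities[i - 1] if i > 0 else intensities[i]
        right = intensities[i + 1] if i < n - 1 else intensities[i]
        out.append(intensities[i] - inhibition * (left + right) / 2)
    return out

# A step edge between a dark and a bright region:
step = [10, 10, 10, 30, 30, 30]
response = lateral_inhibition(step)
# The darkest and brightest responses sit on either side of the edge.
```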

Pattern recognition uses shortest line, extends line, or links lines. Secondary visual cortex neurons can detect line orientation.

Contour outlines indicate objects and enhance brightness and contrast. Irregular contours and hatching indicate movement. Contrast enhances contours, for example with Mach bands. Contrast differences divide large surfaces into parts.

Vision can separate scene into additive parts, by boundaries, rather than using basis functions.

1.2. Regions

Regions form by clustering features, smoothing differences, relaxing/optimizing, and extending lines using edge information.

Surfaces recruit neighboring similar surfaces to expand homogeneous regions by wave entrainment. Progressive entrainment of larger and larger cell populations builds regions using synchronized firing.
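
Recruiting similar neighbors can be sketched as flood-fill region growing over a grid of intensities (an algorithmic stand-in for wave entrainment, not a neural model):

```python
def grow_region(grid, seed, tolerance=1):
    """Flood-fill region growing: recruit neighbors whose value is within
    `tolerance` of the current cell, expanding a homogeneous region from a seed."""
    rows, cols = len(grid), len(grid[0])
    region, frontier = {seed}, [seed]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(grid[nr][nc] - grid[r][c]) <= tolerance:
                    region.add((nr, nc))
                    frontier.append((nr, nc))
    return region

image = [
    [5, 5, 9, 9],
    [5, 5, 9, 9],
    [5, 6, 9, 9],
]
left = grow_region(image, (0, 0))   # the homogeneous left-hand region
```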

Regions can form by splitting spatial features or scenes. Parallel circuits break large domains into similar-texture subdomains for texture analysis. Parallel circuits find edge ends by edge interruptions.

Region analysis finds, separates, and labels visual areas by enlarging spatial features or partitioning scenes.

HBF or RBF basis functions can separate scene into multiple dimensions.

Vision can connect pieces in sequence and fill gaps.

Vision can use dynamic programming to optimize parameters.

Vision accurately knows surface tilt and slant, directly, by tilt angle itself, not by angle function [Bhalla and Proffitt, 1999] [Proffitt et al., 1995].

Averaging removes noise by emphasizing low frequencies and minimizing high frequencies.

1.3. Region expansion and contraction

Boundaries, surfaces, regions, features, objects, scenes, and space result by optimal expansion and contraction.

Clustering features, smoothing differences, relaxation, optimization, and extending lines make surfaces and regions.

Association (over space and/or time), spreading activation (excitation of adjacent areas), and progressive entrainment (by synchronized firing) enlarge surfaces and regions.

Lateral inhibition makes region boundaries, minimizes surfaces and regions, and separates figure and ground.

Parallel texture-analysis circuits break large areas into smaller similar-texture areas. Parallel edge-interruption circuits find edge ends.

Constraint satisfaction, such as minimization [Crane, 1992], detects edges and minimizes areas.

1.4. Figure and ground

Vision separates figure and ground by detecting edges and increasing homogeneous regions, using constraint satisfaction [Crane, 1992]. Smaller region is figure, and nearby larger region is ground. Edges separate figure and ground.

1.5. Gestalt

Vision groups points, lines, and regions into three-dimensional representations (gestalt) depending on figure-ground relationship, proximity, similarity, continuity, closure, connectedness, and context [Ehrenfels, 1891] [Koffka, 1935] [Köhler, 1929]. Figures have internal consistency and regularity (Prägnanz).

1.6. Part relations

Objects are wholes and have parts. Wholes are part integrations or configurations and are about gist. Parts are standard features and are about details.

2. Features

Perception then finds features. (Object classification first needs high-level feature recognition. Brain extracts features and feeds forward to make hypotheses and classifications.)

Features can remain invariant as images deform or move.

2.1. Vertex perception

Vision can label vertices as three-intersecting-line combinations. Intersections can be convex or concave, to right or to left.

2.2. Signal detection theory

Signal detection theory can find patterns in noisy backgrounds. Patterns have stronger signal strength than noise. Detectors have sensitivity and response criteria.
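
In standard signal detection theory, sensitivity (d') is the separation of the signal and noise means in noise-standard-deviation units, and the response criterion is a cutoff on the observed value; a minimal sketch:

```python
def d_prime(signal_mean, noise_mean, sigma):
    """Detector sensitivity: signal-noise separation in noise-s.d. units."""
    return (signal_mean - noise_mean) / sigma

def respond(observation, criterion):
    """Response criterion: report 'signal present' when the observation exceeds it."""
    return observation > criterion

# Signal at mean 2.0, noise at mean 0.0, unit noise: a readily detectable pattern.
sensitivity = d_prime(2.0, 0.0, 1.0)
```

Shifting the criterion trades hits against false alarms without changing sensitivity, which is why the two are reported separately.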

2.3. Segmentation

Vision separates scene features into those belonging to the object and those not belonging (segmentation problem). Large-scale analysis comes first, and then local constraints apply. Context hierarchically divides the image into non-interacting parts.

Using Bayesian theory, image segmentation extends edges to segment image and surround scene regions.

2.4. Deconvolution

Feature deconvolution separates feature from feature mixture.

2.5. Relational matching

For feature detection, brain can use classifying context or constrain classification {relational matching}.

3. Shapes

Vision can recognize geometric shapes. Shapes have lines, line orientations, and edges. Shapes have surfaces, with surface curvatures, orientations, and vertices. Shapes have distances and natural metrics, such as lines between points.

3.1. Shading for shape

If brain knows reflectance and illumination, shading can reveal shape. Line and edge detectors can find shape from shading.

3.2. Shape from motion

Motion change and retinal disparity are equivalent perceptual problems, so finding distance from retinal disparity and finding shape from motion changes use equivalent techniques.

3.3. Axes

Shapes have natural position axes, such as vertical and horizontal, and natural shape axes, such as long axis and short axis. Vision uses horizontal, vertical, and radial axes for structure and composition.

3.4. Shape functions

Vision can use shape functions [Grunewald et al., 2002]. Shapes have:

Convex, concave, or overlapping lines and surfaces.

Shape-density functions, with projections onto axes or chords.

Axis and chord ratios (area eccentricity).

Perimeter squared divided by area (compactness).

Minimum chain-code sequences that make shape classes (concavity tree), which have maximum and minimum concavity-shape numbers.

Connectedness (Euler number).
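
For example, the compactness descriptor above can be computed directly; a circle minimizes it at 4π, and every other shape scores higher:

```python
import math

def compactness(perimeter, area):
    """Perimeter squared over area: minimized (4*pi) by a circle,
    larger for elongated or irregular shapes."""
    return perimeter ** 2 / area

r, s = 1.0, 1.0
circle = compactness(2 * math.pi * r, math.pi * r ** 2)   # = 4*pi
square = compactness(4 * s, s ** 2)                        # = 16
```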

4. Object representations

From features and shapes, perception finds objects. Objects have shape, size, orientation, feature and part relations, color, texture, and location.

Object representations include generalized cone, generalized cylinder, structural description, template, and vector coding.

4.1. Generalized cone

Generalized cones describe three-dimensional objects as conical shapes, with axis length/orientation and circle radius/orientation. Main and subsidiary cones can be solid, hollow, inverted, asymmetric, or symmetric. Cone surfaces have patterns and textures. Cone descriptions can use three-dimensional Fourier spherical harmonics, which have volumes, centroids, inertia moments, and inertia products.

4.2. Generalized cylinder

Generalized cylinders describe three-dimensional objects as cylindrical shapes, with axis length/orientation and circle radius/orientation. Main and subsidiary cylinders can be solid, hollow, inverted, asymmetric, or symmetric. Cylindrical surfaces have patterns and textures. Cylinder descriptions can use three-dimensional Fourier spherical harmonics, which have volumes, centroids, inertia moments, and inertia products.

4.3. Structural description

Structural descriptions are about object parts and spatial relations. Structure units can be three-dimensional generalized cylinders, three-dimensional geons, or three-dimensional curved solids. Structural descriptions are only good for simple recognition {entry level recognition}, not for superstructures or substructures. Vision uses viewpoint-dependent recognition, not structural descriptions.

4.4. Template

Defining properties make templates. Templates are structural descriptions. Templates can be coded shapes, images, models, prototypes, patterns, or abstract-space vectors.

4.5. Vector coding

Vector codings are sense-receptor intensity patterns and/or brain-structure neuron outputs, which make feature vectors. Vector coding can identify rigid objects in Euclidean space. Vision uses non-metric projective geometry to find invariances by vector analysis [Staudt, 1847] [Veblen and Young, 1918]. Motor-representation middle and lower levels use code that indicates direction and quantity.

5. Pattern and object recognition

Vision recognizes/identifies patterns and objects. Vision uses statistics to assign probability to patterns recognized.

5.1. Association

The first and main pattern-recognition mechanism is association. Complex recognition uses multiple associations.

5.2. Templates

Vision compares input patterns to template using constraint satisfaction on rules or criteria and then selects best-fitting match, by score. If input activates one representation strongly and inhibits others, representation sends feedback to visual buffer, which then augments input image and modifies or completes input image by altering size, location, or orientation. If representation and image then match even better, vision recognizes object. If not, vision inhibits or ranks that representation and activates next representation.

Matching can use heuristic search to find feature or path. Low-resolution search over whole image looks for matches to feature templates.
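
A minimal stand-in for matching by score, assuming features are discrete labels (real template matching would operate on images and use constraint satisfaction as described above):

```python
def best_template(input_features, templates):
    """Score each template by how many of its features the input contains,
    then select the best-fitting match by score."""
    def score(template):
        return sum(1 for f in template if f in input_features)
    ranked = sorted(templates.items(), key=lambda kv: score(kv[1]), reverse=True)
    return ranked[0][0]

# Hypothetical templates and scene features for illustration:
templates = {
    "cup":   {"handle", "hollow", "round"},
    "plate": {"flat", "round"},
}
seen = {"round", "hollow", "handle", "white"}
```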

5.3. Detection threshold

To test a pattern against a feature set, for each feature the pattern has, add that feature's distinctiveness weight to the object's distinctiveness-weight sum until the sum is greater than a threshold (detection threshold). (Set the detection threshold using context.)

5.4. Distinctiveness weight

Object features have weights (distinctiveness weight), based on how well feature distinguishes object from other objects. Consulting the feature-vs.-weight table (perhaps built automatically using experiences) shows whether object is present.
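
The threshold test and the weight table combine into one short procedure; the weights below are hypothetical values for illustration, not measured distinctiveness:

```python
def detect(pattern_features, weight_table, threshold):
    """Sum distinctiveness weights of the features present; declare the object
    detected once the running sum exceeds the threshold."""
    total = 0.0
    for feature, weight in weight_table.items():
        if feature in pattern_features:
            total += weight
            if total > threshold:
                return True
    return False

# Hypothetical weights for recognizing a face; stripes are not distinctive for it.
face_weights = {"eyes": 0.5, "nose": 0.3, "mouth": 0.3, "stripes": 0.05}
```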

5.5. Gabor transform

Gabor transform/filter represents input as a series whose terms are independent, localized visual features with constant amplitudes and basis functions; summing the terms forms the series [Palmer et al., 1991]. Visual-cortex complex cells act like Gabor filters, expandable as power series whose terms have variables raised to powers. Complex-cell types are tuned to specific surface orientations and object sizes. Gabor-filter complex cells typically make errors at edge gaps, small textures, blurs, and shadows.
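
A one-dimensional Gabor filter is a sinusoid under a Gaussian envelope, localized in both space and frequency; a minimal sketch:

```python
import math

def gabor(x, sigma=1.0, frequency=0.5, phase=0.0):
    """1-D Gabor filter value: Gaussian envelope times a cosine carrier."""
    envelope = math.exp(-x * x / (2 * sigma * sigma))
    carrier = math.cos(2 * math.pi * frequency * x + phase)
    return envelope * carrier

# The filter peaks at its center and fades to zero away from it:
center = gabor(0.0)
far = gabor(5.0)
```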

5.6. Histogram density estimate

Histogram density estimate calculates density. The algorithm tests various cell sizes by the nearest-neighbor method or the kernel method. Density is the average number of points per unit volume.

5.7. Kernel method

Kernel method tests various cell sizes, to see how small volume must be to have only one point.

5.8. Linear discriminant function

Linear discriminant function finds abstract-space hypersurface boundary between space regions (classes), using region averages and covariances.
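
In one dimension with equal class variances and priors, the hypersurface boundary reduces to the midpoint between the class means; a minimal sketch (a full linear discriminant would also use the covariances):

```python
def linear_discriminant(mean_a, mean_b):
    """One-dimensional two-class discriminant, equal variances and priors:
    the boundary is the midpoint between the class means."""
    boundary = (mean_a + mean_b) / 2
    def classify(x):
        # Values on mean_a's side of the boundary belong to class A.
        return "A" if (x - boundary) * (mean_a - mean_b) > 0 else "B"
    return boundary, classify

boundary, classify = linear_discriminant(0.0, 4.0)
```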

5.9. Memory-based model

Memory-based models match input-pattern components to template-pattern components, using weighted sums, to find highest scoring template. Scores are proportional to similarity. Memory-based models uniquely label component differences. Memory-based recognition, sparse-population coding, generalized radial-basis-function (RBF) networks, and hyper-basis-function (HBF) networks are similar algorithms.

5.10. Nearest neighbor method

Nearest neighbor method tests various cell sizes to see how many points (nearest neighbor) are in cells.
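
The nearest-neighbor idea yields a simple density estimate: fix k, measure how large a cell must be to contain k points, and divide; a one-dimensional sketch:

```python
def knn_density(points, x, k):
    """k-nearest-neighbor density estimate in one dimension:
    density ~ k / (n * volume), where volume is twice the k-th neighbor distance."""
    distances = sorted(abs(p - x) for p in points)
    volume = 2 * distances[k - 1]
    return k / (len(points) * volume)

data = [0.0, 0.1, 0.2, 0.3, 2.0]
dense = knn_density(data, 0.15, k=2)    # inside the cluster
sparse = knn_density(data, 2.0, k=2)    # near the outlier
```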

5.11. Cluster analysis

Cluster analysis finds classes or subsets in abstract space.

5.12. Selectionism

Selectionism compares multiple variations and selects best match.

5.13. Pattern matching

Pattern matching tries to match two network representations by two parallel searches, starting from each representation. Searches look for similar features, components, or relations. When both searches meet, they excite the intermediate point (not necessarily simultaneously), whose signals indicate matching.

5.14. Pattern theory

Pattern theory uses feedforward and feedback processes and relaxation methods to move from input pattern toward memory pattern. Algorithm uses probabilities, fuzzy sets, and population coding, not formal logic.

5.15. Production systems

Production systems use IF/THEN rules on input to conditionally branch to one feature or object. Production systems have three parts: a fact database, production rules, and a rule-choosing control algorithm:

Database: Fact-database entries code for one state {local representation, database}, allowing memory.

Rules: Production rules have form "IF State A, THEN Process N". Rules with same IF clause have one precedence order. Rules move from one state to the next.

Controller: The controller checks all rules, performing steps in sequence {serial processing}. For example, if the system is in State A and a rule starts "IF State A", then the controller performs Process N, which uses fact-database data.

Discrete systems have state spaces whose axes represent parameters, with possible values. System starts with initial-state parameter settings and moves from state to state, along a trajectory, as controller applies rules.
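
A toy production system with hypothetical states, showing the rules/controller split and the resulting state-space trajectory:

```python
# Production rules of the form "IF <state>, THEN move to <next state>":
rules = {
    "hungry": "seeking_food",
    "seeking_food": "eating",
    "eating": "sated",
}

def run(state, max_steps=10):
    """Controller: while some rule's IF clause matches the current state,
    apply its THEN clause, tracing a trajectory through the state space."""
    trajectory = [state]
    for _ in range(max_steps):
        if state not in rules:      # no rule matches: halt
            break
        state = rules[state]
        trajectory.append(state)
    return trajectory
```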

6. Pattern recognition properties

Frequency is more important than recency.

Spatial organization and overall pattern are more important than parts. Parts are more important for nearby objects.

Size/scale and orientation do not matter.

Pattern recognition uses gray-level changes, not colors. Motion detection uses gray-level and pattern changes.

7. Pattern generalization

Pattern generalization eliminates one dimension, uses one subpattern, or includes outer domains.

8. Perception properties

Perceptions are on or from a surface at a distance along all directions from eye/head in continuous three-dimensional space. Perceptions have intensity and quality properties.

8.1. Continuity

Perceptions appear continuous (not particulate), so continuous geometric figures are in continuous space. Perceptions do not have parts, but they have components [Hardin, 1988].

8.2. Context

Perceptions appear different in different contexts. Perception can have illusions, omissions, and additions.

9. Object concepts and principles

Brains construct concepts and principles about:

Features, objects, scenes, and space categories

Feature, object, scene, and space properties

Shapes

Spatial relations

Temporal relations

Associations

Contexts

Perception Properties

In the frontal lobes, perception integrates "what" and "where" information, separately and jointly, to unify location and sense information, represent physical three-dimensional space, represent sense and multisense information, and model substances, densities, motions, and radiations. Perception has knowledge of sense categories, relations, associations, and properties.

Perceptions are on or from a surface at a distance along all directions from eye/head in continuous three-dimensional space. Perceptions have intensity and quality properties.

Perceptions are about surface appearance and object properties. They are for surface, feature, object, and scene identification, location, categorization, memory, and recall. Perceptions provide feedback.

Surfaces, features, objects, and scenes maintain continuous existence as subjects and objects of verbs, with pasts/histories, current events, and futures, and have movement patterns, making narratives.

Perceptions are similar to, different from, opposite to, and exclusive of other perceptions, in a unified system of relations. Perception properties statistically covary, and perception uses the covariances. Perception is a complete and consistent system.

Perceptions are for use by muscles/actions and glands/feelings. Perception is for behavior: action preparation and action.

Perceptions are intermediate behaviors. (The first perceptions came from attraction and/or repulsion.)

Note: People can perceive without experience, as in seeing while sleepwalking and when awakened by a loud sound or bright light.

1. Processing

Perception requires time to gather information from separated locations and requires space to gather information from separated times.

Information goes from local to global, to help find regions, categories, and concepts (that people can express in language). Perceptions (and imagination and memory) have linguistic and non-linguistic labels/categories/patterns for brightness, hue, and saturation and spatial and temporal locations and extensions.

Information goes from global to local, general goes to particular, and category goes to detail/individual, to help find distinctions. Perceptions detect individual differences and fine gradations. Perceptual differences help discriminate/distinguish shapes, boundaries, regions, objects, depths, and figures from ground.

Active non-conscious cognitive processes {constructivism} use associations, spatial and temporal relations, extrapolations, interpolations, interpretations, analysis, synthesis, and schema to make cognitions about physical phenomena, number, space, time, causation, and logic.

Cortical information processing builds perceptions, associations, concepts, memories, and imagination about observer, observations, eyes, head, body, sensation sources, and sense properties. Perceptions can have example objects as paradigms, such as oranges for orange color.

2. Light, brightness, and color perception

Vision processes color through successive cortical regions and uses unconscious inductive inference to gain information about, and construct, color, brightness, and light properties and color coordinates. Vision distinguishes, relates, labels, and indexes color properties and categories.

Vision uses differentiation and lateral inhibition to increase contrast, suppress noise, sharpen boundaries, and contract and distinguish regions. Vision uses integration and spreading activation to reduce contrast, fill in, blur boundaries, and expand and unify regions. Vision uses associations, relaxation and optimization, clustering, constraint satisfaction, signal detection, statistics, principal component analysis, and computation. Vision also depends on memory and recall. Vision uses top-down and bottom-up processes. All processes help to make color and brightness boundaries and categories.

Vision builds a complete and consistent system of color and brightness properties, categories, and relations. Color and brightness are continuous, have multiple parameters, and are spatial.

Vision constructs intensity, quality, and surface appearances at distances along all directions from eye/head in continuous three-dimensional space. Perception constructs vision as an observer with observations.

Light, brightnesses, and colors and their properties provide the bases for gathering location and object information to build scenes and space.

3. Continuity

Perceptions appear continuous (not particulate), so continuous geometric figures are in continuous space. Perceptions do not have parts, but they have components [Hardin, 1988].

Massively parallel digital/discrete signals become analog/continuous, with diverging and converging signals blending into fluid patterns, as brains use larger-scale and longer-time processes. Labeled lines become neuron assemblies, and signals become patterns.

4. Multiple parameters

Unlike physical quantities, perceptions have multiple intensive and/or extensive quantities. Perceptions are quantity sets/information multiples, over surfaces and time intervals. Perception labels the pathways/variables.

Vision represents colors using multi-dimensional spaces/arrays for brightness, hue, saturation, colors, and space location.

5. Macroscopic level

Organism small-scale processes lead directly to their large-scale processes. Electrochemical signals in neurons and glia lead directly to neuron-assembly representations of perceptions. Experiences are even higher bulk properties. (It is like molecular statistical mechanics and molar/mass classical thermodynamics.)

Brains (unlike computers) have gigabit registers and processors, with large-scale structures and parallel-and-serial processes that model object spatial relations and space, with three-dimensional spherical and rectangular coordinates. Brains know details, such as colors, spatial relations, and individuals, and general information, such as categories, regions, and the whole visual field.

6. Perception, space, and time

Spaces have three-dimensional coordinate systems, with viewpoint at coordinate origin. Egocentric spherical coordinates are for vertical/elevation and horizontal/azimuth angles and radial distances. Allocentric/geocentric rectangular coordinates are for front-back, left-right, and down-up. Spaces use physical-distance metrics. Spaces have one-dimensional time.

Perception integrates space and time.

Perceptions are all in the same continuous three-dimensional egocentric and allocentric/geocentric/stationary space and in the same continuous time.

People experience time as flowing (at differing rates). Observer travels through time and has memories.

Perception has knowledge of space, time, space and time categories and relations, kinetics, and dynamics.

7. Viewpoint and egocentric space

Perception has egocentric space, with a viewpoint. People experience a coordinate origin that correlates with brain location. An observer observes along sight lines to color and brightness experiences.

Eye, head, and body can move, so viewpoint moves in stationary space. Observer travels through space and has trajectories. A viewpoint/observer makes perspective, which can change.

Perhaps perception evolves/develops from using a kind of vector algebra and/or algebraic geometry to using a kind of geometric algebra. Geometric algebra directly makes spatial objects and space and can transform coordinates/properties.

Note: People cannot perceive empty space or its coordinates; they perceive space and build coordinates from feature/object color and spatial relations.

Unlike a person looking at a printed page or monitor display, nothing is outside looking in. Unlike a computer, people do not write output or read input, and have no input or output devices with writing or reading mechanisms.

8. Stationary space

People feel that they are moving through space, so people experience space as stationary. Space has ground (down) and sky (up), left and right, and front and behind. Perception models the physical world. Far away observations are stationary. Objects have trajectories. All motions and relations coordinate with eye, head, and body motions.

Space is stable but continually updated. Stationary space can maintain colors, optimize trajectories, and allow tracking of objects and self.

Stationary space relies on a set of brain regions that stays the same for each observer location in a setting (room, field, town, and so on).

Perception models motions/trajectories/transformations. In particular, as eye, head, and/or body move, tensor operations transform egocentric spherical coordinates to maintain allocentric/geocentric stationary space.
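The coordinate transform described above can be sketched as ordinary rotation algebra. This is only an illustrative computation of the kind of operation the text attributes to perception, with an assumed sign convention (positive azimuth = leftward turn), not a claim about neural implementation.

```python
import math

def spherical_to_rect(r, azimuth, elevation):
    # Egocentric spherical (radial distance, azimuth, elevation)
    # to egocentric rectangular (right-left, up-down, near-far).
    x = r * math.cos(elevation) * math.sin(azimuth)
    y = r * math.sin(elevation)
    z = r * math.cos(elevation) * math.cos(azimuth)
    return x, y, z

def undo_head_rotation(point, head_azimuth):
    # Rotate the egocentric point back by the head's azimuth, so the
    # allocentric coordinates stay the same as the head turns.
    x, y, z = point
    c, s = math.cos(head_azimuth), math.sin(head_azimuth)
    return (c * x + s * z, y, -s * x + c * z)

# Example: an object 2 m straight ahead; the head then turns 0.3 rad,
# so the object's egocentric azimuth becomes -0.3 rad.
allocentric = undo_head_rotation(spherical_to_rect(2.0, -0.3, 0.0), 0.3)
```

The recovered point is back at (0, 0, 2): although the egocentric coordinates changed with the head movement, the allocentric coordinates stay fixed, which is what keeps stationary space stationary.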

Cognition

Cognitive processes include perception, selection, attention, categorization, recognition, identification, indexing, association, learning, memory, recall, language, and reasoning.

Cognition relates to decision-making, action initiation, dreaming, imagination, will, voluntary motions, and affective mechanisms (pain, pleasure, and mood: happy, sad, angry, afraid, surprised, confused).

Brains can derive the meaning of perceptions, concepts, and principles.

1. Cognitions

Cognitions are intuitions or concepts.

Intuitions are conscious, immediate, and singular cognitions. They are objective representations about an object, property, or state. Empirical intuitions are perceptions. A priori intuitions are "meta-perceptions" or "sub-perceptions".

Concepts are conscious, mediated, and general cognitions. They are objective representations about objects, properties, or states. Empirical concepts are perceptual categories. A priori concepts are reasoning, feeling, and mood categories. Concepts have subclasses and superclasses:

Concept extension is more specific subclasses that still have class properties. For example, things can be material or immaterial, and material things can be alive or not alive. Alternatively, concept extension is all instances of the class or all things with a property.

Concept intension is the list of properties the concept inherits from its superclasses in the hierarchy. For example, people are alive and material.
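Extension and intension can be made concrete with a toy hierarchy. The concept names and properties below are illustrative examples, not part of the text's account.

```python
# Toy concept hierarchy: each concept maps to (superclass, its own
# distinguishing property). Names are illustrative.
hierarchy = {
    "thing":    (None,       None),
    "material": ("thing",    "material"),
    "alive":    ("material", "alive"),
    "person":   ("alive",    "rational"),
}

def intension(concept):
    # The properties a concept inherits up the hierarchy.
    props = []
    while concept is not None:
        parent, prop = hierarchy[concept]
        if prop:
            props.append(prop)
        concept = parent
    return props

def extension(concept):
    # The immediate subclasses that still have the concept's properties.
    return [c for c, (parent, _) in hierarchy.items() if parent == concept]
```

So intension("person") lists "rational", "alive", and "material" (people are alive and material, as in the example above), while extension("material") lists its more specific subclasses.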

1.1. Concept types

Concepts have types.

Opposites: A concept can have opposite sub-concepts, perhaps with an in-between sub-concept. Such concepts can be about direction or about substance, state, and/or process:

Substance/state/process examples are dark/middling/light, cold/neutral/hot, and base/neutral/acid. Such opposites are different in kind.

Direction examples are in/neutral/out, contracting/stationary/expanding, valley/plain/mountain, trough/middle/peak, take/neutral/give, and defend/stationary/attack. Because space is isotropic, directions are equivalent, so opposites based on direction are not different in kind.

Phases: A concept can have two fundamentally different substances/states/phases, with no middle state/phase. An example is fermion spin, with values -1/2 and +1/2. A concept can have three fundamentally different substances/states/phases. An example is solid, liquid, and gas. A concept can have more than three fundamentally different substances/states/phases. An example is ice, which has many different frozen states.

Group: A concept can have sub-concepts that form a mathematical group. A group has an operation (such as addition), and perhaps a second operation, such as multiplication. The operation can be associative. It can be commutative or anti-commutative. The group can have an identity element and inverse elements. Examples include geometric-figure rotations, such as the color circle. Values may be angles from -180 degrees to +180 degrees relative to a reference line (polar axis).

Vector: A concept can be a vector, with magnitude and direction. Values range from negative minimum through zero to positive maximum. Its sub-concepts have vector addition, inner product, outer product, and geometric product. Examples include force and momentum.

Conservation: A concept can conserve a quantity. Examples are conservation of energy, mass, and charge. Total energy is constant, though kinetic energy and potential energy interchange. Conservation is expressible in percent, so, for example, kinetic-energy percent plus potential-energy percent always equals 100%.
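The kinetic/potential interchange can be checked numerically. A sketch for a dropped object with illustrative values (mass, height, and g chosen arbitrarily; no friction assumed):

```python
# Energy bookkeeping for a dropped object (no friction): kinetic and
# potential energy interchange while their sum stays constant.
g, mass, height = 9.8, 2.0, 10.0
total = mass * g * height            # all potential at the start

for fallen in (0.0, 2.5, 5.0, 10.0):
    potential = mass * g * (height - fallen)
    kinetic = total - potential      # conservation fixes the remainder
    pe_pct = 100 * potential / total
    ke_pct = 100 * kinetic / total
    assert abs(pe_pct + ke_pct - 100) < 1e-9
```

At every point of the fall the percentages sum to 100%, exactly as stated above.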

2. Representations

Cognitions always are representations.

Representations have content, such as objects, properties, states, reasoning, feelings, mood, and events.

Representations have basis/origin, such as empirical or a priori.

Representations can be conscious or unconscious. Unconscious representations are information intermediaries. Conscious representations can be experiences or be cognitions of concepts or intuitions.

3. Listening and reading

Listening and reading are cognitive processes that use attention, selection, association, memory, recall, and reasoning to find meaning and understand the scene/situation/narrative.

Listening requires ability to:

Select sound out of all sounds and noise (selection).

Attend to selected sound (attention).

Recognize, identify, and categorize sound (understanding), such as sound source, associations, and meaning.

Listening to speech requires ability to:

Assign symbols to selected/attended sounds (phonics and phonemic awareness).

Form symbol sequences into words (phonics and phonemic awareness), using speech segmentation and identification.

Give words meaning using pictures or concepts (vocabulary).

Parse word strings to put words into contexts (recognize subject, verb, object, and their modifiers) and identify a scenario/situation and a narrative (comprehension).

Reading requires ability to:

Recognize, identify, and categorize selected/attended letters/marks and assign known phonemes/symbols to them (phonics).

Form letter/mark strings into words, assign sounds (phonemic awareness) to words, and give words meaning using pictures or concepts (vocabulary).

Parse word strings to put words into contexts (recognize subject, verb, object, and their modifiers) and identify a scenario/situation and a narrative (comprehension).

Fluency is accuracy and speed/efficiency at these tasks.

During reading or listening, people may:

Use whole context to find meaning.

Understand theme and references.

Decide to remember.

Associate with previous knowledge.

Attend to idea or action.

Use reasoning to make an inference.

Listening and reading require an integrated symbol system, with many connected datatypes at different hierarchy levels, with meaning as a place in the meaning relationship network.

Meaning

Meaning requires symbols, symbol systems, reference frames, association, and categorization. Spatial and other relations make categories and associations, and so make an integrated symbol system, with many connected datatypes at different hierarchy levels. Meaning is a place in a meaning relationship network.

Perception has relations in space, and space has structure from perception. Correspondence with the physical world provides meaning. Perceptions have meaning because they are in space.

1. Symbols

Symbols represent, reference, and/or point to things, ideas, relations, processes, and functions. For example, the symbol A can represent "the highest grade". Anything can be a symbol of something else. Anything can be the reference of a symbol. Events, actions, and motions can be symbols, as can ideas of events, actions, and motions.

2. Symbol systems

A set of symbols can be a complete and consistent system. All symbols are of similar type, and all symbol references are in one category.

For example, a symbol system can be about single characters for phonemes, in which the alphabet letters A through Z are symbols for speech sounds.

As another example, a symbol system can be about single characters for success levels in school. The symbol A represents "the highest level", the symbol F represents "the lowest level", and so on.

3. Reference frames and integrated symbol systems

An integrated symbol system has relations among its symbols. Symbols have an order, values, roles, or probabilities, in relation to each other. For example, the letters A through Z are all letters and have an order, first to last.

An integrated symbol system has relations among its representations. Representations have an order, values, roles, or probabilities, in relation to each other. For example, all sounds are syllables/phonemes, some sound syllables/phonemes are consonants, and consonants have different articulation types.

Integrated symbol systems give meaning (grounding) to symbols by assigning parameters to references. Parameters make a reference frame. For example, a parameter can have values good or bad, true or false, large or small, or multiple values. In the letter-grade reference frame, A is good, and F is bad. Meaning requires an integrated symbol system.
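The letter-grade reference frame above can be spelled out as data. The particular parameters (rank and valence) are illustrative choices:

```python
# A bare symbol system: letters with an order.
grades = ["A", "B", "C", "D", "F"]

# A reference frame grounds each symbol by assigning it parameter
# values; here a rank and a good/bad valence (illustrative cutoffs).
reference_frame = {
    symbol: {"rank": i, "valence": "good" if i <= 1 else "bad"}
    for i, symbol in enumerate(grades)
}
# "A" now means something via the frame: rank 0, valence "good".
```

Without the frame, "A" is just a mark; with it, the symbol is grounded, which is the sense in which meaning requires an integrated symbol system.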

4. Association

Association links two things. Some bases of association are location, time, shape, feature, property, process, function, state, structure, or substance.

Association can be sharing the same property value. For example, two things can have the same location. The sound a and letter A link language speaking and writing.

Association can be sharing the same property. For example, left and right associate along a horizontal direction.

Association can be groups. For example, objects can be near each other in a cluster. Objects can be near each other in time. Objects can be parts of a whole.

Association can be over evolution or development. For example, two advanced objects start from the same primitive object.

Symbols and representations can have associations.

Associations can make reference frames.

5. Categorization

Categories group things. Things that have the same or similar location, time, shape, feature, property, process, function, state, structure, or substance can be in a group. For example, the consonants form a category.

Categories are collections of objects. For example, the letters are in the alphabet. Categories are higher than objects.

Categories can have associations.

Categories can make reference frames.

6. Meaning

Symbols, representations, reference frames, integrated symbol systems, associations, and categories make a meaning relationship network, which has connections and hierarchy levels. Meaning is a node or link in a meaning relationship network.
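A meaning relationship network can be sketched as labeled links, with a node's meaning read off as its place among those links. The relation labels below are illustrative:

```python
# Meaning as a node's place in a relationship network: links carry
# relation labels, and a node's "meaning" is the set of its links.
links = [
    ("A", "is-a", "letter"),
    ("A", "precedes", "B"),
    ("A", "symbolizes", "highest grade"),
    ("letter", "part-of", "alphabet"),
]

def meaning(node):
    # Everything the network says about the node.
    return [(rel, other) for src, rel, other in links if src == node]
```

Here meaning("A") is not a definition but a position: the node's category, its order relation, and its reference, which together locate it in the network.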

7. Human, computer, and experiential languages

Human languages are complex integrated symbol systems, with symbols, symbol systems, reference frames, association, and categorization, so they can have meanings.

Computer languages are not integrated symbol systems, because they have no reference frame, so they cannot have meanings. Objects have relations but do not use representations. Computer code, input, output, datatypes, and constant and variable values are only series of off-on switch settings. Computer processing transforms switch settings. Computer languages are only formal, abstract, and syntactic. (However, human observers give datatypes and variables meaning when they design input, make programs, and observe output.)

Experiences have space as reference frame, and so make an integrated symbol system, with meaning.

Knowledge, Perception, Cognition, Meaning, and Behavior

Vision physiology builds knowledge, perception, cognition, and meaning, for use in behavior:

Distances, angles, lengths

Radial distances in space directions, absolute and relative to eye, head, and body

Brightness, hue, saturation, and color; lightness, temperature, strength, vividness, depth, and glossiness; and color categories and quantities: black, white, yellow, blue, red, and green (four hue categories and two non-hue categories) and their properties

Points, surfaces, and regions, with orientation, finish (glossy, semi-glossy, or matte), material phase, mass, density, texture, depth, transparency, translucence, and opaqueness, plus overlapping, shadows, and occlusions

Light sources/radiations/emissions/reflections and their refractions and depths

Features, objects, scenes, and space, and feature and object spatial locations and configurations

Spherical (radial distance, azimuth, and elevation) and rectangular (right-left, up-down, and near-far) coordinate systems, including landmarks, with observer at coordinate origin

Relative (before, during, and after) and absolute time

Sequences and series

Motions (translations, rotations, and vibrations), flows, and body movements in egocentric and allocentric/geocentric space

Physical-property and chemical-property labels

Behaviors

Numbers, text, and languages

Purposes, functions, and activities in relation to self and other objects

Methods about actions (muscles) and emotions (glands)

Emotion, mood, attention, attraction/repulsion, signifiers, and all behavior influences, plus observation self or not-self designation

Observations, observer, and observation-observer relations and spatial models

Context, association, memory, and recall

Knowledge, perception, cognition, and meaning work together, use the same time and space, discriminate, categorize, and model physical properties, to make the bases for behaviors for actions/muscles and emotions for feelings/glands.

Neural-assembly attractors and repellers direct behavior.

Knowledge, perception, cognition, and meaning allow affective processes, which increase survival by avoiding problems and finding resources better.

Brains can control and make voluntary movements and movement sequences. Will is about space directions (movements) and experience motivations (movement reasons and agency).

Knowledge, perception, cognition, and meaning call attention and mark significance.

Knowledge, perception, cognition, and meaning have an observer and recognize self and not-self. Narratives are about observer moving through space, knowing itself and objects in space. Selves have interactions among all their parts and surroundings.

Section about Evolution and Development of Sensation and Perception

Evolution of Sensation and Perception

Sensation and perception evolved as organisms evolved from one-celled organisms to humans.

1. Protista/Protozoa

Stimulus Detection by Cell Membrane: Cell-membrane receptor molecules react to pressure, light, or chemicals.

Potential Difference across Cell Membrane: Cell-membrane ion channels actively transport ions across membrane, to make a concentration gradient that causes electric-voltage difference across cell membrane.

2. Marine metazoa

Glands: Mesoderm develops into glands, which release hormones to regulate cell metabolism (and into muscles). (Endoderm develops into digestive tract.)

Neurons: Ectoderm develops into sense receptors, neurons, and nerves (and into outer skin).

Nerve Impulse: Local stimulation can discharge local-cell-membrane electric potential, which discharges adjacent local-cell-membrane electric potential, and so on, so nerve impulse travels along nerve.

Nerve Excitation: Excitation raises membrane potential to make reaching impulse threshold easier, or to amplify stimuli.

Nerve Inhibition: Inhibition lowers membrane potential to damp competing weaker stimuli to leave stronger stimuli, or to more quickly damp neuron potential back to resting state (to allow timing).

Neuron Coordination: Sense receptors and neurons connect by membrane electrical and chemical connectors, allowing neuron coordination.

3. Bilateria

Bilateral Symmetry: Flatworms have symmetrical right and left sides and have front and back.

Ganglia: Neuron assemblies have functions.

4. Deuterostomes

Body Structure: Deuterostome embryos have enterocoelom; separate mouth, muscular gut, and anus; and circulatory system. Embryo inner tube opens to outside at anus, not head.

5. Chordata

Body Structure: Larval and adult stages have notochord and distinct heads, trunks, and tails.

Nervous System: Chordates have head ganglion, dorsal hollow nerve, and peripheral nerves.

Reflexes: Sense receptors send electrochemical signals to neurons that send electrochemical signals to muscle or gland cells, to make reflex arcs.

Interneurons for Excitation and Inhibition: Interneurons connect reflex arcs and other neuron pathways, allowing simultaneous mutual interactions, alternate pathways, and networks.

Association: Interneurons associate pathway neuron states with other-pathway neuron states. Simultaneous stimulation of associated neurons modifies membrane potentials and impulse thresholds.

Attention: Association allows input acknowledgement and so simple attention.

Circuits for Timing and Calculation: Association series build neuron circuits. Outside stimulation causes electrochemical signal flows and enzyme releases. Circuits have signal sequences and patterns whose flows can spread stimulus effects over time and space and calculate algorithms.

Receptor and Neuron Arrays for Feature Detection: Sense-receptor and neuron two-dimensional arrays detect spatial and temporal stimulus-intensity patterns, and so constancies, covariances, and contravariances over time and/or space, to find curvatures, edges, gradients, flows, and sense features.

Topographic Maps for Spatial and Temporal Locations and for Objects and Events: Neuron arrays are topographic, with spatial layouts similar to body surfaces and space. Electrochemical signals stay organized spatially and temporally and so carry information about spatial and temporal location. Topographic maps receive electrochemical-signal vector-field wave fronts, transform them using tensors, and output electrochemical-signal vector-field wave fronts that represent objects and events.

Memory for Excitation and Inhibition: Secondary neuron arrays, maps, and circuits store associative-learning memories.

Recall for Excitation and Inhibition: Secondary neuron arrays, maps, and circuits recall associative-learning memories, to inhibit or excite neuron arrays that control muscles and glands.

6. Vertebrates/fish

Motor Coordination: Hindbrain has motor cerebellum.

Sleeping and Waking: Hindbrain has sleep and wakefulness functions.

Sensation: Hindbrain and midbrain have sense ganglia.

Sensation Organization: Forebrain has vision occipital lobe, hearing-equilibrium temporal lobe, touch-temperature-motor parietal lobe, and smell frontal lobe.

Balance: Vestibular system maintains balance.

7. Fresh-water lobe-finned fish

Hearing: Eardrum helps amplify sound.

8. Amphibians

No New Perception Functions: Early amphibians had no new sense or nervous-system features.

9. Reptiles

Sensation Organization: Paleocortex has two cell layers.

Vision: Parietal eye detects infrared light.

10. Anapsids, diapsids, synapsids, pelycosaurs, and pristerognathids

No New Perception Functions: Early anapsids, diapsids, synapsids, pelycosaurs, and pristerognathids had no new nerve or sense features.

11. Therapsids

Warm-Blooded: Therapsids have thermoregulation.

Hearing: Outer ear has pinna.

12. Cynodonts, Eutheria, Tribosphenida, Monotremes, and Theria

No New Perception Functions: Early cynodonts, Eutheria, Tribosphenida, monotremes, and Theria had no new nerve or sense features.

13. Mammals

Stationary Space: Vision can maintain stationary three-dimensional space, with a fixed reference frame.

Sensation Organization: Neocortex has four cell layers.

Vision: Mammals can see brightness and color.

14. Insectivores

Vision: Face has a front, with eyes with two overlapping visual fields.

15. Primates, prosimians, and monkeys

No New Perception Functions: Early primates, prosimians, and monkeys had no new nerve or sense features.

16. Old World monkeys

Vision: Old World monkeys have trichromatic vision.

17. Apes

Self: Chimpanzees and humans over two years old can recognize themselves using mirror reflections and can use mirrors to look at themselves and reach body-surface locations.

18. Anthropoid apes

Planning and Prediction: Neocortex frontal lobes have better memory and space functions, with planning and prediction.

19. Hominins

Multisensory Sensation Coordination: Neocortex has six layers and has multisensory regions.

20. Humans

Sensation and Motor Coordination and Space: New associational cortex is for better perception-motion coordination and behaviors. Frontal lobes have better spatial organization.

Language: Neocortex has language areas. Parietal lobes have better communication.

Human Growth and Development of Sensation and Perception

Development is from embryo to fetus, newborn, infant, toddler, child, adolescent, and adult.

During development, humans pass through many different environments, so development has many possibilities. Environment (nurture) and genes (nature) have approximately equal importance [Carey, 1987] [Winick, 1978].

1. Growth and development

Growth and development begin at fertilization and involve cell proliferation and specialization that create and modify structures, functions, and behaviors.

In early development, source cells secrete peptide transcription factors (morphogens) that diffuse through all stem-cell tissue, making concentration gradients. Different concentrations cause different stem-cell chemical reactions that differentiate stem cells into more specialized cells: low concentrations affect only low-threshold reactions, while high concentrations affect low-threshold and high-threshold reactions.
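The threshold reading of a morphogen gradient can be sketched as follows. The exponential decay, the decay constant, and the two thresholds are illustrative assumptions, not measured values:

```python
import math

# Source cells at position 0 secrete a morphogen that decays with
# distance, making a concentration gradient. Each cell reads the local
# concentration against thresholds to choose a fate. The decay constant
# and thresholds are illustrative, not measured values.

def concentration(distance, decay=0.5):
    return math.exp(-decay * distance)

def fate(distance):
    c = concentration(distance)
    if c > 0.6:            # high: triggers low- and high-threshold reactions
        return "type-1"
    if c > 0.2:            # middle: triggers only low-threshold reactions
        return "type-2"
    return "type-3"        # low: no threshold reached

fates = [fate(d) for d in range(8)]
# One smooth gradient yields three discrete bands of cell types,
# ordered by distance from the source.
```

A single continuous gradient thus produces discrete, spatially ordered cell types, which is how different concentrations differentiate stem cells into more specialized cells.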

Later in development, in each brain region and sub-region, cells secrete signaling proteins (morphogens), making concentration gradients. Different concentrations cause different chemical reactions to make specialized cells.

Adult human bodies have 256 cell types and 10^15 cells.

2. Development stages

Human development passes through stages: embryo, fetus, newborn, infant, toddler, child, adolescent, and adult.

2.1. Embryo

By seven days after fertilization, cells that will make brain and spinal cord have rolled into a hollow cylinder (periventricular germ layer), multiplied, migrated outwards, and become neuroblasts.

From 7 days to 21 days, neuroblasts make a cell slab (cortical plate) in embryo upper layer, with two hemispheres around empty regions (ventricles).

From 14 days to 21 days, ectoderm above the notochord makes first neural plate, then neural groove, then neural folds, and then neural tube. Neural tube has inner ventricular zone (ependyma) around central ventricle, intermediate-zone (mantle-layer) gray matter, and marginal-zone (pia) white matter. Ependyma becomes neurons and glia. Mantle layer becomes dorsal sense neurons (alar plate) and ventral motor neurons (basal plate). Pia is myelinated axons.

Mesoderm (neural crest) lies next to neural tube and makes adrenal medulla, sympathetic ganglia, and dorsal-root ganglia.

Genes direct nervous-system development:

After head and tail develop, proneural genes code transcription factors that make neural precursor cells to start brain development.

Neurogenic genes make cell-to-cell signal proteins for cell adhesion, signal transduction, membrane channels, and transcription factors for differential cell proliferation and specialization.

Selector genes make neuron types.

Neurotrophic genes secrete neurotrophic factors that keep neurons alive, differentiate neurons, and make neurotransmitters.

Neurotrophic-factor receptor genes code neurotrophic-factor receptor proteins.

By 21 days, brain interneurons have sent axons down brainstem to spinal cord and up to forebrain. Axons transmit messenger chemicals to other neurons to integrate central nervous system. Medulla oblongata appears at neural-tube first flexure.

By 28 days, forebrain telencephalon and diencephalon, midbrain, and hindbrain pons and medulla have differentiated. Guided by chemical and electrical signals (with no learning), brainstem and spinal-cord ventral motor cells have grown axons to trunk, limb, and viscera muscles. Embryo has first coordinated movements.

By 32 days, pons and cerebellum have appeared at neural-tube third flexure. Spinal ganglia begin forming.

By 35 days, paleocortex, cerebral medulla, and basal ganglia have appeared at neural-tube fourth flexure. Sensory nerve tracts begin, as epithelial and head sensory receptors project to brainstem and spinal-cord dorsal half. Embryo has first sensory information.

By 42 days, arms and legs have appeared. Sympathetic ganglia start to form segmental masses. Hypothalamus and epithalamus begin. Cerebral hemispheres start. Embryo has first reflex arcs.

By 49 days, thalamus, corpus striatum, and hippocampus begin. Embryo has first spatial and temporal information.

By 52 days, spinal reflexes work, and limbic lobe begins. Embryo has first emotional, success, and goal information.

By 56 days, brainstem has many projections into cerebral cortex to guide cortical-neuron migration and differentiation. Embryo has first whole-body motor and sensory integration.

2.2. Fetus

By 63 days (two months), anterior commissure appears.

By 66 days, hippocampal commissure appears.

By 70 days, corpus callosum appears. Fetus has first right-left motor and sensory integration.

By 74 days, spinal cord has internal structure. Neocortex parietal lobe starts. Fetus has localized movements.

By 84 days, cortical layers five and six appear. Fetus has first high-level motor and sensory coordination.

By 88 days, spinal cord has organized. Connections from neocortex to hippocampus start. Fetus has first high-level spatial and temporal information coordination.

By 98 days, long sensory tracts have appeared in spinal cord. Flocculonodular lobe has appeared. Fetus has first balance, position, and position-change information.

By 112 days, spinal-cord ventral-root myelination begins. Cerebellar vermis is in position. Corpora quadrigemina have appeared. Neocortex has first layering stage. Parietal and frontal lobes have separated. Fetus has first spatial maps and temporal sequences.

By 126 days (four months), occipital lobe and temporal lobe have separated. Fetus has first high-level vision and hearing.

By 136 days, tract myelination from spinal cord to cerebellum and pons begins, and dorsal-root myelination begins. Fetus has first sensory-motor coordination.

By 140 days, inner neocortex layers have matured, and pyramidal tracts from cortex begin. Fetus has voluntary movements and moves eyes. Fetus has REM sleep, indicating dreaming.

By 156 days (five months), outer neocortical layers have matured. Fetus has high-level control of motor and sensory processing. Fetus can have pain.

By 170 days, brain has all cortical layers. Ventral commissure has myelinated. Cranial nerves have myelinated through midbrain. All cerebral-hemisphere commissures are complete. Some left and right temporal-lobe and parietal-lobe regions have become asymmetric, so fetus has first handedness.

By 196 days, spinal cord to thalamus tract has myelinated through midbrain, and spinal cord to cerebellar vermis tract has myelinated. Cerebellum has final configuration. Cerebral convolutions and fissures start. Fetus has integrated motor and sensory processing.

By 224 days, secondary and tertiary brain sulci start.

2.3. Newborn

Almost all central-nervous-system neurons are present at birth, and most brain connections are present. Cell dendrites are still growing, and synapse number is still increasing. (During maturation, 80% of neurons die.)

Brainstem, thalamus, amygdala, and deep cerebellum are active.

All reflex arcs are active. Newborns can cry, suck, swallow, thrash, yell, cough, vomit, lift chin while lying on stomach, smack lips, chew on fingers, flex limbs, extend limbs, creep, shiver, jerk, arch back, draw in stomach, twist, and turn in touched-cheek direction. If spoken to, newborns can smile, coo, and make hand gestures.

All senses are present:

Newborns can have pain.

In first few days, newborns can distinguish people by odor.

Newborns react to loud sounds. If newborns are alert, high sound frequencies cause freezing, but low ones soothe crying and increase motor activity. Rhythmic sounds quiet newborns.

Newborns have wide-open eyes that can move stepwise to fixate on bright places or to track stimuli in motion. Gaze focuses only at 20 centimeters away. Newborns can follow slowly moving lights, react to brightness changes, and turn away from light stimuli. Newborns have blurry vision but can distinguish dark and light.

Newborns can learn and have short-term memory of times and locations.

Newborns have emotions and express puzzlement, surprise, pleasure, and displeasure. They can try to avoid experiences.

2.4. Infant

In the first four months, infants look only at positions (because they do not know size, shape, color, or objects). Eyes and head can follow object position changes, so infants have vision and muscle coordination.

From 1 to 4 months, infants have undifferentiated fear reactions to light flashes, fast object-position changes, noises, and falling, so infants have elementary perceptions.

From 1 to 6 months, infants have undifferentiated excitement (associated with tension or need), undifferentiated relaxed quiescence, or no emotion, so they coordinate vision and emotions.

By 1.5 months, infants can change pupil size, so they can account for light intensity. They can yawn, frown, sneeze, and close hand.

By 2 to 3 months, eye accommodation begins, and infants have good visual attention. Two-to-three-month-old babies can distinguish red and then green.

By 3 months, infants look back and forth between objects and hands and have good visually directed arm movements. They can put objects or hands in mouth.

By 3 to 4 months, infants have eye convergence and eye coordination and know depth and orientation, so they perceive three-dimensional space. They have size constancy, shape constancy, completion, motion parallax, and binocular parallax.

By 4 months, infants perceive size, shape, and color and have object perception. Infants never confuse objects with themselves. They can distinguish human faces and so realize whether people are family or strangers. Four-month-old babies can distinguish blue and yellow.

They have undifferentiated fear reactions to strange objects and to people and animals associated with pain, and so coordinate vision and pain perceptions.

By 5 months, babies have clear vision and can distinguish colors similarly to adults. At that time, they also have the beginnings of consciousness and memory.

By 6 months, most synapses have developed. Neocortex layers one, two, and three have myelinated. Frontal lobes have become active, and theta waves have appeared. Infants can place objects in three-dimensional space.

By 7 months, delta and alpha waves have appeared. Infants can give meaning to perceptions.

By 7.5 months, brain-wave pattern is continuously present. Infants can integrate perceptions.

By 9 months, layer 3c and layer 4 cortical neurons are functional. Sensory tracts between thalamus and primary sensorimotor cortex, and motor tracts between motor cortex and cerebellum, are functional. Thalamus and lower gyrus-cinguli layers are in intraorganismic short circuit.

Brain-wave waking and sleeping patterns are like adult ones.

Infants first predict others' movements and realize that people are pointing at or looking at something, so they can follow others' eye direction toward distant objects. Infants first know the relation of objects to self (intentionality) and have a body image.

By 9 to 12 months, infants can reach everywhere in space without looking, so they have a cognitive map.

By 12 months, pyramidal tract has myelinated. Infants can accurately reach for objects and are aware that objects exist even when not seen, so they have a complete cognitive map and can use symbols. Visual cortex has complete space and color information. Infants understand household-object names, so they coordinate language and perception.

By 14 to 18 months, meaningful speech begins, and infants can make two-word phrases.

By 18 months, pre-frontal lobes and language areas become active. Self-consciousness begins. Visual-cortex structures and functions for sensations have begun.

2.5. Toddler

By 24 months, toddlers have 75% of adult brain, and cerebellum is 80% of adult weight. Cortex layer 3b, thalamus and association-area tracts, and association-area outer sections are functional. Fissures between brain lobes are present. Vertical exogenous fibers connect cortex with subcortex. Subcortical association fibers, layer-one tangential fibers, and horizontal exogenous fibers develop. Visual-cortex structures and functions for sensations are complete.

Toddlers use names plus verbs (grammar), so they know that things have actions. Toddlers use words that refer to self ("I", "my", "mine", "me", and "myself"), so they have good self-consciousness. Personality begins.

By 24 to 36 months, toddlers learn tense, number, and other word-association rules and build simple sentences from words. Toddlers express locations, desires, goals, feelings, and thoughts, so they have integrated self and world.

By 36 months, hippocampus becomes mature. Toddlers can remember sensations and have first long-term memory. They begin to know about minds and mental states. They cannot yet distinguish appearance from reality.

By 36 to 48 months, toddlers can compare and reason, can connect sentences into paragraphs, can fantasize and pretend, and have first value systems.

2.6. Child

By 4 to 7 years, according to Piaget, children can use intuitive thought, make classes, realize that objects belong in classes, and use quantifiers, but use no logic and are still centering. Children cannot yet use the interaction, or lack of interaction, between two variables to determine object properties, such as number, quantity, time, or space (conservation concept).

At 5 years, temporal-pole fissure is complete.

At 5 years, children can pick up tiny pellets and place them, have good balance, show handedness, act independently, dress themselves, use toilet alone, play alone unsupervised, and imitate activities. They can draw squares and triangles but not diamonds, trace long straight lines, and draw people.

At 5 years, children can organize memories, so they can recover from speaking interruptions. They can recall 4 to 5 numbers immediately, solve small problems, and talk and think to themselves. They have well-defined personalities.

At 6 years, frontal-lobe, parietal-lobe, and occipital-lobe fissurations are mostly complete. Brain is 90% of adult weight. Parietal-lobe and occipital-lobe surface areas are complete. Layer 2 is functional. Neural connections maximize at age 6.

At 6 years, children have 2500-word vocabularies, can read, and can count to 9.

At 6 years, children can throw balls well, go through mazes, and react to orientation changes, because they have learned up-down and right-left.

At 7 years, children can recreate mental states and reverse logical processes or transformations (reversibility concept).

At 7 to 8 years, children can understand metaphors. Speech articulation becomes as good as adults'.

At 7 to 11 years, according to Piaget, children can perform concrete operations (serialization), relate objects by scale or quantity, include objects in classes, relate parts to wholes, play using logical rules, use flexible thinking, consider other views, communicate extensively, make mental representations, know relations, and understand relations among relatives.

At 10 years, frontal-lobe surface area is complete, and all fissurations are complete.

At 10 years, children can recall 6 to 7 numbers immediately and can tell objects from outlines.

At 11 years, according to Piaget, children can perform formal operations, think abstractly, analyze, judge, reason, deduce, consider problems not related to reality, apply rules to things, and combine rules and generalize (combinative structure). They can think about thinking, possibilities, self, future, and things not in experience.

2.7. Adolescent

At 11 to 14 years, brain-wave pattern is like the adult one.

At 11 to 15 years, gonadotropin-releasing-hormone (GnRH) release from hypothalamus triggers puberty, causing pituitary to secrete hormones that affect testes or ovaries. Kiss-1-gene peptide (kisspeptin) activates GPR54, which also affects puberty.

At 12 years, layer two functions (except for prefrontal lobe). Alpha waves dominate EEG. Central association areas and frontal lobes can function. Reticular formation completes myelination at puberty.

At 12 to 14 years, adolescents understand metaphors.

At 13 years, cerebellum is at adult weight. Substantia-nigra dark pigmentation is complete.

At 14 years, prefrontal-lobe lower layers function. Cortex layers one and two function (except in prefrontal lobe). These layers affect association areas and short-and-long circuit reorganizations. Waking and sleeping EEGs show sudden transitions.

At 17 to 20 years, prefrontal-lobe layers three, four, five, and six function. Association areas are still developing.

At 18 years, myelin is still forming in reticular formation and association areas.

2.8. Adult

Speed and ability peak between late teens and late thirties and then gradually decline.

At age 35, all brain parts are complete. Frontal lobes have myelin.

3. Summary of development

Before one year old:

Three-month-olds seem to realize where things are in space, such as hands and feet, a music box on a cradle side, and people nearby. They can look at you, then turn away, and then look again.

Four-month-olds appear to see the world as stable and outside. They attend to some stimuli and ignore others.

Five-month-olds can recognize objects, like a bottle, and sounds, like people's voices and water running for the bottle.

Seven-month-olds know where a thing goes if it is moving or dragged.

Eight-month-olds can remember that pulling a cord made a light go on and off and that flowing faucet water means time for food.

Nine-month-olds can remember where something was, for example, where a toy is that had gone out of sight.

Ten-month-olds can remember in what direction a thing is.

These do not prove consciousness or awareness before one year old.

Between one year old and two years old:

One-year-olds know in and out, near and far, and open and closed, and can act these out.

One-and-a-third-year-olds appear to know that seen and/or remembered items are similar, such as doilies.

One-and-two-thirds-year-olds can say the names of the colors red and blue, and they realize that the parts of a doll's body correspond to the parts of a human body.

These do not prove consciousness or awareness before two years old.

Between two years old and three-and-a-half years old:

After two years old, toddlers have concepts of numbers. For example, if a father mentions that his shirt sleeve has one stripe, toddlers can notice that there is one stripe on the other sleeve, and say "two stripes". They can then say that their dress has no stripes.

Three-year-olds can put together a puzzle and memorize the pieces. When they see a sign, they can recognize some letters.

At three-and-a-half years old, when a father put stuffed animals in a car and said he was number 3, a girl could say that Strawberry was first, Snoopy was second, and she was third. Then she could say that the three were 1, 2, and 3. Three-and-a-half-year-olds can realize that 2 and 1 are 3 by counting books.

These do not prove consciousness or awareness between two years old and three-and-a-half years old.

Living-Thing Properties

Living organisms result from cell, organ, and organ system hierarchical structures and processes. They require energy and metabolism. They develop, grow, and have regulation and homeostasis. They have sensitivities and behaviors.

Living things:

Move (movement) (behavior) and have internal motions that transform parts.

Have a hierarchy of structures (hierarchical organization) and processes (metabolism).

Act in response to internal and external conditions and changes (sensitivity).

Build and copy structures and processes (reproduction).

Develop (development) and grow (growth).

Regulate processes (regulation).

Keep processes steady/constant/in-balance (homeostasis).

Gather and use energy (energy processing) and conserve or shed heat (energy balance).

How Vision Experiences Light, Brightness, and Color in Space

Experiencing programming constructs/synthesizes/experiences light/dark/brightness/color in space. Experiencing programming comes from "deep learning", which includes transforming, splitting, recombining, associating, interacting, analyzing, and synthesizing sense information and programming.

Experiencing programming first has information about space and color. Experiencing programming then uses graphical processing and spatial computation to perform analog/continuous three-dimensional geometric/spatial operations on its spatial datatypes of its space/color model in its three-dimensional registers, processors, memories, and information channels. Experiencing programming then observes/synthesizes all sense information to have experiences in space.

Experiencing programming has information about space and color

Experiencing programming transforms vision (and all other sense) input into information about space, color, and their parameters, coordinates, properties, and categories.

1. Opponent-process inputs transform to color and space information

For each space direction from coordinate origin, experiencing programming receives six inputs, from three independent opponent processes:

Spot luminance minus surround luminance

Surround luminance minus spot luminance

Spot blue-wavelength intensity minus yellow-wavelength intensity

Spot yellow-wavelength intensity minus blue-wavelength intensity

Spot red-wavelength intensity minus green-wavelength intensity

Spot green-wavelength intensity minus red-wavelength intensity

Note: For each of the three opponent-process pairs, if one luminance/intensity is greater than the other, one input is above baseline, and the other input is baseline. (If the two luminances/intensities are equal, both inputs are baseline.)
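The six opponent-process inputs and their baseline behavior can be sketched as a small computation. This is a minimal illustration, assuming a baseline of zero; the function names and value conventions are assumptions, not the document's terms.

```python
# Hypothetical sketch of the six opponent-process inputs described above.
# Assumption: baseline output is 0.0; the losing side of each pair stays
# at baseline, and equal inputs leave both sides at baseline.
BASELINE = 0.0

def opponent_pair(a, b):
    """Return (a-minus-b, b-minus-a); the smaller side stays at baseline."""
    return (max(a - b, BASELINE), max(b - a, BASELINE))

def opponent_inputs(spot_lum, surround_lum, blue, yellow, red, green):
    """Six inputs for one space direction, from three opponent processes."""
    white_black, black_white = opponent_pair(spot_lum, surround_lum)
    blue_yellow, yellow_blue = opponent_pair(blue, yellow)
    red_green, green_red = opponent_pair(red, green)
    return (white_black, black_white,
            blue_yellow, yellow_blue,
            red_green, green_red)
```

With equal spot and surround values, all six inputs sit at baseline, matching the note above.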

2. Information about space coordinates, categories, and properties

From opponent-process inputs, experiencing programming calculates information about space coordinates, categories, and properties. It knows about distances, directions, coordinates, locations, points, features, objects, scenes, boundaries, regions, spatial relations, space, space properties, space categories, and motions ("where" information).

3. Information about color coordinates, categories, and properties

From opponent-process inputs, experiencing programming calculates information about color coordinates, categories, and properties, including brightness, contrast, hue, saturation, color, and light ("what" information).

3.1. From color coordinates to information about six brightness/color categories

Experiencing programming makes the three opponent processes into three independent color-coordinates/gradients, one for brightness and two for hue, each with opposing/contrasting experiencing, optimized to match what vision does.

Experiencing programming analyzes and synthesizes the three brightness/hue color coordinates to maximize color-category contrasts and so define color categories at the extreme values of the coordinates: light/white and dark/black, not just one range of luminance from very low to very high; blue or yellow, not just one range of hue lightness from darker to lighter; and green or red, not just one range of hue temperature from cooler to warmer. Black and white have the greatest contrast between brightnesses. Blue and yellow have high contrast between brightnesses. Red and green have middling contrast between brightnesses.

Note: In physics, a gradient causes a flow. Perhaps experiencing programming makes neuron flows into sensory gradients, to use as sense coordinates.

3.2. From color coordinates to information about three brightness/color properties

Experiencing programming transforms (rotates and translates) the three (independent) brightness/hue color coordinates to construct three brightness/hue color properties:

Brightness property, with values ranging from dimmest/darkest to brightest/lightest (and with no information about hue)

Lightness, denseness, deepness, strength, dominance, and coverage hue property (hue lightness)

Activity, temperature, vividness, salience, texture, and danger hue property (hue temperature)

Experiencing programming maximizes color-property contrasts and so defines extreme values for color properties: dimmest to brightest color, darkest to lightest hue, and coolest to warmest hue. Hue and no hue have the greatest contrast. The brightness property has very high contrast between the extremes of dim and bright. The hue lightness property has high contrast between extremes of darkness and lightness. The hue temperature property has middling contrast between extremes of cool and warm.

The six main color-categories, and all distinct colors, have a unique set of values of the three color-properties.
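The unique property triples can be illustrated with a hypothetical table of (brightness, hue lightness, hue temperature) values for the six main color categories. All numeric values are assumptions chosen only to match the contrasts described above; `None` marks the absent hue properties of black and white.

```python
# Illustrative property triples (brightness, hue lightness, hue temperature).
# The numbers are assumptions, not measured values.
PROPERTIES = {
    "black":  (0.0, None, None),   # no hue; lowest brightness
    "white":  (1.0, None, None),   # no hue; highest brightness
    "blue":   (0.3, 0.2, 0.2),     # low brightness; dark, cool hue
    "red":    (0.5, 0.4, 0.9),     # medium brightness; warmest hue
    "green":  (0.7, 0.8, 0.3),     # high brightness; light, cool hue
    "yellow": (0.9, 0.95, 0.8),    # very high brightness; lightest, warm hue
}

# Each main color category has a unique triple, as the text requires.
assert len(set(PROPERTIES.values())) == len(PROPERTIES)
```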

4. Coherent color, space, and vision

Experiencing-programming color information evolves/develops to best discriminate brightnesses, hues and colors. Experiencing-programming space information evolves/develops to optimize surface boundaries, region differences (to sharpen region distinctions), and regularities (to create regions). Experiencing-programming space and color information evolves/develops to best discriminate among objects, classify objects, integrate features/objects/scenes/space from all viewpoints and perspectives, and define spatial relations.

Experiencing programming makes enough connections, associations, and relations to find the most probable vision configurations and construct coherent color and coherent space. Coherent vision includes information about light radiators/emitters/reflectors, light paths, and objects' shapes, surface textures, physical states, hardness, and other material properties. Note: In language learning, people eventually have a feeling of understanding/comprehension; similarly, people eventually achieve coherent vision perception. Also, just as people can learn any language, people can learn any sense.

5. Colors and their coordinates, categories, and properties

Vision has space and colors, with coordinates, categories, and properties.

Experiencing vision is about contrasts, so colors in space have contrasting dark/black or light/white, blue or yellow, and/or red or green and have contrasting properties. Experiencing programming constructs both degrees of light and degrees of dark, not just brightness from very low to very high, making contrasts.

Experiencing synthesizes space, brightness, hue, and saturation, so different brightnesses have different qualities: relatively low brightness is blacker, and relatively high brightness is whiter and/or more colorful. As another example, aural experiencing synthesizes space, loudness, tone, and harmonics, so different loudnesses have different qualities: relatively low loudness is a whisper, and relatively high loudness is a blare.

5.1. No-hue colors: black, white, and grays

Black has no hue, and grays and white have no net hue, so they all have no hue activity, temperature, vividness, salience, texture, or danger, and have no hue lightness, denseness, deepness, strength, dominance, or coverage. Black, grays, and white have only brightness.

Gray

Radiators/emitters/reflectors with equal spot and surround intensities, such as when you close your eyes in the dark or look at a blank wall up close with no shadow, have baseline output from both the white-black and the black-white opponencies, so people experience medium brightness and medium darkness. If the other opponencies all have baseline output, so there is no hue, people experience gray. Grays look like mixtures of black and white, because spot brightness is relative to surround brightness. Gray can mix with hues, black, and white; gray contrasts with black and white; and adding hue reduces gray.

Black

Radiators/emitters/reflectors with very low intensity compared to surround have very low output from white-black opponency (so people experience that brightness is very low and that there is no hue, light, or white) and have very high output from black-white opponency, so they look dark/black. Black is pure dark. (Cone outputs are so low that all hue opponent processes have only baseline outputs, so there is no hue and no hue lightness or temperature.) Spot black/dark is relative to higher adjacent-surface brightness, not absolute spot intensity, so, for example, the night sky and its ground lights and stars set up different-brightness adjacent surfaces for the black-white and white-black opponent processes, so the sky looks black and the lights look white.

Black can be dimmer, but is only slightly darker, with no change of character: it is closed-ended.

Lower relative brightness is darker. Darker is the same as added black, so, for example, darker hues are blacker. Less added black is not as dark.

Black contrasts with white, black can mix with hues and white, and adding hue or white reduces black/darkness.

Even for black, visible points emit/reflect a sheaf of same-phase light rays that the eye lens converges to a focus on the retina.

Why are the points on the path from a color/surface to the retina transparent, rather than black/dark? One light ray does go from the color/surface point along path points straight through the lens to the retina. However, different-phase light rays from the surrounding color/surface points leave the surface at angles and pass through the path points at angles. At each path point, electromagnetic-wave superposition of these different-phase waves produces interference in which the total wave amplitude sums to zero, so path points carry no net electromagnetic wave, making them transparent. By the same reasoning, all path points from color/surface to eye are transparent.

White

Radiators/emitters/reflectors with highest intensity compared to surround have highest output from white-black opponency (so people experience highest brightness, light, and, with no hue, white) and have lowest output from black-white opponency (so they have no dark/black). White is pure light. (Cone outputs are so high that all hue opponent processes have only baseline outputs, so there is no net hue and no hue lightness or temperature.) Spot white/light is relative to lower adjacent-surface brightness, not absolute spot intensity.

White can be so bright that it transcends white to be dazzling: it is open-ended.

Higher relative brightness is lighter. Lighter is the same as added hue or white, so, for example, lighter hues are brighter or whiter. Less added white is not as light.

White contrasts with black, white can mix with hues and black, and reducing hue reduces white/lightness.

Note: Spot brightness depends on both the black-white and white-black opponent-process outputs, so black and white are relative. With no hue, if a point has medium brightness and its adjacent surface has higher brightness, the point looks dark and has black. If the same point is put adjacent to a surface with lower brightness, the point looks light and has white.

5.2. Hues: blue, red, green, and yellow

Hues have brightness; hue activity, temperature, vividness, salience, texture, and danger; and hue lightness, denseness, deepness, strength, dominance, and coverage.

Blue, red, green, and yellow, in that order of increasing brightness, make ranges of brightness between black and white.

People experience blue, cyan, and green as similar, because they are hues with low activity, coolness, dullness, recession, smoothness, and soothing. People experience red, orange, and yellow as similar, because they are hues with high activity, warmness, vividness, salience, roughness, and warning.

People experience blue, magenta, and red as similar, because they are hues with darkness, denseness, deepness, strength, dominance, and coverage. People experience green, chartreuse, and yellow as similar, because they are hues with lightness, sparseness, shallowness, weakness, recessiveness, and transparency.

Hue-lightness and hue-activity determine hue. For example, warmest temperature and medium lightness determine red. As another example, hue-lightness and hue-activity values between the values of red and yellow determine orange (which synthesizes red and yellow).
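A minimal sketch of hue determination from the two hue properties, assuming illustrative anchor coordinates for the four hues; nearest-anchor classification here stands in for whatever process vision actually uses.

```python
# Hypothetical (hue-lightness, hue-temperature) anchors; values are assumptions.
HUE_ANCHORS = {
    "blue":   (0.2, 0.2),
    "red":    (0.4, 0.9),
    "green":  (0.8, 0.3),
    "yellow": (0.95, 0.8),
}

def classify_hue(lightness, temperature):
    """Return the anchor hue nearest to the given property values."""
    def dist2(name):
        al, at = HUE_ANCHORS[name]
        return (al - lightness) ** 2 + (at - temperature) ** 2
    return min(HUE_ANCHORS, key=dist2)

assert classify_hue(0.4, 0.9) == "red"   # warmest temperature, medium lightness
```

Coordinates between red's and yellow's anchors fall nearest one of the two, consistent with the text's claim that orange synthesizes red and yellow.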

Note: The complementary colors blue and yellow, and red and green, have opposite hue activity, temperature, vividness, salience, texture, and danger, and have opposite hue lightness, denseness, deepness, strength, dominance, and coverage. Their equal mixtures have no net hue and no hue properties.

Blue

At their highest intensity, blue-wavelength radiator/emitter/reflector experiencing has low brightness, low hue lightness/high hue darkness, and low hue vividness/high hue dullness. Both hue properties contrast with yellow's hue properties. The low lightness contrasts with green high lightness but is like red medium lightness. The low vividness contrasts with red high vividness but is like green low vividness. Blue can mix with red, green, black, and white; blue contrasts with yellow; adding yellow reduces hue; and blue and green have a similarity. For hues, blue has shortest brightness range of mixtures with black and longest brightness range of mixtures with white.

Red

At their highest intensity, red-wavelength radiator/emitter/reflector experiencing has medium brightness, medium hue lightness/medium hue darkness, and high hue vividness/low hue dullness. Both properties contrast with green's properties. The medium lightness contrasts with yellow high lightness but is like blue low lightness. The high vividness contrasts with blue low vividness but is like yellow high vividness. Red can mix with blue, yellow, black, and white; red contrasts with green; adding green reduces hue; and red and yellow have a similarity. For hues, red has short brightness range of mixtures with black and long brightness range of mixtures with white.

Green

At their highest intensity, green-wavelength radiator/emitter/reflector experiencing has high brightness, high hue lightness/low hue darkness, and low hue vividness/high hue dullness. Both properties contrast with red's properties. The high lightness contrasts with blue low lightness but is like yellow high lightness. The low vividness contrasts with yellow high vividness but is like blue low vividness. Green can mix with blue, yellow, black, and white; green contrasts with red; adding red reduces hue; and green and blue have a similarity. For hues, green has long brightness range of mixtures with black and short brightness range of mixtures with white.

Yellow

At their highest intensity, yellow-wavelength radiator/emitter/reflector experiencing has very high brightness, very high hue lightness/very low hue darkness, and high hue vividness/low hue dullness. Both properties contrast with blue's properties. The high lightness contrasts with red medium lightness but is like green high lightness. The high vividness contrasts with green low vividness but is like red high vividness. Yellow can mix with red, green, black, and white; yellow contrasts with blue; adding blue reduces hue; and yellow and red have a similarity. For hues, yellow has longest brightness range of mixtures with black and shortest brightness range of mixtures with white.

5.3. Color mixtures

Each distinct brightness/color has a unique set of amounts of the six main colors. As examples, maroon (dark red) synthesizes red and black, and light orange synthesizes white, red, and yellow.

Each distinct brightness/color is a unique set of values of the three color-properties. For example, maroon (dark red) has darkness, warm temperature, and medium lightness. As another example, light orange has brightness, hue-lightness, and hue-activity values between the values of light red and light yellow.
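The mixture claims can be sketched as weighted averaging of property triples. The averaging rule and the numeric triples are assumptions for illustration only; the document does not specify how property values combine.

```python
# Hypothetical sketch: a mixture's property triple as the weighted average
# of its components' triples. Weights and values are assumptions.
def mix(components):
    """components: list of (property_triple, weight); returns a blended triple."""
    total = sum(w for _, w in components)
    return tuple(sum(p[i] * w for p, w in components) / total
                 for i in range(3))

# Illustrative (brightness, hue lightness, hue temperature) values.
RED   = (0.5, 0.4, 0.9)
BLACK = (0.0, 0.0, 0.0)

maroon = mix([(RED, 1.0), (BLACK, 1.0)])  # dark red: red plus black
```

The blend keeps red's warm temperature while pulling brightness and lightness down, matching the description of maroon as dark red.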

5.4. There is only one color-system

Black and white, blue and yellow, and red and green are asymmetric, not opposites, so that there is only one system of colors, with no possible exchanges. Therefore, there is only one system of color properties.

Experiencing programming is spatial computation on spatial datatypes of a space model

Experiencing programming has a central processor (with central registers, memories, and channels) for a macroscopic spatial programming language that can read and write from/to all brain macroscopic regions to connect/integrate macroscopic brain activities and so coordinate all senses and actions using space/sensations as the medium.

Spatial-intensity/quality computation/graphical-processing performs analog/continuous three-dimensional geometric/spatial operations on light/brightness/color spatial datatypes of a space/light/brightness/color model in its three-dimensional registers, processors, memories, and information channels.

1. Model and datatypes

Using perceptual knowledge about, and analysis of, space/relations, brightness/intensity, color/quality, and light/rays (and their coordinates, categories, and properties), experiencing programming constructs a computational/virtual continuous three-dimensional "space/light/brightness/color model" using "light/brightness/color spatial datatypes" that are macroscopic "vision spatial-intensities/qualities" emitting/reflecting/radiating (transferring forms of energy, because sense receptors absorb energy) from surfaces at distances in directions from coordinate origin.

1.1. Light/brightness/color spatial datatypes

Spatial-intensity/quality computation constructs computational/virtual, continuous, three-dimensional, macroscopic light/brightness/color spatial datatypes, with vision spatial-intensities/qualities/light emitting/reflecting/radiating from surfaces at distances in directions from coordinate origin, for the space/light/brightness/color model.

Datatypes merge/unify brightness/hue/color/"what" and space/spatial relations/"where" information, with information about surface brightness, hue, color, location, orientation, and physical properties (such as hardness, texture, and physical state).

1.2. Space/light/brightness/color model/register/processor/memory

Spatial-intensity/quality computation constructs a computational/virtual continuous three-dimensional space/light/brightness/color model, with a sheaf, from coordinate origin, of light/brightness/color spatial datatypes.

During development, experiencing programming "twins" three-dimensional perceptions, motions, and space to help construct the space/light/brightness/color model.

The model has continuous spherical/egocentric and rectangular/allocentric coordinate systems, with the same coordinate origin/viewpoint. The model has categories of directions (such as right, left, up, down) and distances (such as near, far). Note: To model infinite space, perhaps brain coding uses hyperbolic space, so ever smaller brain-coordinate differences represent ever greater physical distances.

2. Model/register/processor/memory

The model is its own computational/virtual continuous three-dimensional register, processor, and memory, with three-dimensional information channels, for processing computational/virtual spatial datatypes.

The model/register/processor/memory is continuous, with no bits or bytes. Note: Whereas current computers have 64-bit processors, underlying brain processes could use the equivalent of hundreds, thousands, or millions of bits, or even variable numbers of bits.

3. Experiencing programming works with model points, features, objects, and scenes

Collections/extensions of color/brightness/light spatial datatypes model perceptual features and objects, such as light sources and reflectors and their materials, physical states, densities, and textures.

Collections/extensions of perceptual objects model scenes, such as skies, waters, grounds, forests, and meadows.

Points, features, objects, and scenes have locations, directions, distances, and spatial and temporal relations, as well as adjacency, overlapping, nesting, and boundaries. (Perhaps experiencing programming makes a network of spatial relations by cross-referencing. Perceptions are at network nodes, and space is network connections.)

The model indexes objects, spatial relations, and temporal relations.

The model labels figure and ground, foreground and background, marked and not-marked, salient and not-salient, and significant and not-significant.

Experiencing-programming geometric/spatial surface and region operations

Spatial-intensity/quality computation on point, line, and surface spatial datatypes performs geometric/spatial operations. Experiencing programming geometrically/spatially calculates surface boundaries and regions, compares boundaries and regions for similarity and congruency, splits regions into features, merges regions and joins features to make objects, marks regions for attention and importance, orients/aligns, and accounts for temporal and spatial binding. Spatial-intensity/quality computation can spatially/geometrically add vectors, make projections, find perpendiculars, extrapolate, and interpolate, and so calculate/represent distances, directions, and angles spatially/geometrically.

1. Adding vectors

To add vectors, spatial-intensity/quality computation adds line segments.
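Head-to-tail segment addition can be sketched in coordinates; the coordinate representation is an assumption, since the text describes the operation as purely geometric.

```python
# Sketch of vector addition as head-to-tail line-segment addition in 3-D.
def add_segments(v1, v2):
    """Place v2's tail at v1's head; the sum runs from origin to v2's head."""
    return tuple(a + b for a, b in zip(v1, v2))

assert add_segments((1, 2, 0), (3, -1, 4)) == (4, 1, 4)
```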

2. Projection and dot product

Projection moves points linearly from original figure onto projection line, surface, or volume. To perform projections (find dot products), spatial-intensity/quality computation uses two intersecting lines and a line segment on one line. Lines from line-segment endpoints intersect the second line at right angles, to make two right triangles that share the point where the two lines intersect. Using line-segment scaling, spatial-intensity/quality computation finds the length of the line segment between the right-angle vertices on the second line.
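The construction above can be sketched numerically: the scalar projection is the shadow length between the right-angle vertices on the second line, and the dot product scales it by the second segment's length. The coordinate representation is an assumption.

```python
import math

# Sketch of projection as the geometric construction described above:
# drop perpendiculars from the segment endpoints onto the second line.
def scalar_projection(segment, line_dir):
    """Length of the segment's shadow on a line with direction line_dir."""
    norm = math.sqrt(sum(c * c for c in line_dir))
    unit = tuple(c / norm for c in line_dir)
    return sum(s * u for s, u in zip(segment, unit))

def dot(a, b):
    """Dot product = (projection of a onto b) * |b|."""
    return scalar_projection(a, b) * math.sqrt(sum(c * c for c in b))
```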

3. Perpendicular and cross product

Two vectors can share a perpendicular vector. To find the perpendicular (cross product), spatial-intensity/quality computation uses two intersecting line segments, with known lengths a and b, and with angle A between them. It uses line-segment scaling to find the product a * b. It uses similar-triangle corresponding-side ratios to find the sine of the angle, sin(A). Length = a * b * sin(A). At the intersection, it constructs a segment of that length perpendicular to both line segments.
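The length formula can be verified against the component cross product. A sketch under assumed example vectors: the constructed length a * b * sin(A) matches the magnitude of the component-formula result.

```python
import math

# Sketch: verify that the constructed perpendicular's length
# a * b * sin(A) equals the component cross-product magnitude.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (2.0, 0.0, 0.0), (1.0, 3.0, 0.0)
a = math.sqrt(sum(c * c for c in u))
b = math.sqrt(sum(c * c for c in v))
cosA = sum(x * y for x, y in zip(u, v)) / (a * b)
length = a * b * math.sin(math.acos(cosA))    # the text's construction
w = cross(u, v)                               # component formula
print(length, math.sqrt(sum(c * c for c in w)))  # both 6.0
```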

4. Geometric product

To calculate multivector geometric product, spatial-intensity/quality computation uses the sum of the dot product and cross product.
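For two vectors, the geometric product decomposes into a symmetric (dot) part and an antisymmetric part; the text identifies the latter with the cross product, and in two dimensions it is a single bivector coefficient (the wedge product). The tuple representation below is an illustrative assumption, not the source's datatype.

```python
# Hedged sketch: geometric product of two 2D vectors as
# (scalar dot part, bivector wedge part).

def geometric_product(u, v):
    dot = u[0] * v[0] + u[1] * v[1]      # symmetric, scalar part
    wedge = u[0] * v[1] - u[1] * v[0]    # antisymmetric, bivector part
    return (dot, wedge)

print(geometric_product((1.0, 0.0), (0.0, 1.0)))  # (0.0, 1.0): e1 e2 = e12
print(geometric_product((1.0, 0.0), (1.0, 0.0)))  # (1.0, 0.0): e1 e1 = 1
```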

5. Extrapolation/interpolation

Spatial-intensity/quality computation can extrapolate and interpolate, including filling in. As examples, it extrapolates to find surfaces at distances in directions and interpolates to find points in surfaces at distances in directions.
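The two examples above can be sketched directly: extrapolating a point at a distance along a direction, and interpolating (filling in) between two known points. The helper names are assumptions for illustration.

```python
# Minimal sketch: extrapolation along a direction, and interpolation
# (filling in) between two known points.

def along(origin, direction, distance):
    """Point at the given distance from origin along a unit direction."""
    return tuple(o + distance * d for o, d in zip(origin, direction))

def interpolate(p, q, t):
    """Fill in a point a fraction t of the way from p to q (0 <= t <= 1)."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

print(along((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 5.0))        # (0.0, 0.0, 5.0)
print(interpolate((0.0, 0.0, 0.0), (2.0, 4.0, 6.0), 0.5))  # (1.0, 2.0, 3.0)
```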

Experiencing-programming spatial mathematical operations

Spatial-intensity/quality computation on point, line, and surface spatial datatypes performs spatial mathematical operations.

1. Arithmetic operations

Spatial-intensity/quality computation performs arithmetic operations geometrically/spatially.

To add (and subtract), it adds line segments.

To multiply (and divide), it scales line segments. For example, it uses similar triangles to find length ratios. Two right triangles are similar triangles, with all corresponding sides having the same ratio of side lengths. In the first right triangle, the vertical side has unit length, and the horizontal side has length b. In the second right triangle, the vertical side has length a, and the horizontal side has unknown length x. For the two triangles, the vertical-side length ratio is a/1, and the horizontal-side length ratio is x/b = a/1, so the second-triangle horizontal side has length a*b.
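The similar-triangle construction can be traced in a few lines. A sketch under the text's setup (unit vertical side and horizontal side b, scaled so the vertical side becomes a):

```python
# Sketch of the similar-triangle multiplication in the text: scaling a
# right triangle with vertical side 1 and horizontal side b so that the
# vertical side becomes a makes the horizontal side a * b.

def multiply_by_similar_triangles(a, b):
    # The two triangles are similar, so every side scales by a / 1;
    # the unknown horizontal side x satisfies x / b = a / 1.
    ratio = a / 1.0
    return ratio * b

print(multiply_by_similar_triangles(3.0, 4.0))  # 12.0
```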

2. Set operations

Spatial-intensity/quality computation performs set operations geometrically/spatially. Sets (such as empty and universal sets) are spatial. The negation, union, and intersection set operations are spatial operations. Also, set operations are equivalent to arithmetic operations:

0 is the empty set.

1 is the universal set.

Adding adds sets.

Subtracting subtracts sets.

Multiplying and dividing are multiple additions or subtractions.

3. Logical operations

Spatial-intensity/quality computation does logical operations geometrically/spatially, because logical operations are equivalent to set operations:

Logical negation relates to complementary set (and Boolean-algebra negation).

Logical conjunction relates to set intersection (and Boolean-algebra subtraction).

Logical disjunction relates to set union (and Boolean-algebra addition).

Conditionals are equivalent to NOT(p AND NOT q).
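The logic-to-set equivalences above can be checked with Python sets over a small universe; the universe and the extensions of p and q are arbitrary assumptions for illustration. The conditional p -> q is computed as NOT(p AND NOT q), as the text states.

```python
# Sketch: logical operations as set operations over a small universe.

universe = set(range(8))
p = {0, 1, 2, 3}   # extension of proposition p
q = {2, 3, 4, 5}   # extension of proposition q

negation = universe - p                          # complement <-> NOT
conjunction = p & q                              # intersection <-> AND
disjunction = p | q                              # union <-> OR
conditional = universe - (p & (universe - q))    # NOT(p AND NOT q) <-> p -> q

print(conditional)  # everything except the p-only elements 0 and 1
```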

4. Algebraic, calculus, and linguistic (string) operations

Using arithmetic, set, logic, and geometric operations, spatial-intensity/quality computation can perform all algebraic, calculus, and linguistic/"string" operations spatially.

Experiencing-programming three-dimensional information transfer

Spatial-intensity/quality computation on point, line, and surface spatial datatypes performs three-dimensional information transfers. Spatial-intensity/quality computation computationally/virtually writes/reads three-dimensional spatial-configuration information to/from three-dimensional registers, processors, and memories, along three-dimensional information channels. Spatial-intensity/quality computation uses synchronized (same frequency and phase) firing to group all points of spatial objects. Signal flows have a longitudinal component, with a cross-section, and can have a transverse component, with a longitudinal section.

Three-dimensional information transfer can use stacking, skewing, or interleaving.

1. Stacking three-dimensional information flows

At each time step, stacking puts the contents of the three-dimensional register into the flow, so the flow is a series of three-dimensional-register contents. The left side of the illustration (Figure 7) shows a 27-point three-dimensional register, with a vector. Information flow sends the register contents down the information channel, one step at a time.

The vector does not change. The register's and the flow's cross-sections have the same dimensions.

The receiving register's contents are the same as the sending register's contents.

2. Skewing three-dimensional information flows

At each time step, skewing puts the contents of the three-dimensional register into a two-dimensional layer one element thick (Figure 8). The register's left vertical plane is the layer's left side, the register's middle vertical plane is the layer's middle, and the register's right vertical plane is the layer's right side.

The vector changes. The register's and the flow's cross-sections have different dimensions.

Reversing the above process makes the receiving register's contents the same as the sending register's contents. Alternatively, the receiving register's contents can be the same as the information channel's contents.

3. Interleaving three-dimensional information flows

At each time step, interleaving puts the contents of the three-dimensional register into a two-dimensional layer one element thick (Figure 9). The register's top horizontal plane is the layer's left side, the register's middle horizontal plane is the layer's middle, and the register's bottom horizontal plane is the layer's right side.

The vector changes. The register's and the flow's cross-sections have different dimensions.

Reversing the above process makes the receiving register's contents the same as the sending register's contents. Alternatively, the receiving register's contents can be the same as the information channel's contents.
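The transfer schemes above can be sketched with nested lists for a 27-point register. The layer layouts are assumptions (the figures are not reproduced here); interleaving works the same way with horizontal planes in place of vertical planes.

```python
# Hedged sketch of stacking and skewing for a 3x3x3 register,
# indexed register[z][y][x]; cell values are arbitrary.

register = [[[x + 3 * y + 9 * z for x in range(3)]
             for y in range(3)] for z in range(3)]

# Stacking: send the register unchanged, one 3x3 cross-section per step.
stacked_flow = [register[z] for z in range(3)]

# Skewing: each step sends one vertical (fixed-x) plane as a thin layer.
skewed_flow = [[[register[z][y][x] for y in range(3)] for z in range(3)]
               for x in range(3)]

# Reversing the skew recovers the sending register's contents.
restored = [[[skewed_flow[x][z][y] for x in range(3)]
             for y in range(3)] for z in range(3)]

print(restored == register)  # True
```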

Experiencing-programming spatial motion operations

Spatial-intensity/quality computation performs analog/continuous three-dimensional geometric/spatial motion operations.

1. Experiencing programming tracks model motions and flows

Experiencing programming has motion methods/procedures, defined by input/output interfaces, to model feature/object/scene motions.

Experiencing programming geometrically/spatially tracks/marks/labels motions and flows.

Experiencing programming geometrically/spatially models projections, translations, rotations, transformations, vibrations, expansions, contractions, compressions, torsions, curving, scaling (zooming), shape changes, trajectories, axial and transverse flows, emissions, reflections, transmissions, refractions, and accelerations, including eye, head, and body motions.

Note: Smooth motions and trajectories occur only in three-dimensional space (and require one-dimensional time).

2. Experiencing programming maintains space/light/brightness/color-model stationary space

Experiencing programming maintains space/light/brightness/color-model stationary space using reverse coordinate transformations, around coordinate origin, to cancel eye, head, and body motions.

Experiencing-programming meta-model for all senses

Experiencing programming contrasts/differentiates/splits, as well as correlates/integrates/fuses, all senses and their spatial computations, spatial-intensities/qualities (forms of energy transfers), spatial datatypes, and models. Contrasts separate and distinguish, by finding and imposing boundaries. Experiencing programming maximizes contrasts among senses by evolving/developing sense spatial-datatype spatial-intensities/qualities, in the computational/virtual space/light/brightness/color model, away from each other toward extremes that express sense coordinates, properties, and categories, and by evolving/developing each sense's spatial-datatype subtypes away from each other toward extremes that make, for example, the colors clear and distinct. Experiencing programming correlates/integrates by unifying/synthesizing space (from model); brightness, loudness, and so on (from sense spatial-intensities); and color, sound, and so on (from sense spatial-qualities). Experiencing in space unifies all senses while making them clear and distinct.

Experiencing programming constructs a meta-model that unifies all sense and space experiencing. Experiencing programming and the meta-model know what modeling is, what space is, and what senses are and do. Vision, hearing, touch, smell, taste, pain, knowing, associations, memory, recall, will, dreaming, imagination, and motor system send inputs to, and receive outputs from, the same experiencing programming and space/meta-model/register/processor/memory.

Experiencing programming, decompiling, and observing

Experiencing programming has decompiling and observing.

1. Decompiling

Experiencing programming has a decompiler, for each sense, that works on brain code to discover the code's meaning and to recreate the sensation source as high-level code ready for experiencing. Decompiling uses parsing, metadata, program analyses, data-flow analysis, type analysis, structuring, and code generation.

2. Experiencing programming combines writing and reading to/from the space/light/brightness/color model

Experiencing programming combines putting/writing and getting/reading of spatial datatypes to/from the space/light/brightness/color model, unifies output and input, and unifies making, knowing, and using for behavior. (Compare to language-learning listening and speaking, and compare to inner speech, which has simultaneous speaking, hearing, and understanding.)

3. Observing

Experiencing programming constructs an experiencing interface to read from/write to space surfaces and to define/experience surfaces, spatial relations, and space. Combined writing/constructing and reading/knowing is observing/experiencing unified space/light/dark/brightness/color from a perspective.

Observing makes objective become subjective, and unifies object and rendering, by inverting/turning-inside-out scenes, spatial-relationships, and space.

4. Consciousness

Experiencing programming evolves/develops to consciousness. Being conscious requires being alert and aware:

Being alert is being awake, having experiences, and being ready to pay attention and/or respond to stimuli.

Being aware is observing and knowing experiences, which may depend on memory and perhaps on emotion.

Being conscious has the following functions:

Maintaining continuity of surrounding space and continuity of timed sequences.

Differentiating regions, objects, locations, times, and sensations from different senses at the same location.

Integrating, associating, correlating, and classifying all senses and objects in space, perception, attention, memory, and emotion.

Understanding what is happening in space and time.

Increasing response quickness for actions and perception.

Having awareness of self.

Experiencing in space

Experiencing programming compactly represents the different forms of energy transfers of the senses in space and so has experiencing.

1. How experiencing programming experiences color in space

Experiencing programming uses spatial-computing transformations/interactions to smooth, filter, amplify, expand, and multiply sense signals to represent vision forms of energy transfers in space.

Spatial-computing smoothing and filtering transform digital/discrete signals to analog/continuous fluids.

Spatial-computing amplification and expansion transform microscopic sizes to macroscopic sizes. (Cone-cell diameter is about one micron. The average angle of view is about 0.5 arc minutes, the angular size of one pixel.)

Spatial-computing geometric multiplications transform signals to vectors, bivectors, and trivectors, which have dimensions, so multiplicative products make three-dimensional lines, surfaces, regions, and objects.

Experiencing programming has interactions/multiplications (by multivectors, or by inverse multivectors for divisions) between space/sensation meta-model and consciousness/observing objects and interfaces. Interactions make computational/virtual analog energies, with microscopic energy patterns, so that model spatial-datatype spatial-intensities/qualities are dynamic/kinetic/active and can represent vision energy transfers for different macroscopic brightness and color experiences in space. Interactions do computational/virtual work on model spatial datatypes to put brightness/color at distances along directions from model coordinate origin.

Experiencing-programming consciousness/observing/spatial-programming-language has a three-dimensional model, with one coordinate system, that twins space and colors with the physical world. Experiencing programming knows spatial-datatype brightness/color and direction/distance, as well as spatial-datatype spatial relations, scene textures, and object shapes. It knows viewpoint and orientation, as well as projection techniques and the viewing frame. It knows illumination sources and how light transmits, reflects, refracts, diffracts, and makes shadows. Experiencing programming can therefore use spatial computing and graphical processing to render, rasterize, ray-trace, and map experiences to space. The "rasterization" algorithm puts experiencing-programming brightness/color at distances along directions from model coordinate origin.

2. Experiencing in space

Thus, experiencing programming graphically interacts observing/consciousness and computational/virtual vision spatial-intensities/qualities, spatial datatypes, and models to construct/display/visualize/render/experience space-filling, continuous, three-dimensional, macroscopic light/dark/brightness/color, emitted or reflected from surfaces at distances in directions from coordinate origin and connected to observer (although nothing is between observer and observation).

Making space and making experiences in space are the same writing/reading process. Experiencing unifies space extension (direction/distance/area/volume) and space filling (vision, hearing, touch, smell, taste, and pain sensations), so visual experiencing unifies brightness, color, and space.

Visual experiencing requires three-dimensional space, and three-dimensional space requires experiencing/contrasting/marking. Brightness/color (which require each other) makes the model visible as space. Space/experiencing must have shapes (points, lines, objects, and scenes) and their spatial relations.

Quality is spatial and is only in space. Space is quality without intensity. Intensity in space must have quality. Intensity/quality implies space, and space implies intensity/quality.

Light behaves in the model/space like light rays/waves do in physical space. The first light is like recalled/token/represented brightness and darkness, and the first color is like remembered/imagined color. Light evolves/develops to light and dark colors and then to white, black, and hues.

Perhaps experiencing is faster because macroscopic has less (only important) information.

Appendices about Physics and Mathematics

Physical and Mathematical Properties

Physical and mathematical properties are about surfaces, geometric figures, surface textures, geometric configurations, and physical and chemical properties.

1. Surface and geometric properties

Spatial datatypes can represent all surface and geometric properties.

Surfaces and geometric figures have spatial properties.

Surfaces and geometric figures have patterns.

Surface textures have pattern, fill, spacing, roughness, and waviness.

Geometric properties can be scalars or vectors.

Spatial datatypes can use the following to define surfaces:

Bivectors

Intersections of two three-dimensional regions

Cross-sections of fields, flows, or pressures

Perpendiculars to lines

Extensions of lines

Faces of solids

Inside closed lines

Spatial datatypes can use the following to define regions:

Trivectors

Perpendiculars to surfaces

Extensions of surfaces

Inside closed faces

1.1. Physical quantities

Physical quantities are extensive or intensive:

Extensive quantities are sums/products, or differences/quotients/ratios, over time and/or space. Extensive quantities are scalars. Examples include number, length/distance, area, and volume; time interval; mass, moles, energy, power, and work; vibration and wave; charge; heat and entropy; and plane and solid angle.

Intensive quantities are instantaneous local values at times and places. Scalar intensive quantities include coordinate position/location, clock time/time instant, temperature, frequency, rate, viscosity, concentration, chemical potential, electric potential, capacitance, resistance, and conductance. Vector intensive quantities include direction, distance in a direction, velocity, acceleration, impulse, intensity, momentum, force, pressure, spin, torque, radiation, current, flow, light flux, and illuminance. Matrix and tensor intensive quantities include fluid flow, field flux, and relativistic gravity.

Integrating an intensive quantity over time or space makes an extensive quantity. For example, to find total charge, integrate current over time.

Differentiating an extensive quantity over time and/or space makes an intensive quantity. For example, velocity is the differential of distance over time. Temperature is the differential, at constant volume, of internal energy over entropy.
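The two conversion examples can be sketched numerically. Uniform sampling and the particular values are assumptions for illustration: integrating an intensive quantity (current) over time yields an extensive one (charge), and differencing an extensive quantity (distance) over time yields an intensive one (velocity).

```python
# Sketch: integrate current over time to get charge; difference
# distance over time to get velocity. Uniformly sampled data assumed.

dt = 0.5
current = [2.0, 2.0, 2.0, 2.0]             # amperes, constant
charge = sum(i * dt for i in current)      # coulombs
print(charge)                              # 4.0

distance = [0.0, 1.0, 4.0, 9.0]            # meters at times 0, dt, 2dt, 3dt
velocity = [(b - a) / dt for a, b in zip(distance, distance[1:])]
print(velocity)                            # [2.0, 6.0, 10.0] m/s
```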

2. Spatial properties

Surfaces and geometric figures can have, and spatial datatypes can represent, shapes, symmetries, scale, curvature, and orientation.

Shapes

Bivectors, polygons (typically triangles), spline areas, parallel splines, spline grids, and shape Boolean operations can represent surfaces.

Trivectors, tetrahedrons, spline regions, and shape Boolean operations can represent three-dimensional figures.

Symmetries

Ring (hollow circle), filled circle, ellipse, triangle, square, and hexagon are two-dimensional geometric figures with symmetry.

Disc, sphere, torus, ellipsoid, tetrahedron, and regular polyhedron are three-dimensional geometric figures with symmetry.

Scale

Radius, area, and volume give size (scale) to geometric figures.

Note: For a paraboloid with parabola equation y = k * x^2, height y is proportional to circular cross-sectional area pi * x^2 = pi * y / k; only for y = x^2 does the area equal pi * y.

Curvature

Geometric figures have curvature:

Straight lines have zero curvature.

Circles and ellipses have positive curvature.

Hyperbolas have negative curvature.

Parabolas have positive curvature if concave upward and negative curvature if concave downward. Curvature absolute value is 1 / (latus-rectum / 2) at maximum or minimum point.

Vectors represent curvature.
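The parabola statement can be checked against the standard curvature formula kappa = |y''| / (1 + y'^2)^(3/2). A sketch with an arbitrary focal distance: for y = x^2 / (4f), the latus rectum is 4f, and the curvature at the vertex equals 1 / (latus rectum / 2).

```python
# Sketch verifying the vertex-curvature note for y = x^2 / (4 f).

f = 1.5                       # focal distance (arbitrary choice)
latus_rectum = 4.0 * f
# At the vertex x = 0: y' = 0 and y'' = 1 / (2 f).
kappa = (1.0 / (2.0 * f)) / (1.0 + 0.0 ** 2) ** 1.5
print(kappa, 1.0 / (latus_rectum / 2.0))  # both 1/3
```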

Surfaces can have concave or convex regions. Surfaces have two principal curvatures: maximum and minimum:

Spherical surfaces have positive maximum principal curvature, positive minimum principal curvature, and positive surface curvature. (In general, surface tension can give liquid surfaces negative, zero, or positive curvature.)

Cylindrical surfaces have positive maximum principal curvature, zero minimum principal curvature, and zero surface curvature.

Saddle surfaces have positive maximum principal curvature, negative minimum principal curvature, and negative surface curvature.

Flat surfaces have zero maximum principal curvature, zero minimum principal curvature, and no surface curvature.

Orientation

Lines, surfaces, and regions have spatial orientation. Orientation can be up or down, left or right, backward or forward, and so on.

Vectors represent orientations.

3. Patterns

Patterns have points, lines, angles, and surfaces, and the parts have sizes, positions, and spatial relations that make shapes. Spatial datatypes can represent all patterns.

Shapes have varying and transformation-invariant features, such as lengths, angles, areas, and textures, with repeating and non-repeating elements.

Patterns can be radial or have other symmetries.

Patterns are not additive.

Patterns can have a dimension for magnitude, such as number or area.

Patterns can have internal motions, with frequencies.

Microgeometries and surface-wave patterns

Surfaces can have different three-dimensional polygon (typically triangle) patterns (microgeometries), with uniform or non-uniform spatial distribution. Polygons have surface orientation and curvature.

Microgeometries can mix without overlap or displacement.

Microgeometries can have transverse and longitudinal waves (but no rotations or translations).

Patterns made by changing point states

If a point can have two states, two adjacent points can alternate states to appear oscillating. Any number of points, in any pattern, can have off-on patterns.

If points have any number of states, two adjacent points can have any number of patterns of state pairs. Any number of points, in any pattern, can have state patterns.

If something can detect the whole point array, simultaneously or serially, it detects a succession of patterns.
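The simplest case above (two adjacent two-state points alternating) can be sketched as a minimal model; the step rule and read-out are assumptions for illustration.

```python
# Sketch: two adjacent two-state points alternating states, read as a
# succession of patterns.

def step(states):
    """Flip every point's binary state."""
    return [1 - s for s in states]

points = [0, 1]
succession = [points]
for _ in range(3):
    points = step(points)
    succession.append(points)
print(succession)  # [[0, 1], [1, 0], [0, 1], [1, 0]]
```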

4. Surface textures

Surfaces have surface texture, with elements, waviness, and roughness. Spatial datatypes can represent all surface textures.

Surface-texture elements and element spatial properties

Surface textures have elements: points, line segments, and/or areas. Elements vary in shape, angle, length, width, area, holes, orientation, number, density, gradients, and spatial distribution. For example, surface textures can have only line segments [Julesz, 1987], of same or different widths, and line segments have ends and have crossings, such as corners and intersecting lines.

Surface texture has one or more element shapes/types. Number density (or number) determines element average spacing.

Elements have area. Area density (or total area) determines element average spacing.

Elements can have random or regular patterns. Spatial-distribution regularity level above baseline random spatial distribution determines element minimum, average, and maximum spacing. Elements can be in grid, radial, transverse, or spiral patterns. Surfaces have foreground and background patterns of elements.

Elements can have clustered or dispersed spatial distributions with spacing and periodicity. Surface textures are sparse or dense. Spatial-distribution clustering level above baseline even/dispersed spatial distribution determines element minimum, average, and maximum spacing.

Elements can be repeating or non-repeating.

Surfaces have fill, so surface textures have colors, gradients, shading, and transparency.

Surface textures can diffuse, fade, blur, blend, flood, merge, offset, blacken or whiten, make linear or radial gradients, and change lighting direction and intensity.

Surface-texture spatial properties

Surface textures can be two-dimensional or have spatial depth and be three-dimensional.

Surface-texture spatial properties and features can be uniform across the surface, and so look the same along dimensions, or differ along different dimensions.

Surface textures can have temporal properties, such as translations, rotations, and vibrations.

Microscopic surface textures

Surface texture can be at microscopic scale.

Perhaps color and brightness relate to microscopic surface texture. If so, it must be evenly distributed, because space is isotropic. Surface-texture-element sets have three independent variables. Perhaps spatial density accounts for brightness. Perhaps spatial activity, such as radial, circumferential, transverse, and/or longitudinal vibrations/resonances (which balance in all spatial directions, because space is isotropic), accounts for hue and saturation. Note: Because space and time are isotropic, opposite directions have no real differences.

Physical surface textures

Physical surface textures have surface peaks and valleys with spacing, roughness, and waviness:

Spacing is about average distances between small and large peaks and valleys (periodicities = spatial frequencies) and their numbers per unit area (spatial densities).

Roughness is about non-periodic short-wavelength waves with maximum and average small and large peak and valley deviation from baseline.

Waviness is about long-wavelength waves with maximum and average small and large peak and valley deviation from baseline.

Combinations of spacing, roughness, and waviness are about slopes and average deviation from baseline. More and/or bigger peaks and valleys make bigger slopes and locally curvier surfaces.

5. Simple geometric figures

Spatial datatypes can represent all geometric figures. Simple geometric figures have a spatial configuration like the following:

Point with vector distance (spacing) to another point, line, or plane. Note: Distances can be discrete values, such as electron-orbital radii.

Point vector motion, such as spin.

Point scalar or vector parameter, such as charge.

A spinning charge is a simple geometric figure.

Points

Points have only one property, magnitude.

Points are radially symmetric, so points have only one structure.

Points can translate and oscillate, but are not distinguishable if they rotate, librate, or vibrate. Because spatial directions are isotropic, surface points have only one oscillation type.

Vectors

Vectors have magnitude and direction, with one dimension.

Vectors can translate, oscillate, rotate, and librate, but have no transverse vibrations. Because spatial directions are isotropic, surface vectors have only one oscillation type, only one rotation type, and only one libration type.

Note: An appendix is about alternative representations of vectors.

Bivectors

Bivectors have magnitude and two directions, with two dimensions.

Bivectors can translate, oscillate, rotate, and librate, and have transverse vibrations.

Bivectors have two oscillation types: both vectors extend or contract together, or one extends while one contracts.

Bivectors have two rotation types: in the same plane, or through the third dimension.

Bivectors have two transverse vibration types: the planar angle widens and narrows, or the skew angle widens and narrows.

Surfaces

Continuous surfaces have curvature.

Continuous surfaces can have three different vibrations: longitudinal vibration along normal, radial transverse vibration, and circumferential transverse vibration.

6. Negative, zero, and positive values, and opposites

Spatial datatypes can represent negative, zero, and positive magnitudes and represent opposites.

Negative, zero, and positive values can be because of direction from a reference point in a coordinate system. For example:

Vectors in opposite directions have opposite signs. Displacement, velocity, acceleration, force, momentum, and fields are vectors whose direction in a coordinate system can be in the positive direction or in the negative (opposite) direction. Note: Adding vectors with opposite directions adds a positive magnitude to a negative magnitude (and so is the same as subtracting the second magnitude from the first magnitude).

Clockwise rotation is conventionally negative, and counterclockwise rotation is conventionally positive. Spin-1 bosons have spin projection -1, 0, or +1.

+ and - are in opposite directions from zero along one dimension. Negative and positive spatial directions, spins, and curvatures have the same physics, because three-dimensional space is isotropic.

Reversing the factor order of unit-vector cross products gives opposite vectors: i x j = k but j x i = -k, j x k = i but k x j = -i, and k x i = j but i x k = -j, each pair along one dimension.

x^-1 and x^1 are reciprocals. x^-1 has log -1, x^0 has log 0, and x^1 has log +1, so the logarithms are in opposite directions from zero along one dimension. Note that x^0 = 1 = x^-1 * x^1. Reciprocals do not balance around the value 1. For example, 4^-1 = 1/4, 4^-0.5 = 1/2, 4^0 = 1, 4^0.5 = 2, and 4^1 = 4.
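The reciprocal example can be checked directly: the base-x logarithms of x^-1, x^0, and x^1 sit symmetrically around zero, while the values themselves do not balance around 1.

```python
import math

# Sketch of the note: symmetric logarithms, asymmetric reciprocal values.

x = 4.0
values = [x ** -1, x ** 0, x ** 1]         # [0.25, 1.0, 4.0]
logs = [math.log(v, x) for v in values]    # [-1.0, 0.0, 1.0]
print(values, logs)
print(values[0] * values[2])               # reciprocals multiply to 1.0
```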

7. Physical and chemical properties

Physical and chemical properties, changes, and events are about matter, mass, matter phases (solid, liquid, gas), phase properties, translations, kinetics, momentum, acceleration, force, energy, dynamics, and chemical changes.

7.1. Mediums

Mediums/ethers/substrates can be transparent, translucent, or opaque. They can have a kind of density. They can vibrate.

7.2. Particle configurations

The same or different particles in different numbers and spatial configurations make geometric figures.

Two states

Stereoisomers are particle configurations with left-handed and right-handed states:

Two different tetrahedra, with four atoms of four different types

Two different triangular bipyramids, with five atoms of five different types

Three states

Particle configurations with three states are:

Three atoms, each with the same two parameters that can have value 0 or 1. The three atoms are 00, 01, and 11.

Three compounds, each with two atoms that can be one of two types 0 or 1. The three compounds are 00, 01, and 11.

Three spatial configurations of compounds with four same-type atoms. The configurations are linear (four in a row), iso (four in a T), and cyclic (four in a circle) isomers.

Three octahedral compounds, each with six atoms of five different types

Three two-tetrahedra compounds, each with eight atoms of four different types

Chemicals

Atoms have exact ratios of electrons, neutrons, and protons. Molecules have exact ratios of atoms.

Mixtures

Mixing adds some or many type-2 molecules to many type-1 molecules. Mixtures do not make new molecules.

Volatile molecules leave the liquid and mix with air.

Alloys

Alloying combines some or many type-2 molecules with many type-1 molecules to make new materials.

Solutions

Dissolving some or many type-2 molecules in many type-1 molecules surrounds non-ionic type-2 molecules or splits ionic type-2 molecules. For example, aromatic compounds dissolve in benzene, and acid and base solutions dissolve ionic molecules.

Reactions

Chemical reactions split a molecule or join two molecules to make new molecules.

7.3. Particle physical properties

Particles have physical properties. For example, particles can have mass and charge.

Polarity

Particles can have polar or non-polar polarity.

Electric dipoles: Positive and negative electric charges are poles (electric monopoles). Separated charges are electric-charge dipoles. (Moving electric charges have relativistic effects that separate charges, making magnetism and relativistic electric-charge dipoles. Accelerating electric charges have electric-and-magnetic-field dipole interactions that generate electromagnetic waves, with planar electric waves and perpendicular planar magnetic waves.)

Mass dipoles: Masses are poles (gravitational monopoles). Separated masses make a dipole. The whole system has a center of mass, at which dipole moment is zero, because mass can only be positive (with no opposite). (Moving masses have relativistic effects that separate mass, but mass center still has no dipole moment, so gravity has no counterpart to magnetism. For accelerating masses, mass center still has no dipole moment, so there are no planar gravitational waves.)

Quadrupoles: Masses separated, from center of mass, differently longitudinally and transversely make a quadrupole moment (which is a tensor). For example, uniform-density spheres have no quadrupole moment, but rods, disks, and two spheres have a quadrupole moment. (Two spheres vibrating longitudinally or transversely, and relativistically accelerating masses, have an oscillating quadrupole moment, which makes gravitational waves. Gravitational waves have two orthogonal linear-polarization states, at 45-degree angle, and make gravitational-field surfaces, whereas gravitational fields have only field lines. Gravitational-wave gravitons have spin 2, required by the 45-degree-angle orthogonal linear-polarization states, which is invariant under 180-degree rotation around motion direction.)

Spin and phase

Atoms and molecules can spin:

Spherical particles with no surface marker have no observable spin, because they look the same whether spinning or not.

Ellipsoids with no surface marker have no observable spin around the long direction, because that looks the same whether spinning or not. They have observable spin around any axis through the center perpendicular to the long direction, because of the two protrusions.

Atom p electron orbitals project along the x, y, and z axes. They have observable spin around any axis through the center, because of the six protrusions.

Spherical particles with an observable marker, such as a mass or charge density change or something sticking out, can have observable spin.

Ellipsoids can have an observable marker and/or can have a variability along the long axis (so they look like a pear or dumbbell, for example). Solids can have variability along any axis.

For particles with two kinds of spin, the two rotations can be in-phase or out-of-phase, in many different ways.

Intrinsic spin and spinors

Particles can have intrinsic angular momentum (spin). Fermions have half-integer spins, and bosons have integer spins.

Points look the same in all three dimensions and have no observable spin. Points cannot have markers. Particles with spin cannot be point-like.

Vectors have length and look the same in two dimensions but different in one dimension, so they have no observable spin around the long axis, but can spin around any axis perpendicular to the long axis and through the center. Particles with spin are vector-like.

Vectors can have a marker to make spin observable around the long axis. A line or flag perpendicular to the longitudinal axis can start at the axis or go through the axis and be at the ends or in the middle. A surface, with the same length as the vector, perpendicular to the longitudinal axis can start at the axis or go through the axis.

Marked vectors have two kinds of spin: flipping of the long axis and rotation around the long axis. The two rotations can be in-phase or out-of-phase. For example, one flip of the long axis can correspond with a half rotation around the long axis, so two flips (one rotation) of the long axis, with two half rotations around the long axis, bring both back to original position. Alternatively, two flips (one rotation) of the long axis can correspond with a half rotation around the long axis, so four flips (two rotations) of the long axis, with one rotation around the long axis, bring both back to original position.

Marked vectors can be spinors. Imaginary numbers can be markers, so vectors with complex or hypercomplex numbers can be spinors. Spinors can have two different rotations that are in-phase or out-of-phase, in many different ways. Spinors model fermions.

Handedness

Tetrahedra with four different objects (such as atoms) at the four vertices have right-handed and left-handed forms. Right-handed and left-handed forms have the same composition, but different configurations.

Particles can have handedness. For example, particle spin can be clockwise or counterclockwise with respect to motion direction.

Helicity is right-handed if particle's spin vector and motion vector have same direction, or left-handed if particle's spin vector and motion vector have opposite direction.
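Helicity as defined here reduces to the sign of the dot product of the spin and motion vectors; a minimal sketch (the `helicity` function is illustrative):

```python
def helicity(spin, momentum):
    """+1 for right-handed (spin along motion), -1 for left-handed
    (spin opposite motion), 0 if spin is transverse to motion."""
    dot = sum(s * p for s, p in zip(spin, momentum))
    return (dot > 0) - (dot < 0)
```

For example, spin (0, 0, 1) with momentum (0, 0, 2) is right-handed; reversing the momentum flips the helicity.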

Parity is the result when a particle or event transforms (parity transformation) to its mirror reflection. Parity is even when the transformation results in the same thing. Parity is odd when the transformation results in the opposite handedness.

Chirality is about a particle or event and its mirror reflection. An event is chiral if the right-handed or left-handed states are different. Chiral symmetry is invariance under parity transformation.

Note: If relative motion direction reverses for an observer, helicity flips. For example, if a stationary observer sees right-handed electron spin, an observer moving at velocity greater than the electron's velocity sees left-handed electron spin (opposite helicity). For massive particles, helicity is not preserved, and massive particles have chiral asymmetry. Massless particles move at light speed, so observers cannot have higher velocity than massless particles, so helicity is preserved, and massless particles have chiral symmetry.

Note 2: Gravitation, electromagnetism, and the strong nuclear force use the same equations for particles with right-handed or left-handed spin. The weak nuclear force only acts on particles with left-handed spin and has no effect on particles with right-handed spin, so its physical equations must be in pairs, one for right-handed spin and one for left-handed spin.

Physical fields

Electric (or gravitational) forces connect charges (or masses) through quantum-mechanical wavefunctions, which mediate photon (or graviton) particle exchanges that transfer energy and momentum and cause acceleration (attraction or repulsion). The number of photons (or gravitons) exchanged determines force strength.

Around charges (or masses), electromagnetism (or gravitation) has fields that represent space-time curvature, which causes acceleration, because all particles must travel at light speed through space-time. Fields represent potential energy that can give particles kinetic energy. Space-time curvature (field-line number through unit area) determines force strength.

Space-time curvature and particle exchange are both local. Because all physics is local, objects know nothing of other objects or themselves. They only have accelerations.

Relativistic fields have no medium. Relativistic fields have a source, which can be external (such as the electron for an electromagnetic field) or internal (such as the condensate of virtual Higgs particles for the Higgs field). Relativistic quantum fields are condensates of virtual bosons, and have excitations and so real quanta, which are the (massless or massive) particles.

Thermodynamics and statistical mechanics

Thermodynamic properties, such as temperature, arise from statistical mechanics: temperature measures molecule average random translational kinetic energy. At room temperature, gas molecules have average random speeds of several hundred meters per second.
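As a numerical check on molecular speeds, the root-mean-square speed is sqrt(3kT/m); this sketch (the `v_rms` helper is illustrative) gives roughly 500 m/s for nitrogen at room temperature:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def v_rms(molar_mass_kg_per_mol, temperature_k):
    """Root-mean-square molecular speed sqrt(3*k*T/m) in m/s."""
    m = molar_mass_kg_per_mol / N_A
    return math.sqrt(3 * K_B * temperature_k / m)

# Nitrogen (N2, 0.028 kg/mol) at room temperature (300 K):
speed = v_rms(0.028, 300.0)   # on the order of 5e2 m/s
```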

7.4. Physical properties with negative, zero, and positive values and opposites

Physical properties can have negative, zero, and positive values because of electric force and its quanta. For example, quarks have electric charge plus, or minus, one-third or two-thirds. Electrons, neutrons, and protons have electric charge -1, 0, and +1, respectively. Negative and positive charges have the same type of physics.

Physical properties can have negative, zero, and positive values because of direction from a reference point in a coordinate system. Displacement, velocity, acceleration, force, momentum, and fields are vectors whose direction in a coordinate system can be in the positive direction or in the negative (opposite) direction.

Physical situations can have opposites. + and - are in opposite directions from zero along one dimension. Negative and positive spatial directions, spins, and curvatures have the same physics, because three-dimensional space is isotropic.

8. Types and type theories

Types/datatypes include numbers, logical values, strings, dates, times, and arrays.

In type theory, terms (constants or variables) must have a type. For example, the term n has the real-number type, and the term word has a text type.

A dependent term/name depends on the type or a parameter value. An independent term can be a constant, such as a person's name, or a variable with constant type and no parameter, such as a clock time.

9. Homotopy

Mathematical homotopy classifies geometric regions by their path types. For example, paths with constant endpoints are homotopic if one can be continuously deformed into the other while remaining in the geometric region.
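In the punctured plane, for example, closed paths are homotopic exactly when they have the same winding number around the removed point; a minimal sketch for polygonal loops (the `winding_number` helper is illustrative):

```python
import cmath

def winding_number(closed_path, center=0j):
    """Winding number of a closed polygonal path (a list of complex vertices)
    around a center point, by summing the turning angles between vertices."""
    pts = [p - center for p in closed_path]
    total = 0.0
    for a, b in zip(pts, pts[1:] + pts[:1]):
        total += cmath.phase(b / a)
    return round(total / (2 * cmath.pi))

# A square loop around the origin winds once; around a far-away center, zero times.
square = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
```

Traversing the same square in reverse gives winding number -1, a different homotopy class.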

Perhaps logical steps, and set theory, correspond to defined-geometric-region homotopies.

Perhaps programming data structures, data types, and programs are the foundation of mathematical elements, sets, and functions.

10. Light rays

In the ray model, light rays have none of the electromagnetic-wave properties: amplitude, frequency, and wavelength. Beams are collections of independent light rays.

Light rays are comparable to, and contrast with, flows. They have one direction. They are continuous. They have energy. They have homogeneous composition. They have laminar flow (with no eddies or vortices, because varying flow takes time). Their velocity is constant, with no flow pulses or oscillations (because varying flow takes time). Their density and pressure are constant (because varying flow takes time). They have no viscosity (because they have no particles). They have no rotations/spins (because they have no particles).

Motions

Geometric figures can move around in space and have internal motions, which can have modes. Geometric figures can have translation, expansion-contraction, curving, rotation, vibration, and coordinate transformations.

1. Motion tracking

Computing can spatially:

Track trajectories, as object, eye, head, or body movements change relative lengths, orientations, and angles.

Transform coordinates and perform projection, translation, rotation, oscillation, scaling, and skewing.

Use on-off patterns to model cellular automata [Wolfram, 1994].

2. Coordinate transformation

Coordinate transformations translate, rotate, scale, and skew the coordinate system. Coordinate transformations are vector and tensor operations. Use line-segment scaling to find the product, and use similar-triangle corresponding-side ratios to find the sine of the angle.
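The transformations named above can be written as 3x3 homogeneous matrices for the plane; a minimal sketch (helper names are illustrative):

```python
import math

def apply(matrix, point):
    """Apply a 3x3 homogeneous 2-D transform to a point (x, y)."""
    x, y = point
    v = (x, y, 1.0)
    out = [sum(matrix[r][c] * v[c] for c in range(3)) for r in range(3)]
    return (out[0], out[1])

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def translation(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def scaling(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def skew(kx):
    return [[1, kx, 0], [0, 1, 0], [0, 0, 1]]
```

For example, rotating (1, 0) by 90 degrees gives (0, 1); homogeneous coordinates let translation, which is not linear in (x, y) alone, compose with the linear transforms by matrix multiplication.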

3. Translation, zooming, and skewing

Translations move points linearly and have velocity. Use line-segment scaling for translating, zooming, and skewing/warping points or line segments. For two translations, add motion vectors.

4. Expansion-contraction

Expansion increases surface area, with radial outward movement.

Contraction decreases surface area, with radial inward movement.

5. Curving

Curving increases or decreases surface curvature.

6. Rotation

Rotation is turning, or orbiting around an axis, counterclockwise or clockwise. Rotations change angle and have frequency. Start with the known rotation angle, use line-segment scaling to find the product, and use similar-triangle corresponding-side ratios to find the sine of the angle. For two rotations, add spin/orbit/rotation vectors.

Spins and orbits can wobble around rotation-axis direction. An example is gyroscope wobble.

Librations are small rotations that reverse direction and have frequency and amplitude.

7. Vibrations and waves

Vibrations have amplitude, frequency, and wavelength.

Linear vibrations: Linear vibrations are along a line (longitudinal) or across a line (transverse). Transverse vibrations can rotate and oscillate.

Oscillations: Oscillations (longitudinal vibrations) are small translations that reverse direction and have frequency and amplitude.

Surface vibrations: Surfaces can have two-dimensional vibrations. Surface vibrations go up and down transversely to the surface and/or back and forth longitudinally along the surface. Vibrations can be radial from center outward. For example, the surface center goes up and down transverse to the surface, while the surface circumference is stationary. Vibrations can be circumferential around the perimeter. For example, the circumference has transverse or longitudinal vibrations, while the center is stationary.

Amplitude: Maximum sideways displacement is transverse-vibration amplitude. Maximum lengthening/shortening is longitudinal-vibration amplitude.

Temporal frequency and wavelength: Vibrations go up and down, or back and forth, a number of cycles per second, to make temporal frequency. Wavelength times temporal frequency is wave speed.

Superposition: Two vibrations add independently to make a new vibration. Superposed waves beat at the difference of their frequencies: waves with frequencies 2 and 3 make a superposed wave with beat frequency 1, waves with frequencies 3 and 5 beat at frequency 2, and waves with frequencies 2 and 5 beat at frequency 3. (Note that wave packets are infinitely many waves that interfere constructively only in a small space region, because of their relative phases and amplitudes.)
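The frequency pairs above can be checked numerically: two superposed unit waves with frequencies 2 and 3 beat at the difference frequency 1, so the combined waveform repeats with period 1. A minimal sketch (the `superposed` function is illustrative):

```python
import math

def superposed(t, f1=2.0, f2=3.0):
    """Sum of two unit-amplitude cosines with frequencies f1 and f2 (Hz)."""
    return math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)

# The beat (envelope) frequency is |f1 - f2| = 1 Hz, so the superposed
# wave repeats with period 1 s: superposed(t) == superposed(t + 1).
```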

Position: Surface waves can have stationary or moving nodes.

Phase: Two same-frequency vibrations can go up and down, or back and forth, together, to be in phase. In-phase vibrations enhance amplitude. Two same-frequency vibrations can be in opposite directions, to be out of phase. Out-of-phase vibrations cancel amplitude.

Spatial frequency: Resonating waves can have a number of waves from point to point, or around circumference, to make spatial frequency. Wavelength is inversely proportional to spatial frequency.

Resonance: Vibrations can bounce back and forth between two stationary points, in phase, to make a standing wave. Vibrations can go around a perimeter in phase. Physical objects have internal electromagnetic forces, among protons and electrons, across diameters, so that, when they vibrate, they resonate at a fundamental frequency (and its harmonic frequencies):

If same-frequency vibrations collide, or interact, with the object, the waves resonate (resonance), and the object can absorb vibration energy by increasing vibration amplitude.

If different-frequency vibrations interact with the object, there is wave interference, decreased vibration amplitude, and low energy absorption.

Particle creation: Hadrons have internal strong nuclear forces, among quarks, across diameters (cross-sections), so that, when they vibrate, they resonate at a fundamental frequency. When protons or neutrons collide, if net kinetic energy equals a hadron's rest-mass energy, the hadron appears. If rest-mass energy is high, cross-section is large. Collision vibration frequency, which determines kinetic energy, equals (resonance) hadron vibration frequency, which determines rest mass. (However, the strong nuclear force pulls such hadrons apart in the time, about 10^-23 second, that it takes for light to travel across particle diameter, which is one cycle of the quantum-mechanical-particle-wave frequency. Hadrons with larger cross-sections, and higher particle-wave frequencies, survive longer.)

Circuits: Series and/or parallel circuits have resistances, inductances, and capacitances. In series circuits, equal inductance and capacitance make minimum impedance, and resonance at a voltage frequency increases current. In parallel circuits, equal inductance and capacitance make maximum impedance, and resonance at a voltage frequency decreases current.
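The series-circuit case can be sketched numerically: at the resonant frequency 1/(2*pi*sqrt(L*C)), the inductive and capacitive reactances cancel, and the impedance magnitude drops to the resistance R. Helper names are illustrative:

```python
import math

def series_impedance(R, L, C, f):
    """Magnitude of series-RLC impedance: sqrt(R^2 + (wL - 1/(wC))^2)."""
    w = 2 * math.pi * f
    return math.sqrt(R ** 2 + (w * L - 1 / (w * C)) ** 2)

def resonant_frequency(L, C):
    """Frequency at which inductive and capacitive reactances cancel."""
    return 1 / (2 * math.pi * math.sqrt(L * C))
```

Away from resonance, the reactance term dominates and the impedance rises, so current falls, matching the resonance behavior described above.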

Bivector vibrations: Bivector vibrations can model all surface vibrations, including radial and circumferential vibrations, with the following types:

Planar angle between vectors increases, then decreases, and so on. (If the two vectors both point left, then right, and so on, the angle stays the same.)

One vector points up then down while the other vector points down then up, and so on, skewing the angle. (If the two vectors both point up, then down, and so on, the angle stays the same.)

Both vectors increase length, then decrease length, and so on, so vectors have same-phase longitudinal vibrations.

One vector increases length while the other vector decreases length, and so on, so vectors have opposite-phase longitudinal vibrations.

Both vectors have transverse waves that reverberate to make standing waves.

8. Flows

Three-dimensional physical-fluid laminar or turbulent flows have compositions, directions, velocities, densities, pressures, viscosities, and spins:

Longitudinal motions have different rates and rate changes over their cross-section.

Transverse motions have different rates and rate changes in different radial and/or rectangular directions.

Laminar flows have constant direction, with constant or varying velocity, including flow-rate pulses and oscillations. Turbulent flows change directions and velocities. Turbulent flows can have eddies, vortices, and their combinations, differing in size, scale, shape, and energy. When the kinetic energy of part of a fluid flow overcomes viscous internal friction, typically in low-viscosity fluids, the fluid has non-stable interacting vortices, with different sizes and frequencies, as fluid layers have different velocities. Turbulence is like white noise, with all frequencies. Turbulence includes oscillations, skewed varicose instabilities, cross-rolling, knots, and zigzags. Higher viscosity lowers turbulence. (Because space is isotropic in all three dimensions, all transverse directions are equivalent.)

Compositions can be homogeneous or nonhomogeneous.

Density is mass per volume. Flows have particles with different masses, sizes, and shapes. Flows can have different densities, making compressions or expansions.

Pressure is force per area. Flows depend on pressure gradients.

Viscosity is internal friction, resistance to shear stress between gas or liquid parallel layers (or to tensile stress due to gas expansion). Intermolecular force causes liquid viscosity. (Gas diffusion contributes to gas viscosity.)

Flow particles can rotate.

Flows can have oscillations, librations, vibrations, rotations, and starting/stopping.

Flows are vector fields. Mathematical tensors can model fluid flows. Tensors transform longitudinal and transverse pressure-and-velocity components.

9. Vector-field properties

Vector fields have intensive properties/operations/operators: densities, gradients, divergences/convergences, and rotations/curls.

Vector-field densities: Flows have numbers of vectors per cross-sectional area. Surface points have different area densities and/or curvatures. Vector fields have vector densities and magnitudes.

Vector-field gradients: Flows have different speeds at different places in cross-sections. Surface points have directions of maximum descent or ascent. Vector fields have varying vector magnitudes.

Vector-field divergences/convergences: Flows have divergences, linear flows, and convergences. Vector fields have sources and sinks.

Vector-field rotations/curls: Flows are linear or have torques. Vector fields are conserved or have rotations.
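The divergence and curl operators above can be approximated by central differences; a minimal two-dimensional sketch (helper names are illustrative): the source field (x, y) has divergence 2 and no curl, while the rotation field (-y, x) has curl 2 and no divergence.

```python
def div2d(fx, fy, x, y, h=1e-5):
    """Numerical 2-D divergence d(fx)/dx + d(fy)/dy by central differences."""
    return ((fx(x + h, y) - fx(x - h, y)) + (fy(x, y + h) - fy(x, y - h))) / (2 * h)

def curl2d(fx, fy, x, y, h=1e-5):
    """Numerical 2-D curl (z component) d(fy)/dx - d(fx)/dy."""
    return ((fy(x + h, y) - fy(x - h, y)) - (fx(x, y + h) - fx(x, y - h))) / (2 * h)

# Source field (x, y): pure divergence.  Rotation field (-y, x): pure curl.
```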

Mediums and Ethers

Mediums are physical substances that occupy volume and have physical and mathematical properties.

A special medium is ether.

1. Media

Mediums can be gases, liquids, or solids. Mediums can be geometric only.

Mediums can be continuous or have particles. For example, water has closely packed molecules that can act like continuous manifolds macroscopically. Geometric mediums must be continuous or discrete.

Mediums can be stationary, have parts or regions that move, or move as a whole.

Mediums can have density. Geometric mediums have no density.

Mediums and geometric mediums can have elasticity. Solids can be rigid or quasi-rigid. Mediums with elasticity can vibrate. Gases have no shear elasticity, so they carry longitudinal but not transverse waves.

Mediums can fill space, fill matter, or be local.

2. Caloric and heat

To explain heat in matter and heat flows from hotter bodies to colder bodies, caloric was supposed to be a continuous, highly-elastic, weightless gas, pervading matter. It did not change density, state, or substance. It was transparent and had no other sensible properties. Because it had no parts, it could not be broken up or put together. Its regions repelled each other, making heat flow.

There is no need for caloric because heat is random translational kinetic energy.

3. Phlogiston and burning

To explain burning and rusting, phlogiston was supposed to be a gas pervading matter. Burning and rusting released phlogiston. The remaining calx from a burned metal always weighs more than the original metal, so phlogiston was supposed to have negative mass. Phlogiston did not change density, state, or substance, and was transparent and had no other sensible properties.

There is no need for phlogiston because burning and rusting are chemical oxidations.

4. Luminiferous ether and electromagnetic waves

To explain how electromagnetic waves traveled through space, luminiferous ether was supposed to be a stationary, continuous, inert, transparent, low-density, quasi-rigid and highly-elastic solid, distributed evenly through space and in matter. It could vibrate and so carry electromagnetic waves.

Fizeau (1849) measured light speed through the atmosphere near earth (stationary and moving air). He later (1851) measured light speed in stationary and moving water. He found that light speed is finite, that moving air does not change it, and that moving water drags light along only partially (the Fresnel drag coefficient). He concluded that luminiferous ether must be essentially stationary, not carried along by matter.

Hertz (1888) transmitted radio waves through air and showed that electromagnetic waves have finite velocity, indicating that luminiferous ether must be like matter.

Lorentz's theory (1892) linked electromagnetic radiation and matter, which both have momentum and energy, so both are distributed energy, and luminiferous ether must have constant geometry.

The special theory of relativity (1905) equates mass and energy and states that all constant-velocity reference frames are equivalent. Therefore, luminiferous ether can move at any constant velocity, has no particles or internal motions, and is continuous, deformable, fluid, evenly distributed, and inert.

Luminiferous ether had to be solid because electromagnetic waves are transverse waves (with polarization), which are not possible in a fluid, because fluids cannot support shear stress.

Luminiferous ether had to have no parts, because parts could have internal motions and flows, making a fluid.

Luminiferous ether had to be quasi-rigid, because such materials can have small deformations and so can vibrate. Completely rigid material cannot vibrate. Note: Lenses and mirrors can have material distortions that change density, making light rays from a point not focus at a single point (aberration), and only quasi-rigid material allows aberration.

Luminiferous ether could not compress or expand, and so did not change density, change state, or disperse.

Luminiferous ether was inert to chemical or physical change.

Luminiferous ether was not sensible or perceptible, and so was transparent.

There is no need for luminiferous ether because electromagnetic-wave electric and magnetic field interactions cause wave propagation, and the fields need no physical medium.

5. Space

Newton (1685) assumed space to be absolute, with unchanging coordinates and with no translations, rotations, or vibrations. Time is also assumed absolute. Accelerations, including rotations, are relative to absolute space. Newton also assumed that immovable absolute space had physical properties, but they were not perceptible (making a physical ether, and, at least, a geometric ether).

To avoid absolute space, Mach (1883) proposed that accelerations are relative to universe total mass distribution, which causes the inertia of masses. However, that seems to require action at a distance. (Universe total mass distribution makes a kind of physical ether.)

Einstein's special theory of relativity (1905) equates mass and energy and states that all constant-velocity reference frames are equivalent, so space-time is relative. In the general theory of relativity (1915), space-time geometry changes depending on masses/energies at space locations, and masses/energies move through changing space-time geometry, making apparent accelerations. Gravitational potentials determine space metrics. Empty space-time has Cartesian geometry, and masses/energies make curved space-time. There is no action at a distance, because gravitational force is curved space-time. (Space-time always has geometric properties, and is a geometric ether.)

6. Electromagnetic fields

Electromagnetic fields are mediums.

Electromagnetic fields do not affect space-time geometry.

Making New Things

Color and brightness experiences are new things. Also, brains can build new senses: starting with touch, people working with magnets learn to build a sense of magnetism.

Splitting, joining, and rearranging existing things can make new things. Joining includes multiplying/finding a product. Splitting includes dividing/finding a quotient. Note: Adding or subtracting does not make new things, but only larger or smaller numbers of the same thing.

Transforming existing things can make new things. Compression, expansion, and torsion can make new things.

Maximizing, finding the limit of, and converging can help make new things.

Scaling can make new things, and extrapolation and interpolation are scalings. Scaling makes the microscopic become macroscopic (statistical mechanics is microscopic and thermodynamics is macroscopic, yet they describe the same thing). Scaling can make things continuous: for example, digital information can become continuous.

Creating something physical requires energy and/or information transfer.

1. New mathematical things by joining/multiplying or splitting/dividing

Multiplying or dividing two same or different independent continuous quantity types makes a new quantity type.
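This can be sketched computationally by tracking unit exponents: multiplying or dividing two quantities adds or subtracts their exponents, making a new quantity type. The `Quantity` class is an illustrative sketch, not a standard library:

```python
class Quantity:
    """A value with unit exponents, e.g. meters/second = {"m": 1, "s": -1}."""

    def __init__(self, value, units):
        self.value = value
        self.units = {u: e for u, e in units.items() if e != 0}

    def _combine(self, other, sign):
        units = dict(self.units)
        for u, e in other.units.items():
            units[u] = units.get(u, 0) + sign * e
        return units

    def __mul__(self, other):
        return Quantity(self.value * other.value, self._combine(other, +1))

    def __truediv__(self, other):
        return Quantity(self.value / other.value, self._combine(other, -1))

# Dividing distance by duration makes the new quantity type velocity.
distance = Quantity(10.0, {"m": 1})
duration = Quantity(2.0, {"s": 1})
velocity = distance / duration    # value 5.0, units {"m": 1, "s": -1}
```

Dividing velocity by duration again makes acceleration, and multiplying velocity by duration cancels the seconds, recovering a distance.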

Geometric operations make new geometric figures by joining or splitting existing geometric figures. For example:

Polygons are connected lines.

Geometric projection links points to projected points.

Geometric drawing operations can construct points, lines, and surfaces.

Line generators can make ellipsoids.

Lines and vectors connect two points.

2. New computational things by joining or splitting

Computer programs join or split two same or different independent continuous datatypes to make a new datatype. For example:

Arrays can be series of strings.

A model and its parts and activities correspond to physical objects and events.

3. New physical things by physical interactions of joining or splitting

Physical interactions can make new things.

New physical things can result from joining existing things to make larger and more-complex things. For example, features combine (by spatial and/or logical configuration) to make objects, and atoms combine (by chemical reaction) to make molecules. (However, most pairs of things do not have a way to join.)

New physical things can result from multiplying two same or different independent continuous properties. For example, gravitational force multiplies two masses (scalar quantities), and torque multiplies force and radius (vector quantities). (However, most pairs of quantities do not have a way to multiply.)

New physical things can result from splitting existing things to make smaller things. For example, atomic nuclei can split (by nuclear fission) to make smaller nuclei, and molecules can split (by chemical reaction) to make smaller molecules. (However, most things do not have a way to split.)

New physical things can result from dividing existing quantities. For example, electric charge divided by time (scalar quantities) makes current, and torque divided by radius (vector quantities) makes force. (However, most things do not have any dividers.)
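The multiplying and dividing examples in this section can be collected in a short sketch (helper names and sample values are illustrative):

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravity(m1, m2, r):
    """Multiplying two masses (and dividing by r^2) makes a force."""
    return G * m1 * m2 / r ** 2

def current(charge, time):
    """Dividing electric charge (coulombs) by time (seconds) makes amperes."""
    return charge / time

def force_from_torque(torque, radius):
    """Dividing torque by radius makes a force."""
    return torque / radius

# Earth's mass and radius give roughly 9.8 N on a 1 kg mass at the surface.
surface_weight = gravity(5.97e24, 1.0, 6.371e6)
```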

New physical things can result from symmetry breaking. Perturbations can move systems from equilibrium. Vacuum fluctuations can reduce quantum-system symmetry. Transformation variabilities can reduce symmetry. Merging symmetric (or opposite) states breaks symmetry. Perhaps symmetry breaking gives rise to experiences. Opponent processes have two symmetric states, so perhaps their breakdown makes asymmetric states, as the states differentiate into black and white, yellow and blue, and red and green. Note that symmetry can be approximate.

New physical things can result from symmetry making. States can move from an unstable to a stable (equilibrium) state. Reducing/damping fluctuations increases symmetry. Transformations can reduce variables or join two variables as opposites or end points. Splitting a state can make symmetric (or opposite) states. Perhaps symmetry making gives rise to experiences, by uniting different things into a symmetric whole. Using only descriptions has ambiguities, but images are unambiguous. For example, calculations using inputs to two ears result in two space locations, but sound comes from one location.

4. Physical creation at a distance

To create something physical at a distance requires energy transfer from existing to new and/or sending and receiving information signals. Some examples are:

An object can have a field around it.

Two objects can exchange particles and/or energy.

An object can send a signal to another object and receive the same or modified signal back.

Electrons in atoms can change orbital level.

Electromagnetic induction can propagate electromagnetic waves.

Two objects can have entanglement in a single quantum-mechanical wave.

Wave interference can make local wave resonances, and quantum-mechanical-wave interference can make local particles.

Quantum-mechanical-wave discrete-energy-level fluctuations can make virtual particles.

Quantum-mechanical-wave energy jumps/transfers can propagate charges, such as positive-charge "holes", through crystals.

5. Transformations

Transforming existing things can make new things.

Transformations can change shapes, emphasize something different, associate or disassociate parts, and change viewpoint.

6. Energy and/or information transfer

Creating something new requires energy and/or information transfer.

In cortical neural networks and neural assemblies, input opponent-process pairs and non-opponent processes have brightness, color, and space-location information. Geometric/spatial computing has constructor functions/methods. Computation joins/finds-products-of and splits/finds-quotients-of information structures and processes to construct the display and viewpoint functions and make light and space.

7. Alternative vectors and their representations

Vectors represent force, velocity, and other intensive quantities that have magnitude and direction. Vectors indicate magnitude by their length.

Vectors can have alternative representations that may relate to color and brightness.

7.1. Width-vectors

A new vector type could have unit length in the direction of force, velocity, or other intensive quantity and indicate magnitude by width. Such "width-vectors" have all vector properties:

Adding vectors: Add the two widths by vector addition. Resultant width-vector has unit length in the direction at a -90-degree angle (rotating clockwise) to resultant-width vector direction.

Vector dot product: Calculate the dot product of the two widths.

Vector cross product: Calculate the cross product of the widths. Resultant-width-vector has unit-length direction in the same direction as the cross product of the two traditional vectors.

7.2. Extended-width-vectors

A new vector type could have non-unit length in the direction of force, velocity, or other intensive quantity and indicate magnitude by width. Such "extended-width-vectors" can represent areas, bivectors, or right-angle electromagnetic values.

For example, an extended-width-vector could have length 1, width 1, and area 1; length 0.5, width 2, and area 1; or length 0.33, width 3, and area 1. The widths have ratio 1:2:3, the lengths have the inverse ratio, and the areas are equal.

7.3. Area-vectors

A new vector type could have cross sections and area, with two widths as vectors. Such "area-vectors" can represent volumes and trivectors. Area-vectors can have area with any shape: triangle, circle, and so on.

7.4. Vector-termini shapes

A new vector type could have shapes with area at vector terminus or vector origin.

Algebras, Algebraic Geometry, and Vector Algebras

Algebras have a set of elements and operations over elements. Elements may be numbers, variables, or vectors. Operations can be addition, subtraction, multiplication, and division.

Algebras include groups, fields, and rings.

A group is a set of elements that has an associative operation that combines two elements to make an element in the set. The set has an identity element. Each element has an inverse. For example, addition over the integers forms a group, with identity element 0. For geometric objects with symmetries, specific geometric transformations, such as rotation by an angle, over the elements form a group. The permutations of the roots of a polynomial equation that preserve algebraic relations among the roots form a group (the Galois group).
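For small finite sets, the group axioms (closure, associativity, identity, inverses) can be checked by brute force; a minimal sketch (the `is_group` helper is illustrative): addition modulo 4 passes, while {1, 2, 3} under multiplication modulo 4 fails because 2*2 = 0 leaves the set.

```python
def is_group(elements, op, identity):
    """Brute-force check of closure, associativity, identity, and inverses.
    Practical only for small finite sets."""
    elements = list(elements)
    for a in elements:
        if op(a, identity) != a or op(identity, a) != a:
            return False                       # identity fails
        if not any(op(a, b) == identity for b in elements):
            return False                       # no inverse for a
        for b in elements:
            if op(a, b) not in elements:
                return False                   # closure fails
            for c in elements:
                if op(op(a, b), c) != op(a, op(b, c)):
                    return False               # associativity fails
    return True

# Addition modulo 4 forms a group with identity 0 (the rotations of a
# square by 0, 90, 180, and 270 degrees compose the same way).
z4 = is_group(range(4), lambda a, b: (a + b) % 4, 0)
```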

A field is a set of elements with addition (subtraction) and multiplication (division). A field has associative addition, commutative addition, an additive identity element, additive inverses, associative multiplication, commutative multiplication, a multiplicative identity element, multiplicative inverses, and distribution of multiplication over addition. The rational numbers, real numbers, and complex numbers are fields. The rational functions form a field.

A ring generalizes a field: multiplication need not be commutative, and multiplicative inverses need not exist. For example, the integers with addition and multiplication form a ring but not a field.
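The ring/field contrast can be checked numerically: modulo a prime, every nonzero element has a multiplicative inverse (a field), while modulo a composite number some elements do not (only a ring). A small sketch:

```python
# Does every nonzero element modulo m have a multiplicative inverse?
def has_all_inverses(m: int) -> bool:
    return all(any(a * b % m == 1 for b in range(m)) for a in range(1, m))

field_like = has_all_inverses(5)   # True: 5 is prime, so Z mod 5 is a field
ring_only = has_all_inverses(6)    # False: 2, 3, 4 have no inverse mod 6
```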

1. Algebraic geometry

Polynomials, which can have one or more variables, can represent geometric objects, including straight lines, curves, and surfaces. Algebraic geometry is about finding the variable values at which polynomials have value zero, which may be at singular points, inflection points, or points at infinity. Algebraic geometry uses commutative algebra.

Algebraic geometry is also about systems of polynomial equations, geometric-object relations, and geometric-object topologies.

2. Vector algebras

Vector algebras have vectors, as elements, and scalar and vector addition and multiplication, as operations.

2.1. Linear algebra

Linear algebra is about linear equations and functions and their addition and multiplication operations. Linear algebra is also about linear mappings. One kind of vector algebra is about linear mappings among vectors in vector spaces, which involve vector addition and scalar multiplication.

Matrices and their operations involve linear algebra, and matrices can represent vectors.
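A linear mapping represented as a matrix can be sketched in plain Python; the mat_vec helper below is illustrative, not a library function:

```python
# A linear mapping between vectors, represented as a matrix acting by
# matrix-vector multiplication.
def mat_vec(matrix, vector):
    return [sum(row[i] * vector[i] for i in range(len(vector)))
            for row in matrix]

scale_x = [[2.0, 0.0],
           [0.0, 1.0]]        # doubles the x-component
v = [3.0, 4.0]
w = mat_vec(scale_x, v)       # [6.0, 4.0]

# Linearity: A(u + v) equals Au + Av.
u = [1.0, -1.0]
lhs = mat_vec(scale_x, [u[i] + v[i] for i in range(2)])
rhs = [mat_vec(scale_x, u)[i] + w[i] for i in range(2)]
```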

2.2. Vector calculus

Vector calculus has dot-product and cross-product algebraic operations on vectors in three-dimensional space, making a vector algebra (which generalizes to geometric algebra).
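The two operations can be sketched directly; dot and cross below are minimal pure-Python versions:

```python
# Dot and cross products in three dimensions, the two operations of
# classical vector calculus.
def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

e1, e2 = (1, 0, 0), (0, 1, 0)
perp = cross(e1, e2)          # (0, 0, 1): perpendicular to both inputs
orthogonality = dot(e1, e2)   # 0 for perpendicular vectors
```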

2.3. Algebra over a field

A vector space over a field, equipped with vector addition and a bilinear vector product, forms an algebra over that field.

2.4. Hypercomplex numbers

The vector algebras of quaternions, biquaternions, hyperbolic quaternions, and other hypercomplex numbers have specific sums and products.

3. Differential forms

Integrals of functions over curves, surfaces, and solids (manifolds) have two parts. One part is the region of integration, an interval or region of the function's domain. The other part is the differential form, the integrand defined over the curve, surface, or solid. For example, a 1-form (linear form, covector) is f(x) * dx, built from a one-variable function. A 2-form is a multi-variable integrand over a surface. A 3-form is a multi-variable integrand over a region. Note: A 0-form is simply a function.

Differential forms have an algebra, featuring the exterior product (wedge product). For example, the exterior product of two vectors is a bivector. Note: 1-forms are duals to vector fields.

Differential forms represent multivariable calculus without using coordinates.

Differential forms have an orientation.

To relate function integration and differentiation, differential forms have an exterior derivative, which, for example, maps the 0-form f(x) to the 1-form f'(x) * dx. To illustrate, if a 2-form gives the flux through an infinitesimal surface at each domain point, its exterior derivative gives the net flux out of an infinitesimal volume at each domain point.
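For 0-forms, this relation is the fundamental theorem of calculus: integrating the exterior derivative f'(x) dx over [a, b] gives f(b) - f(a). A numeric sketch with f(x) = x^3 and a midpoint rule (the step count is chosen arbitrarily):

```python
# Numeric check that the integral of the exterior derivative f'(x) dx
# over [a, b] equals the boundary difference f(b) - f(a).
f = lambda x: x**3
df = lambda x: 3 * x**2        # coefficient of the 1-form d f = f'(x) dx

a, b, n = 0.0, 2.0, 100_000
h = (b - a) / n
integral = sum(df(a + (k + 0.5) * h) for k in range(n)) * h  # midpoint rule
boundary = f(b) - f(a)         # 8.0
```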

To relate function integration and differentiation, differential forms have an interior product (interior derivative), which contracts the form with a vector field.

Geometric Algebra

Vector graphics can use geometric algebra, whose expressions do not use coordinates and do not depend on a specific coordinate system, and so allow any choice of coordinates. Geometric algebra extends the real-number system by adding direction (the basis of geometry), making the directed-real-number system. (The imaginary number i can have direction, and, in a directed-complex-number system, there can be three different imaginary numbers i1, i2, and i3, or there can be a chosen or resulting direction.)

Geometric algebra unites all the algebraic systems used to describe geometric relations: quaternions, complex analysis, calculus of differential forms, and vector, tensor, spinor, and matrix algebras.

Point, line, plane, the incidence relation meet (for example, point of intersection of two lines), the incidence relation join (for example, all points on the line between two points), and duality (for example, of points and planes), of synthetic/axiomatic geometry, have specific representations in geometric algebra.

Geometric algebra [Clifford, 1878] [Grassmann, 1862] is Clifford algebra applied to real fields.

Geometric algebra extends vector algebra and projective geometry.

1. Multivectors in Euclidean three-dimensional space

Euclidean three-dimensional space has real-number coordinates, whose algebraic structure is about inner products. For Euclidean three-dimensional space, geometric algebra GA(3) is an eight-dimensional vector (multivector) space. This geometric algebra subsumes three-dimensional vector calculus.

Three-dimensional points are scalars (zero-length vectors). Scalars can represent numbers.

Lines are vectors. Vectors have length, direction, and direction sense. Vectors can represent complex numbers.

Surfaces are bivectors. Angles are a unit bivector times angle size. Bivectors can represent quaternions.

Regions are trivectors. Trivectors can represent hypercomplex numbers.

Multivectors are linear combinations of scalars, vectors, bivectors, and trivectors. Multivectors have:

Dimension (grade) three or less, defining three dimensions.

Scalar quantity (magnitude).

Space orientation angle (direction) and relative direction (direction sense): up or down, inside or outside, or positive or negative.

Integrals can reduce to scalars. Vectors can represent derivatives. Bivectors can represent fields. For example, in electromagnetism with space-time, the electromagnetic field is a bivector, and the charge and current density together, which is a derivative of the field, is a vector. The electric charge over a domain is an integral. Note: For space-time, points have a four-dimensional vector with three dimensions for space and one dimension for time.

Multivectors define quantities and surfaces, including orthogonals, tangents, paths, trajectories, flows, fields, potentials, translations, torsions, compressions, tensions, accelerations, and spins. For example, in electromagnetism with separate space and time, the electromagnetic field is a multivector due to electric-field scalar charge and magnetic-field vector current, and current is a multivector due to charge density and flow. Note: With separate space and time, points have a three-dimensional vector for space and a one-dimensional scalar for time.

Geometric algebras represent points, lines, planes, regions, and all geometric figures by multivectors. For example, in conformal geometric algebra, circles are outer products of three points. Multivectors do not use coordinates and do not depend on a specific coordinate system, and so allow any choice of coordinates.

Geometric algebras represent translations, reflections, rotations, line and plane intersections, and all motions, kinematics, and transformations, without requiring coordinates or a choice of coordinate system. For example, for rotations, bivectors define the rotation plane, and bivector magnitude is rotation angle.
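In the plane, the even subalgebra (scalars plus bivectors) behaves like the complex numbers, since the unit bivector squares to -1. The sketch below models a planar rotor as a complex exponential; it illustrates the rotation idea rather than a full geometric-algebra implementation:

```python
import cmath
import math

# A rotor for angle theta in the plane, modeled as exp(1j * theta):
# the unit bivector plays the role of the imaginary unit.
theta = math.pi / 2
rotor = cmath.exp(1j * theta)

v = complex(1, 0)             # the vector (1, 0)
rotated = rotor * v           # approximately (0, 1): a 90-degree rotation
```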

2. Operations

Geometric-algebra operations are multivector addition and multivector geometric product. Geometric-algebra operations are linear.

Same-grade multivectors add to make a new multivector of that grade. For example, adding two vectors results in a vector.

Scalars multiply multivectors to change magnitudes but maintain orientations. For example, multiplying a scalar and a vector changes scale but not direction.

Multivectors of same or different grade multiply (geometric product) to make a multivector of lower, same, or higher grade, with different magnitude and orientation:

The geometric product of two vectors is the inner product (like the dot product) plus the outer product (like the cross product): a scalar plus a bivector (which has an orientation). The geometric product of a vector with itself is a scalar: its magnitude squared.

The outer product of three vectors is a trivector (which has an orientation).

The outer product of a bivector and a vector is a trivector; outer products whose combined grade exceeds the space dimension are zero. Grade-lowering combinations come from the inner product: in three dimensions, the inner product of a bivector and a vector is a vector, and the inner product of two bivectors is a scalar.

Geometric products allow dividing by invertible multivectors, such as nonzero vectors (invertibility). For example, dividing a trivector by a vector makes a bivector.
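A minimal geometric-product sketch: multivectors as dictionaries from basis-blade bitmasks to coefficients (bit k set means basis vector e_{k+1} is present), with a Euclidean metric. The representation and function names are illustrative, not a standard API:

```python
from itertools import product

def blade_sign(a: int, b: int) -> int:
    """Sign from reordering the basis vectors of blade a past blade b."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def geometric_product(x: dict, y: dict) -> dict:
    """Geometric product of two multivectors over a Euclidean metric."""
    out = {}
    for (ba, ca), (bb, cb) in product(x.items(), y.items()):
        blade = ba ^ bb                    # shared basis vectors square to +1
        out[blade] = out.get(blade, 0) + blade_sign(ba, bb) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# Vectors a = e1 + 2 e2 and b = 3 e1 + 4 e2 (blades: e1 = 0b001, e2 = 0b010).
a = {0b001: 1, 0b010: 2}
b = {0b001: 3, 0b010: 4}
ab = geometric_product(a, b)
# Scalar part 11 is the inner product; the e1^e2 part -2 is the outer product.
```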

3. Properties

The geometric product is associative by construction, so geometric algebras are associative algebras. Geometric algebras have an identity element: the scalar 1.

Invertible multivectors, such as nonzero vectors, have multiplicative inverses under the geometric product.

4. Dimension

Geometric algebra GA(3) has one scalar (0-vector) basis element, three 1-vector basis elements, three 2-vector basis elements, and one 3-vector basis element, so it has 8 basis elements (basis blades). Its dimension is 2^3 = 8.

Geometric algebra GA(n) has 2^n basis blades and so dimension 2^n. One scalar is the lowest-grade basis element. One n-vector (the pseudoscalar) is the highest-grade basis element.
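The 2^n count follows from summing binomial coefficients, since the grade-k blades number C(n, k); a quick check:

```python
from math import comb

# Total dimension of GA(n): sum over grades of the blade counts C(n, k),
# as for GA(3): 1 + 3 + 3 + 1 = 8.
def ga_dimension(n: int) -> int:
    return sum(comb(n, k) for k in range(n + 1))

dims = [ga_dimension(n) for n in range(6)]   # [1, 2, 4, 8, 16, 32]
```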

Geometric algebras have subspaces with subsets of the bases. Subspaces have lower dimension but the same space structure. Subspaces have orthogonal dual subspaces.

5. Space geometries

Geometric algebras can represent transformational, affine, conformal, or projective space geometries.

5.1. Transformational geometry

Transformational spaces have linear transformations: rotation, reflection, scaling, and shear. Distances, areas, and angles are invariant under the rigid transformations (rotations and reflections), though not under scaling or shear.

Matrix multiplications can account for all linear transformations.

Three-dimensional geometric algebra represents reflections, rotations, and all linear transformations.

5.2. Affine geometry

Affine spaces have linear transformations and also translations. Affine transformations preserve parallelism and ratios of lengths along parallel lines, though not, in general, distances, areas, or angles.

A transformational space can extend to have one more coordinate, along a dimension orthogonal to all other dimensions, fixed at value 1. Then its vectors (x, y, z) become (x, y, z, 1). Its matrices gain one more row and column: the extra column holds the translation, and the extra row is (0, 0, 0, 1). Matrix multiplications in such extended spaces can account for all linear transformations and for translations, because linear transformations in the extended space are equivalent to affine transformations in the original dimensions.
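A homogeneous-coordinate translation can be sketched as a 4x4 matrix acting on extended vectors (pure Python, with illustrative helper names):

```python
# Translation as a linear map in homogeneous coordinates: the 4x4 matrix's
# last column carries the translation; points carry a final coordinate 1.
def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

p = [2, 3, 4, 1]                        # the point (2, 3, 4)
q = mat_vec(translation(10, 0, -1), p)  # [12, 3, 3, 1]
```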

Three-dimensional geometric algebra with an additional (homogeneous) coordinate represents all linear transformations and translation.

5.3. Conformal geometry

Conformal spaces have invariant angles, but not invariant distances or areas. Conformal spaces have linear transformations and translations and have conformal operations.

A transformational space can extend to have two more points, in opposite directions along a dimension orthogonal to all other dimensions. For example, conformal space can have a point at infinity and a second point at negative infinity. Matrix multiplications in such extended spaces can account for all conformal operations.

Conformal geometric algebra uses five-dimensional Minkowski space and represents all linear transformations, translation, and conformal operations. Geometric objects can have up to five dimensions: points, curves (including splines), surfaces (including polygons), volumes (including polyhedrons), and geometric objects made of parts with same or different dimensions (simplicial complexes). Conformal geometric algebra describes geometric-object intersections (meet) and unions (join). Geometric algebras GA(n) can represent (n-2)-dimensional conformal space, which includes the extra points.

5.4. Projective geometry

Projective spaces do not have invariant lengths, areas, or angles. Projections are not affine, because values depend on distance from observer. Projective spaces have linear transformations and translations and have projections.

A transformational space can extend to have a dimension orthogonal to all other dimensions. Using homogeneous coordinates, its vectors (x/w, y/w, z/w) become vectors (x, y, z, w). Note that if w = 0, (x, y, z) is at infinity (point at infinity). Its matrices add a row or column. Matrix multiplications using homogeneous coordinates in such extended spaces can account for linear transformations, translations, and projections, because linear transformations using homogeneous coordinates in such extended spaces are equivalent to projective transformations in the original dimensions.
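The perspective divide can be sketched directly; the project helper below is illustrative:

```python
# Homogeneous coordinates with a perspective divide: (x, y, z, w) maps to
# (x/w, y/w, z/w), and w = 0 marks a point at infinity.
def project(x, y, z, w):
    if w == 0:
        return None              # point at infinity: no finite image
    return (x / w, y / w, z / w)

finite = project(4.0, 6.0, 8.0, 2.0)       # (2.0, 3.0, 4.0)
at_infinity = project(1.0, 0.0, 0.0, 0.0)  # None
```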

Three-dimensional geometric algebra with one additional (homogeneous) coordinate represents all linear transformations, translation, and projection. Geometric algebras GA(n) can represent (n-1)-dimensional projective space.

Networks and graphs

Networks and graphs have lines that intersect at points. Figure 10 shows a planar graph for the 8 vertices and 12 edges of a box. A planar graph has vertices (nodes) connected by lines (edges).

Other examples of two-dimensional networks and graphs are tree diagrams, electrical circuits, and woven cloth, with vertical warp threads and horizontal woof (weft) threads that intersect perpendicularly.

Networks and graphs can have any number of dimensions.

1. Nodes and edges

Network points (nodes) can represent geometric-object types, states, particles, fields, forces, energies, momenta, angular momenta (spins), or positions.

Network lines (edges) connect nodes. Edges can have a direction (orientation), with a beginning node and an ending node. Edges can have discrete or continuous values. Edges can represent geometric-object surfaces, particle-state transitions, or field, force, energy, momentum, or angular-momentum changes.

As a physical geometric example, in loop quantum gravity, spin networks describe particle quantum states and their transitions and so model both quantum mechanics and general relativity. Spin networks have nodes, for all possible particle states, and edges, for all possible transitions from first state to second state. Edges have discrete spin-change values. (Spin foams model electronic transitions and particle interactions. They are spin networks that evolve in quantum time. Spin-foam nodes are space-time lines, and edges are space-time surfaces.)

2. Loops

Going from a network node to other nodes and back to the starting node (loop) typically goes around the perimeter of one planar geometric figure. At any node, a loop can also take a path out and back to that node (kink). If two planar geometric figures intersect (knot), a loop can go around two planar geometric figures.

Going around loops can describe forces, energies, fields, or space-time curvatures. As a physical geometric example, in loop quantum gravity, spin-network loops represent (transferable) spin-1 or spin-2 bosons that mediate quantum-mechanical particle interactions or general-relativistic attractive and repulsive force fields. Spin-network-loop areas are multiples of Planck area (never zero) and represent quantum energy levels, from ground state up.

3. Duals

In many geometries, theorems about a geometric-figure type, such as lines, have corresponding (dual) theorems about another geometric-figure type, such as points. For example:

In two-dimensional projective geometry, points and lines are duals.

In three-dimensional projective geometry, planes and points are duals.

In three-dimensional space, lines bound surfaces, and surfaces bound lines.

As a physical geometric example, in loop quantum gravity, a spin network's dual is general-relativistic local space-time curvature, which changes particle spin as a particle passes through space.

4. Dual networks

For a network/graph that can have a dual, constructing a dual transforms network nodes into dual-network edges and network edges into dual-network nodes, or transforms network node-and-edge combinations into dual-network edge-and-node combinations, while maintaining the same topology.

Two networks are duals of each other if:

Both networks have exactly two element types. For example, the first network has only points and lines, and the second network has only volumes and surfaces (or lines and points).

Each first-network-element type, such as lines, corresponds to a second-network element of different type, such as points.

Both networks have the same topology.

A first-network property-or-activity value, such as length or angle, is proportional to a second-network property-or-activity value, such as length or angle.

For example, in graph theory, all graph nodes become dual-network edges, and all graph edges become dual-network nodes. The dual graph has a vertex for each edge cycle (face) and has lines between vertices of adjacent regions (across planar-graph edges). Figure 11 shows the non-planar dual of the planar graph for a box. It has 6 vertices for the 6 faces, and 12 edges for the 12 face adjacencies.
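The box example can be sketched by constructing the face-adjacency dual directly (the face names and the opposite table are illustrative):

```python
# The dual of the box (cube) graph: one node per face, one edge per pair
# of adjacent faces, matching the 6 vertices and 12 edges of Figure 11.
faces = ["top", "bottom", "front", "back", "left", "right"]
opposite = {"top": "bottom", "bottom": "top", "front": "back",
            "back": "front", "left": "right", "right": "left"}

# Every pair of non-opposite faces of a box shares an edge, so the pair
# is adjacent and contributes one dual edge.
dual_edges = {frozenset((f, g)) for f in faces for g in faces
              if f != g and opposite[f] != g}

n_dual_vertices = len(faces)     # 6: one node per face of the box
n_dual_edges = len(dual_edges)   # 12: one edge per face adjacency
```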

A network and its dual can have different dimensions:

A planar graph can transform into a three-dimensional geometric figure (Figure 11). Nodes represent figure faces, and edges represent adjacency of faces. Associating values with nodes specifies face areas. Alternatively, associating values with edges specifies face diameters/widths/heights.

A three-dimensional network can transform into a three-dimensional spatial layout of points, lines, surfaces, and regions, and vice versa (Figure 12).

Figure 12 shows a point (zero dimensions) with three perpendicular line segments (one dimension) through it. This graph can represent a box (three dimensions) with six faces (two dimensions), if graph line segments end at face center points. Associating a value with the point (node) specifies box volume. Alternatively, associating values with lines (edges) specifies face areas.

Spatial datatypes can represent all networks and graphs.

Date Modified: 2025.0826