Information from light, sound, liquid chemicals, air chemicals, temperature, pressure, and motion stimulates sense receptors, which change neuron states, which affect brain states {sensation}|. Sensation is local and does not establish current environment or organism state.
types
Senses {sense} are carotid body, defecation, hearing, hunger, kinesthesia, magnetism, nausea, pleasure, pain, smell, taste, thirst, touch, urination, vestibular system, and vision.
properties
Sensations have intensities, qualities, times, and locations. Vision spectrum has one octave with no higher harmonics, colors mix, and area fills. Hearing spectrum has ten octaves, pitches do not affect each other, and area does not fill. Touch uses cell translation to indicate pressure and stress. Smell and taste use vibrations to indicate bonding.
Sense-property matrix shows properties that senses share and how property values vary among senses. Matrix columns are senses: vision, hearing, touch, temperature, kinesthesia, vestibular system, smell, and taste. Matrix rows are sense space, time, intensity, and frequency categories. For space, categories are inside-body/outside-body and continuous/discrete. For time, categories are fade/not-fade and continuous/discrete. For intensity, categories are low-magnitude/middle-magnitude/high-magnitude. For frequency or quality spectrum, categories are blending/not-blending and one-octave/more-octaves. Sensations relate two or more separated points within one psychologically simultaneous time interval and so are non-local.
similarities
Different senses have similar sense qualities. Sound and vibration are similar, because sound is fast vibration. Hearing, temperature, and touch involve mechanical energy.
Whites, grays, and blacks relate to temperature, as do warm and cool colors. White relates to vibration as noise. Sight affects balance.
Smell and taste mix. Sight, taste, and smell use chemical reactions. Smell and fluid-like touch mix. Taste and fluid-like touch mix.
causes
Sensations depend on physical light-frequency ranges; sound-frequency ranges; taste-molecule acidities and polarities; smell-molecule shapes, sizes, and vibrations; temperature increases and decreases; or tension, torsion, and compression changes.
biology
Sensation involves cerebellum, inferior occipital lobe, inferotemporal cortex, lateral cerebellum, and ventral system. Sensation can vary neuron number, diameter, length, type, molecules, membranes, axons, recurrent axons, dendrites, cell bodies, receptors, channels, and synapses. Sensation can vary neuron spatial arrangement, topographic maps, neuron layers, and neuron networks. Neurons can vary firing rates, sums, thresholds, neurotransmitter packets per spike, packet sizes, synapse shapes, and synapse sizes. Dendrites and axons can have different numbers, lengths, connections, and patterns to detect sequences, shapes, functions, and relations.
biology: sensors
Sensor properties match stimuli, and sense-surface events mirror physical-object events. Light sensors form pigment surface, and physical surfaces have pigments. Sound sensors vibrate at same frequencies as source vibrations. Touch receptors have strains, and skin surfaces have strains. Taste and smell receptors are molecules that are complementary to sensed molecules.
biology: network
Human nervous systems have integrated central and peripheral nerves that form a three-dimensional network, a space lattice. Variable lattice spacing can make space continuous. Perhaps, lattices have write and read connectors, like touch screens or magnetic-core memories.
biology: carrier wave
A carrier wave with constant amplitude and frequency can have frequency modulation (at higher frequencies) or amplitude modulation (at smaller amplitudes). Perceptual cortex appears to have physical carrier waves with frequencies of 20 to 80 Hz, on which amplitude-modulation patterns occur to represent perception. Sensory inputs form the carrier wave.
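A minimal sketch of amplitude modulation on a low-frequency carrier, assuming an illustrative 40 Hz carrier within the 20-to-80-Hz range mentioned above; the envelope frequency and modulation depth are arbitrary choices for demonstration, not measured cortical values.

```python
import numpy as np

# Illustrative only: a 40 Hz carrier (within the 20-80 Hz range above) whose
# amplitude is modulated by a slower envelope, the way amplitude-modulation
# patterns are said to ride on a perceptual carrier wave.
fs = 1000.0                                        # samples per second
t = np.arange(0, 1.0, 1 / fs)                      # one second of time
carrier = np.sin(2 * np.pi * 40 * t)               # constant-frequency carrier
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)   # slow 4 Hz amplitude pattern
signal = envelope * carrier                        # amplitude-modulated wave

# The carrier frequency is unchanged; only its amplitude varies over time.
print(round(float(signal.max()), 2), round(float(signal.min()), 2))
```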
space
Skin-surface touch receptors can detect space contours. Muscle and tendon proprioception receptors can detect space distances and angles. Smell and taste systems work with touch skin-surface receptors. Touch, proprioception, smell, and taste systems make body-periphery space. Hearing can locate sounds in space outside body. Vision can locate objects in space outside body and measure distances and angles. Human brains connect outside space to body-periphery space, to make egocentric space.
evolution
Senses evolved to detect energy types. Sense receptors evolved to capture the most-useful stimuli. Brain evolved to represent the most-useful information. Body structures and processes evolve from previous designs, which constrain evolution. Evolution has no plan or pattern. First sense responded to high-intensity physical energy, was undifferentiated, and caused avoidance, withdrawal, or approach behavior.
Perception attributes have active neuron groups {activity principle, perception}.
Specific sense qualities need specific brain regions {essential node} [Adolphs et al., 1999] [Zeki, 2001].
Senses can work alone {unimodal perception}. At most neuraxis levels, sense inputs converge {intermodal perception}. Object relationships depend on intermodal connections, not just vision. Taste and smell, and touch and kinesthesia, have strong connections.
animals
All animals use intermodal and unimodal perception. Humans and apes recognize objects through fast intermodal processes and slower unimodal processes.
effects
Intermodal is better than unimodal for response reliability, impulse number, peak impulse frequency, and discharge-train duration. Intermodal sense associations can anticipate sequential stimuli from different sense modes.
learning
Learning in one sense does not transfer to another sense.
development
At human, ape, and monkey birth, object perception does not separate input into separate senses, uses one process involving all senses, and does not analyze features. Later, humans separate stimuli into different senses by cerebral-cortex inhibitory mechanisms, analyze sense features using symbols, and then combine features intermodally. For example, vision-cortex lip-movement analysis and auditory-cortex tone-and-sound-location analysis coordinate.
intelligence
Children with intellectual disabilities or dyslexia have more difficulty with multisensory stimuli than with unimodal stimuli.
Visual and phenomenal spaces {space and senses} are bounded three-dimensional manifolds, with objects and events.
length units
Distances between retinal ganglion cells make fundamental visual-length units. See Figure 1.
angle units
Fundamental length units establish angle units.
triangulation and distances
Using length and angle units, triangulation can find planar distances. See Figure 2.
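A minimal sketch of planar triangulation, assuming a known baseline (the length unit) and two measured base angles; the function name and example values are illustrative only.

```python
import math

def triangulate(baseline, angle_left, angle_right):
    """Perpendicular distance from the baseline to a target point, given the
    two base angles (radians) of the triangle formed by the baseline endpoints
    and the target, via the law of sines."""
    apex = math.pi - angle_left - angle_right
    side_left = baseline * math.sin(angle_right) / math.sin(apex)  # law of sines
    return side_left * math.sin(angle_left)                        # height above baseline

# Example: 1-unit baseline, both base angles 60 degrees (equilateral triangle),
# so the distance is sqrt(3)/2, about 0.866 units.
print(round(triangulate(1.0, math.radians(60), math.radians(60)), 3))
```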
intensities and distances
Perhaps, stimulus intensity versus distance follows a sigmoid curve. See Figure 3.
convergence and distance
For all senses, stimuli are in larger space, and signals converge on smaller neuron arrays. See Figure 4.
translation matrix and distance
From distance information, topographic-map local neuron assemblies calculate translation matrices that place oriented surfaces away from brain at space points.
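A minimal sketch of a translation-and-orientation matrix in homogeneous coordinates that places an oriented surface patch at a point in space; the 4x4 form is a standard graphics/robotics convention used here for illustration, not a claim about the brain's literal representation.

```python
import numpy as np

def place_surface(normal, point):
    """4x4 homogeneous transform that orients a surface patch so its local
    z-axis lies along `normal` and translates it to `point`."""
    z = np.asarray(normal, float)
    z = z / np.linalg.norm(z)
    # Pick any vector not parallel to z to build an orthonormal frame.
    helper = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(helper, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = x, y, z   # orientation columns
    m[:3, 3] = point                         # translation column
    return m

# Place a patch facing the viewer, two units straight ahead.
print(place_surface([0, 0, 1], [0, 0, 2]).round(2))
```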
timing mechanism
Brain timing alternates excitation and inhibition. See Figure 5.
mass center
For flexible structures with only internal forces, mass center does not move. Outside forces move mass center. See Figure 6.
topographic maps
Retinal ganglion cells, thalamic neurons, and cortical neurons form arrays with equal spacing between neurons. See Figure 7.
surface orientation
Surfaces perpendicular to sightline have highest intensity. Surfaces at smaller angles have lower intensities. See Figure 8.
Surfaces perpendicular to light-source direction have highest intensity. Surfaces at smaller angles have lower intensities. See Figure 9.
processes: spatial and temporal relations
Modified ON-center and OFF-center neurons can detect spatial and temporal relations. For example, neuron can have horizontal band at center to detect space between two objects, band above to detect object above, or band below to detect object below.
processes: spatial layout
For positions, features, objects, scenes, and events, observing systems use object and object-property placeholder configurations to represent spatial layouts. Object and object property placeholders include smooth texture, rough texture, enclosed space, and open space. Observing systems replace object and object property placeholders with values.
Mathematical functions can represent spatial layouts. Functions with parameters or roots can describe surface and region boundaries. Waves with parameters and/or samples can describe functions and repeating or cyclic perceptions. Distributions with samples can describe surfaces and regions. Space distances and angles can describe shapes and patterns.
processes: space and time development
Body movements cause correlated sensations. As babies move body and limbs, they encounter air, fluid, and solid surfaces, including own body. For example, walking and running establish airflow gradient from front to back. Correlating sensations and movements, brain builds position and relation memories and places surfaces in body-centered space. From surfaces, brain builds horizontal ground, front and back, up and down, right and left, vertical, straight-ahead, and across. From directions and coordinates, brain learns what happens when body moves from place to place and so locates body parts and surfaces in space.
From time-interval information, brain builds before-and-after memories and makes event sequences. From sequences, brain builds overall sequence and absolute beginning and end and past and future. From time coordinates, it learns what happens when it moves from time to time and locates body parts and surfaces in time.
properties: three dimensions
Midbrain tectum and cuneiform nucleus map three-dimensional space using multimodal neurons, whose axons envelop reticular thalamic nucleus and other thalamic nuclei. Spatial processing involves frontal lobe.
properties: continuity
Perceptual space never breaks into discrete parts during movement or blinking. Space has no twinkling, vibration, or oscillation. Perceptual space has no discontinuities. Visual processing occurs at neuron scale, but perceptions have much greater scale. Neuron assemblies overlap and represent different sizes. Visual processes add and decay over time. Visual processing averages over time and space.
Sensation type depends on special neurons {doctrine of specific nerve energies} {specific nerve energies doctrine} {specific nerve energies law} {law of specific nerve energies}, not on what stimulates them. The applied physical energy does not matter. Stimulating retina with light or pressure makes only sights. Sending sense receptor signals to, or electrically stimulating, nerve fibers makes only one sensation.
Perception evolved, from Protista to humans {perception, evolution} {evolution, perception}.
protozoa
Stimulus Detection: Cell-membrane receptor molecules respond to pressure, light, or chemicals.
Potential Difference: Cell-membrane ion channels actively transport ions across membrane, to build concentration gradients and set up electric-voltage differences, and open and close to vary membrane potential locally.
marine metazoa
Neurons and Glands: Ectoderm develops into sense receptors, nerves, and outer skin. Mesoderm develops into muscles and glands, which release hormones to regulate cell metabolism. Endoderm develops into digestive tract.
Neuron Coordination: Sense receptors and neurons have membrane electrical and chemical connections, allowing information transfer and cell coordination.
Nerve Excitation: Excitation raises membrane potential to make reaching impulse threshold easier or to amplify stimuli.
Nerve Inhibition: Inhibition damps competing weaker stimuli to leave stronger stimuli, or more quickly damps neuron potential back to resting state to allow timing.
Bilateria
Bilateral Symmetry: Flatworms have symmetrical right and left sides and have front and back.
Ganglia: Neuron assemblies are functionally organized.
deuterostomes
Supporting Systems: Deuterostome embryos have enterocoelom; separate mouth, muscular gut, and anus; and circulatory system. Embryo inner tube opens to outside at anus, not head.
Chordata
Body Structure: Larval and adult stages have notochord and elongated bodies, with distinct heads, trunks, and tails and repeated body structures.
Nervous System: Chordates have head ganglion, dorsal hollow nerve, and peripheral nerves.
Reflexes: Sense receptors send electrochemical signals to neurons, which relay electrochemical signals to other neurons, which signal muscle or gland cells, to make reflex arcs.
Interneurons: Interneurons connect reflex arcs and other neuron pathways, allowing simultaneous mutual interactions, alternate pathways, and networks.
Association: Interneurons associate pathway neuron states with other-pathway neuron states. Simultaneous stimulation of associated neurons modifies membrane potentials and impulse thresholds.
Attention: Association allows input acknowledgement and so simple attention.
Circuits and Sequences: Association series build neuron circuits. Outside stimulation causes electrochemical signal flows and enzyme releases. Circuit flows calculate algorithms and spread stimulus effects over time and space. Circuits have signal sequences. Circuit sets have signal patterns.
Receptor and Neuron Arrays and Feature Detection: Sense-receptor and neuron two-dimensional arrays detect spatial and temporal stimulus-intensity patterns, and so constancies, covariances, and contravariances over time and/or space, to find curvatures, edges, gradients, flows, and sense features.
Topographic Maps and Spatial and Temporal Locations: Neuron arrays are topographic, with spatial layouts similar to body surfaces and space. Electrochemical signals stay organized spatially and temporally and so carry information about spatial and temporal location. Topographic maps receive electrochemical-signal vector-field wave fronts, transform them using tensors, and output electrochemical-signal vector-field wave fronts that represent objects and events.
Memory: Secondary neuron arrays, maps, and circuits store associative-learning memories.
Recall: Secondary neuron arrays, maps, and circuits recall associative-learning memories, to inhibit or excite neuron arrays that control muscles and glands.
vertebrates/fish
Brain: Hindbrain has motor cerebellum and sleep, wakefulness, and sense ganglia. Midbrain has sense ganglia. Forebrain has vision occipital lobe, hearing-equilibrium temporal lobe, touch-temperature-motor parietal lobe, and smell frontal lobe.
Balance: Vestibular system maintains balance.
fresh-water lobe-finned fish
Hearing: Eardrum helps amplify sound.
amphibians
Early amphibians had no new sense or nervous-system features.
reptiles
Cortex: Paleocortex has two cell layers.
Vision: Parietal eye detects infrared light.
anapsids, diapsids, synapsids, pelycosaurs, pristerognathids
Early anapsids, diapsids, synapsids, pelycosaurs, and pristerognathids had no new nerve or sense features.
therapsids
Hearing: Outer ear has pinna.
Thermoregulation: Therapsids have thermoregulation.
cynodonts, Eutheria, Tribosphenida, monotremes, Theria
Early cynodonts, Eutheria, Tribosphenida, monotremes, and Theria had no new nerve or sense features.
mammals
Neocortex: Neocortex has four cell layers.
Vision: Vision sees color.
Stationary Three-Dimensional Space: Vision has fixed reference frame and stationary three-dimensional space.
insectivores
Vision: Forward vision has eyes at face front, and eye visual fields overlap.
primates, prosimians, monkeys
Early primates, prosimians, and monkeys had no new nerve or sense features.
Old World monkeys
Vision: Vision is trichromatic.
apes
Vision: Chimpanzees and humans over two years old can recognize themselves using mirror reflections and can use mirrors to look at themselves and reach body-surface locations.
anthropoid apes
Frontal Lobes: Neocortex frontal lobes are about memory and space, planning and prediction.
hominins
Multisensory Cortex: Neocortex has multisensory regions and two more cell layers, making six layers.
humans
Brain: Frontal lobes have better spatial organization. Parietal lobes have better communication. New associational cortex is for perception and motion coordination.
Language: Neocortex has language areas.
Posture, movement, and pain perception {inside sense, field} detect stimuli from inside body.
Sight, hearing, touch, taste, and smell perception {outside sense, field} detect stimuli from outside body.
Brain processes make sensations {sense, physiology}. Intensity is about amplitude, flux, and energy. Spatial location and extension are about size, shape, motion, number, and solidity. Time interval is about sequences, frequency, and before and after. Quality is about timbre.
physiology
Senses measure intensive quantities (pressure, temperature, concentration, sound, and light) using receptors that accumulate energy, an extensive quantity, on small surfaces over time intervals. Absorbed energy displaces mass and electric charge and becomes potential energy. Sense-cell altered-molecule potential energies can transfer energy to other molecules. Light-energy absorption changes retinal-receptor-molecule atom arrangements. Sound-energy absorption moves inner-ear hair-cell hairs and basilar membrane. Mechanical energy absorption stretches skin touch receptors. Heat energy absorption or loss moves cell receptor membrane in skin hot-or-cold receptor cells. Mechanical-energy absorption by smell and taste receptors bonds molecules to receptors and alters molecule atom arrangements.
Senses analyze signal-wave amplitude, phase, and frequency differences and ratios to make spatial, temporal, intensity, and frequency patterns. Information flows represent intensive quantities.
To detect, neurons can sum inputs to add and pass thresholds. To sum, neurons can take continued sums and so perform integration. To model physical interactions, neurons can add logarithms to multiply. To find solutions, factors, probabilities, combinations, and permutations, neurons can sum logarithms to find continued products. To perform algebra and calculus operations, neuron assemblies calculate sums, differences, products, divisions, mu operations, differentials, integrals, exponentials, and logarithms. To perform geometric operations, neuron assemblies calculate rays, splines, lines, lengths, distances, angles, boundaries, areas, regions, region splits, region joins, volumes, triangulations, and trilateralizations. To use spaces, neuron assemblies detect coordinates, directions, coordinate origins, spatial positions, vectors, matrices, tensors, symmetries, and groups. To use objects, neuron assemblies detect self, not-self, patterns, features, objects, and object relations.
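A minimal sketch of two of the operations listed above, assuming idealized units that carry log-coded or sampled signals: multiplication by summing logarithms, and integration as a continued sum over time steps.

```python
import math

# Multiplication by adding logarithms: if signals are log-coded, a summing
# unit followed by exponentiation multiplies the original values.
def multiply_via_logs(values):
    return math.exp(sum(math.log(v) for v in values))

print(round(multiply_via_logs([2.0, 3.0, 5.0]), 6))   # 30.0

# Integration as a continued sum: accumulate sampled input over time steps.
def integrate(samples, dt):
    total = 0.0
    for s in samples:
        total += s * dt
    return total

print(round(integrate([1.0] * 100, 0.01), 6))   # 1.0, area under a constant input
```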
signals
Electrical signals can vary in amplitude, speed, frequency carried, rate, noise, sensitivity, threshold, attack and decay slope, phase, integration, dissemination, feedback, feedforward, control, querying, alternation, regulation, filtering, and tuning. Chemical signals can vary in type, concentration, diffusion, active transport, release, packet size, reactivity, and energy release.
signals: continuous/discrete
Brain has discrete neurons, neurotransmitter packets, nerve impulses, and molecules. Discrete processes can transfer and store information without degradation, perform logic operations, and represent categories.
Sense stimuli are discrete. Light is a photon stream. Sound is a phonon stream. Smells and tastes have individual molecule binding. Temperature and pressure are individual molecule movements. Receptors convert stimulus energy into ion and molecule motions. However, particles are small and many, and act on millisecond time scales. Over macroscopic space and time, stimuli appear continuous in intensity, spatial location and extension, time location and duration, and quality.
signals: vibrations
Touch receptors can detect mechanical vibrations up to 20 to 30 hertz, which are also the lowest frequency vibrations detected by hearing receptors. Below 20 Hz, people feel pressure changes as vibration, rather than hearing them as sound. Images flashed at a 20-Hz rate begin to blend. 20 Hz is also the maximum breathing, muscle-flexing, and harmonic-body-movement rate. Muscle contractions up to 20 times per second make "butterflies" in tummy, trembling with anger or fear, damping of depression, or excitations of joy. Animals can have spring-like devices that allow higher muscle-vibration rates.
effects
Sensations tend to cause reflex motor actions, which brain typically suppresses. Sensations excite and inhibit brain processes.
Sensations from voluntary muscles provide feedback after actions, for reward and punishment [Aristotle, -350].
measurement
Brain can measure relative and absolute distances, times, masses, and intensities. Measurements have accuracy, precision, reproducibility, selectivity, and sensitivity.
measurement: units
Mass, length, and time are fundamental measurements. During development, brain measures intensity ratios to build measurement units. Brains calculate distances using triangulation, linear perspective, and geometry [Staudt, 1847] [Veblen and Young, 1918]. Brain can detect distance differences of one degree of arc. Brains can measure mass by linear or angular acceleration or by moment around axis, using combined sight and touch. Brain can detect mass difference of 100 grams. Perhaps, some neurons signal at millisecond and longer intervals to provide brain clocks for time measurement. Brain can detect time difference of 0.03 milliseconds.
measurement: accumulator
To measure extensive quantities, chemical or electrical accumulators can sum an intensive quantity sampled over time or space.
measurement: contrast
Neurons perceive relative intensity differences and intensity ratios. For example, eye receptors respond mainly to illumination changes, not to steady light. Receptors detect change over time. Receptor pairs detect differences over space.
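A minimal sketch of change-over-time and difference-over-space detection, assuming simple numeric intensity samples; a real receptor adapts continuously rather than differencing discrete frames.

```python
# Temporal contrast: respond to the change in intensity between samples.
def temporal_contrast(intensities):
    return [b - a for a, b in zip(intensities, intensities[1:])]

# Spatial contrast: a receptor pair responds to the difference across space.
def spatial_contrast(left, right):
    return right - left

# Steady light gives no temporal response; a step change gives a burst.
print(temporal_contrast([1, 1, 1, 4, 4, 4]))   # [0, 0, 3, 0, 0]
print(spatial_contrast(2.0, 5.0))              # 3.0
```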
processes
Perception factors stimuli into irreducible features, objects, and events.
processes: paths
Complex systems have enough parts, connections, and subsystems to have and regulate internal flows. Brain has a central flow and many other pathways and circuits. Central processing stream uses synchronized sequential signals, with feedback, feedforward, and other regulatory signals. Reticular activating system and brainstem start depolarization streams and so are basic to consciousness. Cerebrum constructs streams of consciousness.
processes: test signals
Like radar or sonar, brain scanning sends parallel signals through brain regions to obtain return-signal patterns.
processes: space
Neurons detect constants, variables, first derivatives, and second derivatives to determine distances and times and so create space and time, using extrapolation, interpolation, differentiation, integration, and optimization.
processes: motion minimization
Brain spatial and time coordinates minimize and simplify object motions, and number of objects to track, using fixed reference frames. Fixed reference frames make most object motions two-dimensional straight-line motions, which aid throwing and catching. In moving reference frames, more objects appear to move, and motions are three-dimensional curves.
processes: nulling
In size-weight illusions, mass discrimination seems to use nulling. Nulling can explain Weber-Fechner stimulus-sensation law.
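A minimal sketch of the Weber-Fechner relation referred to above, assuming a constant Weber fraction k: just-noticeable differences are a fixed proportion of intensity, so perceived magnitude grows with the logarithm of stimulus intensity. The threshold and k values are illustrative.

```python
import math

def weber_fechner(intensity, threshold=1.0, k=0.1):
    """One common parameterization: S = (1/k) * ln(I / I0), so each
    just-noticeable step multiplies intensity by roughly (1 + k)."""
    return math.log(intensity / threshold) / k

# Equal intensity ratios give equal steps in sensation.
print(round(weber_fechner(10.0), 2))    # ~23.03
print(round(weber_fechner(100.0), 2))   # ~46.05, double, for two 10x ratios
```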
processes: operations
Local sensory operations involve finding boundary, determining boundary orientation, increasing contrast, decreasing similarities, and detecting motion [Clarke, 1995]. Global sensory operations involve head and body movements, object trajectories, feature comparisons, and event sequences.
processes: resonation
To resonate, neuron pairs excite interneuron, which excites both neurons equally, while each paired neuron inhibits other paired neuron. If paired neurons fire asynchronously, interneuron signal has low amplitude and no consistent frequency. If paired neurons fire synchronously, interneuron signal has high amplitude at input-signal frequency. Changing number of neurons and synapses traversed, or changing axon lengths, changes frequency.
Resonance detects synchronicity and so association. Interneurons can send resonating signals forward to other neurons.
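A minimal sketch of the paired-neuron arrangement above as a toy coincidence detector: the interneuron sums both inputs while the paired neurons inhibit each other, so synchronous firing yields a large signal and asynchronous firing cancels. This is an arithmetic caricature, not a biophysical model.

```python
import numpy as np

def interneuron_response(spikes_a, spikes_b):
    """Sum of paired inputs minus mutual inhibition: synchronous spikes
    produce a large combined signal, asynchronous spikes cancel."""
    excitation = spikes_a + spikes_b          # interneuron sums both inputs
    inhibition = np.abs(spikes_a - spikes_b)  # paired neurons inhibit each other
    return excitation - inhibition

synchronous = np.array([1, 0, 1, 0, 1, 0])
shifted = np.array([0, 1, 0, 1, 0, 1])

print(int(interneuron_response(synchronous, synchronous).sum()))  # 6: strong response
print(int(interneuron_response(synchronous, shifted).sum()))      # 0: no net response
```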
processes: sampling
Body moves sense organs to sample different space regions over time. Directed movements gain information about critical features in critical locations at critical times. Birds and other animals move and then pause, every few seconds, to gather information [Matthews, 1973].
Perhaps, sampling uses attention mechanisms to decide to which location to move. Perhaps, sampling uses production systems to decide what to sample next. Perhaps, sampling uses template matching to recognize or categorize samples.
processes: statistics
Sense processing uses many neurons and so uses statistics.
processes: synchronization
Resting neurons send signals that adjust synapse properties and axon lengths, to coordinate timing among neuron sets. Synchronized signals lengthen or shorten pathways and quicken or slow synapses, to align time and space metrics.
processes: tensor
Sense-organ-receptor-, neuron-, and motor-neuron-array inputs are scalar or vector fields. The array uses a tensor function to transform the input field into a new output vector field. Output vector field goes to cortical analysis or muscle and gland cells. Muscle cells contract in one direction with varying strength. Muscle-contraction vector fields have net contraction.
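A minimal sketch of an array applying a linear (rank-2 tensor) operation to an input vector field to produce an output vector field; the rotation matrix and field values are arbitrary illustrations, not a model of any particular neural array.

```python
import numpy as np

# Each array position holds an input vector (e.g., a local stimulus direction).
input_field = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [1.0, 1.0]])

# A rank-2 tensor (matrix) applied at every position: here a 90-degree
# rotation, standing in for whatever transform the array implements.
tensor = np.array([[0.0, -1.0],
                   [1.0,  0.0]])

output_field = input_field @ tensor.T   # transform every vector

print(output_field)                     # transformed field
print(output_field.sum(axis=0))         # net output, e.g., net contraction direction
```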
processes: timing
Brain neurons can send time signals at regular millisecond and/or longer intervals to act as clocks. Brain-timing-mechanism oscillation phases or periods can time perceptual events and body movements. At different times and positions, brain clocks run at different speeds for different purposes [Bair and Koch, 1996] [Bair, 1999] [Marsálek et al., 1997] [Nowak and Bullier, 1997] [Schmolesky et al., 1998].
Accumulation processes, such as adding energy units, can record time passage. Decay processes, subtracting energy units from total, can record time passage. Cycles can measure intervals between peaks. Tracking times requires processes that persist over time and whose later states causally depend on earlier states.
processes: wave modulation
Nerve signals can use wave-frequency modulation and wave-amplitude modulation to represent frequency and intensity.
processes: whole body
Brain, peripheral nervous system, and motor system interconnect, and sense qualities involve brain and body. For example, stroking skin can make people feel sense qualities in other body locations. Music and visual patterns can evoke whole body changes. Moods integrate senses, motor system, and body into overall feelings. Surprised people draw in breath and pull back, because drawing in breath helps one pull back, and body pulls away from what is in front.
speed
Brain processes sounds faster than sights. Brain processes colors faster than shapes. Action pathway is faster than object-recognition pathway. Brain calculates eye movements faster than voluntary movements [Revonsuo, 1999].
speed: information processing rate
Neuron information-processing rate is 40 bits per second. Ear information capacity is 10,000 bits per second. Eye can see 50 images per second, so eye information capacity is 500,000 to 600,000 bits per second.
Previous cell stimulation {adapting stimulus} reduces cell response {adaptation, sensation} {sensory adaptation} {sense adaptation}. Receptors have fewer biochemical reactions {receptor adaptation}, because cell has fewer energy storage molecules and cells make energy molecules more slowly than they use them. Receptors have lower cell-membrane potential gradients, because ions have flowed through membrane channels and active transport is slower than ion flow through open ion channels. After adapting stimulus ceases, cells increase sensitivity and responses.
Monitoring heart rate electronically {biofeedback}| allows learning voluntary heart-rate control.
Neurons can receive from two eye, ear, or other-sense neurons and detect time, space, or intensity differences. For two spatial positions, cells detect ear time difference {characteristic delay}, eye spatial difference, or smell, taste, or touch concentration or pressure difference.
Sense qualities {quality, sense} depend on opponent and categorization processes.
sum
ON-center neuron can add inputs from two neurons. Brightness depends on adding.
opponent processes
ON-center neuron can receive input from two neurons. Input from one neuron subtracts from input from other neuron. Human color vision uses such opponent processes. (Opponent-process opposites have same information as opponent process.)
continuous
Sum and opponent processes make continuous scales. For example, values can range from +1 to -1.
categorization
To divide ranges into intervals and make discrete categories, neurons use thresholds [Damper and Harnad, 2000]. Comparing different opponent processes can filter to make categories.
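A minimal sketch combining the sum, opponent, and threshold steps above: two inputs make a continuous opponent value between -1 and +1, and thresholds cut it into discrete categories. The category names and cut points are illustrative assumptions.

```python
def opponent(a, b):
    """Continuous opponent value: +1 when input a dominates, -1 when b does."""
    total = a + b
    return 0.0 if total == 0 else (a - b) / total

def categorize(value, thresholds=(-0.33, 0.33)):
    """Thresholds divide the continuous range into discrete categories."""
    low, high = thresholds
    if value < low:
        return "category B"
    if value > high:
        return "category A"
    return "neutral"

print(categorize(opponent(9.0, 1.0)))   # category A
print(categorize(opponent(5.0, 5.0)))   # neutral
print(categorize(opponent(1.0, 9.0)))   # category B
```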
Stimuli tend to cause muscular or glandular responses. By attending to stimuli or responses, animals can learn to inhibit muscular or glandular responses, so signals only affect brain {response internalization}.
Brain can sense simultaneous stimuli at different times {sensory onset asynchrony} (SOA).
Perceptions have relative intensities {intensity, sense physiology} at locations.
coding
Axon-hillock membrane potential, axon current, average nerve-impulse rate, or neurotransmitter release can represent intensity.
receptors
Mechanical strains, temperature changes, chemical bonding, cell-hair vibration, and photon absorption change receptor membrane-molecule configurations. Configuration rearrangement changes molecule potential energy. Molecule steady-state configurations have lowest potential energy. Receptors transduce molecule potential-energy change into neurotransmitter-packet release at synapses onto neuron dendrites and cell bodies. Neurotransmitters open or close membrane ion channels to change synaptic neuron-membrane electric potential.
neurons
Synaptic membrane potentials spread to neuron axon hillock, where they add. Every millisecond, if hillock-membrane depolarization exceeds threshold, hillock membrane sends nerve impulse down axon.
threshold
Previous activity and neurohormones change neuron thresholds, so neurons detect current relative intensity, not absolute intensity. Perceptual intensities can be transient or sustained.
Small stimuli, such as gentle touch, can trigger sense response {irritability, sense}.
Sense receptors {sensory transducer} convert kinetic or potential energy from mechanical-force touch, temperature, and hearing translations and vibrations, or electrical-force light, liquids, or gases into cell-membrane depolarizations, whose electrical effects pass to neurons.
Machine computation is for stepwise analysis. Brain computation is for synthesis over time. Unlike computer programs, sensations can cause ongoing excitation {sustained response} at same location. Sustained responses are like steady states, not equilibrium states or transient states. Sustained responses use invariants and transformations to reach steady state. Neural assemblies have evolved to develop sustained responses. Sustained responses can serve as symbol grounds.
Objects have shape, texture, color, spatial location, distance, surface orientation, and motion. Brain processes object information in separate brain regions at different times and different processing speeds. Perception neural activities associate {binding} all feature and object information at all times [Domany et al., 1994] [Lisman and Idiart, 1995] [Malsburg, 1981] [Malsburg, 1995] [Malsburg, 1999] [Milner, 1974] [Robertson, 2003] [Treisman, 1996] [Treisman and Schmidt, 1982] [Treisman, 1998] [Tsal, 1989] [Wojciulik and Kanwisher, 1998] [Wolfe and Cave, 1999]. Color, shape, depth, motion, and orientation unify into objects and events [Treisman, 2003]. Same-spatial-location features associate. Simultaneous features associate.
attention
Binding typically requires attention. Perhaps, attention enhances attended-object brain processing. Simultaneous attention to features associates them. With minimum attention, adjacent-object property can bind to half-attended object. With no attention, non-conscious information processing can have perceptual binding [Treisman and Gelade, 1980].
short-term memory
Binding requires short-term memory, which holds all object features simultaneously. Short-term memory processing has EEG gamma waves. Perhaps, reverberating brain activity causes gamma waves. However, short-term memory involves more than synchronous or phasic firing [Tallon-Baudry and Bertrand, 1999].
brain processes
Perhaps, binding uses neuron labels, gene patterns, development patterns, frequently repeated experiences, space location, or time synchronization [Malsburg, 1999]. Learned associations link similar features.
Mammal superior colliculus can integrate same-spatial-location multisensory information, but reptiles use only separate sense processes [O'Regan and Noë, 2001]. Strongly firing cortical and thalamic neurons link temporarily. Medial-temporal-lobe system, especially hippocampus, is for binding. Visual-cortex neuron-assembly synchronous firing can represent object images [Engel and Singer, 2001] [Engel et al., 1991] [Engel et al., 1999] [Gray, 1999] [Gray et al., 1989] [Kreiter and Singer, 1996] [Laurent, 1999] [Laurent et al., 2001] [MacLeod et al., 1998] [Malsburg, 1981] [Malsburg, 1999] [Shadlen and Movshon, 1999] [Singer, 1999] [Singer, 2000] [Stopfer et al., 1997] [Thiele and Stoner, 2003]. Perhaps, master maps or central information exchanges synchronize topographic maps.
From one stimulus source, brain processes different feature types in separate brain regions, at different times and processing speeds. How does brain associate object features {binding problem}|? Perhaps, brains use common signals for all processes.
Moving spot triggers different motion detectors. How does brain associate two stimulus sources with one moving object {correspondence problem, binding}? Perhaps, brain follows spot from one location to next unambiguously.
Turning one spot on and off can trigger same motion detector. How does brain associate detector activation at different times with one spot? Perhaps, brain assumes same location is same object.
From many stimulus sources, brain processes different objects' feature types in separate brain regions, at different times and processing speeds. How do brains associate object features to objects {parsing problem}|? Perhaps, brains use common signals for processes.
Perhaps, background field {perceptual field} links perceptual locations, synchronizes times, and associates features to objects and events. During development, space and time correlations among sense features and motor movements build perceptual field. First, neurons note other-neuron states and store feature correlations. Next, neuron assemblies note other-neuron-assembly states and store object and movement correlations. Then, larger neuron assemblies work together to store scenes and stories [Desimone and Duncan, 1995] [Flohr, 2000] [Freeman, 1975] [Harris et al., 2003] [Hebb, 1949] [Palm, 1982] [Palm, 1990] [Rowland and Blumenthal, 1974] [Szentagothai and Arbib, 1975] [Varela et al., 2001].
Reduced blood volume decreases blood pressure and stimulates left-atrium, aorta, and carotid low-pressure stretch receptors {baroreceptor}. Baroreceptors stimulate glossopharyngeal and vagus cranial nerves to hypothalamus, which causes pituitary-nerve terminals to secrete arginine vasopressin to constrict blood vessels to increase blood pressure.
Increased plasma concentration and higher osmolality stimulate hypothalamus receptors {osmoreceptor}. Hypothalamus causes pituitary-nerve terminals to secrete arginine vasopressin to constrict blood vessels to decrease kidney water loss.
Internal carotid-artery receptors {carotid body}| {carotid sinus} measure blood oxygen and carbon dioxide concentrations, and send signals to control breathing rate and breath-holding response.
Rectum sensors {distension receptor, rectum} measure distension and send signals to control discomfort feeling {defecation, sense}.
Using ampullae of Lorenzini or tuberous receptors, electric fish can detect electric-field-change information and electric waves {electroreception}|, and send along lateral-line nerve to brain. Rays, skates, sawfish, electric rays, sturgeons, lungfish, sharks, and ratfish or chimaera combine electroreceptor system with other sense modes.
Sharks, skates, electric rays, rays, lungfish, sawfish, sturgeon, and ratfish {chimaera} have skin pores that open into electrically charged gel tubes, which go to ampullae {ampullae of Lorenzini} (Stefano Lorenzini) [1678]. Ampullae have one sensor layer, with calcium ion inflow and potassium ion outflow, that sends to neurons that send along lateral-line nerve to brain.
Elephant-nose fish and other mud dwellers emit electric fields and have electric-field receptors {tuberous receptor} that detect electric-field disturbances caused by other-organism movements.
People have inner-ear cochleas {hearing, sense} {audition, sense}, with sense receptors for mechanical compression-and-rarefaction longitudinal vibrations {sound, hearing}. Sounds have loudness intensity and tone frequency. Hearing also analyzes sound-wave phases to locate sound space directions and distances. Hearing qualities include whisper, speech, music, noise, and scream. Hearing can perceive who is speaking, what their emotional state is, and whether they are lying.
physical properties
Hearable events are mechanical compression-and-rarefaction longitudinal vibrations in air and body tissues, with frequencies 20 Hz to 20,000 Hz. Sound-wave frequencies have intensities, amplitude, and phase.
Two frequencies can have harmonic ratios, with small integers in numerator and denominator.
Sound waves ultimately vibrate cochlea hair cells.
neurons
At low frequencies, sound and neuron activity have same frequency. At high frequencies, nerve-fiber activity distribution represents pitch. Neuron firing rate and number represent sound intensity.
properties: aging
Aging can shift tone sequence.
properties: analytic sense
Tones are independent and do not mix. People can simultaneously hear different frequencies at different intensities.
properties: beats
Sound waves can superpose to create lower-frequency beats.
properties: habituation
Hearing does not habituate quickly.
properties: hearing yourself speak
Bone attenuates higher frequencies, so people hear their own speech as more mellow than others do.
properties: individual differences
Sound has same physical properties for everyone, and hearing processes are similar, so hearing perceptions are similar. All people hear similar tone spectrum, with same tones and tone sequence.
properties: memory
Melodies ending in harmonic cadence are easier to remember than those that end otherwise.
properties: opposites
Tones have no opposites.
properties: precision
People easily distinguish tones and half tones and can distinguish quarter tones after learning. Adjacent-quartertone frequencies differ by several percent.
properties: tempo
People can perceive sound presentation speed: slow, medium, or fast.
properties: time
Hearing is in real time, with a half-second delay.
properties: tone relations
Tones have unique tone relations. A, B, C, D, E, F, and G tone-frequency ratios must be the same for all octaves. Tones, such as middle A, must be two times the frequency of same tone, such as lower A, in next-lower octave. Without constant in-octave and across-octave frequency ratios, tone A becomes tone B or G in other octaves. For normal hearing, tones relate in only one consistent and complete way. Tones cannot substitute and can never be other tones.
properties: tone similarities
Similar tones have similar frequencies or are octaves apart.
properties: waves
Tones directly relate to physical sound-wave frequencies and intensities. Sound waves have emissions, absorptions, vibrations, reflections, and transmissions.
properties: warm and cool
Warm tones have longer and lower attack and decay, longer tones, and more harmonics. Cool tones have shorter and higher attack and decay, shorter tones, and fewer harmonics.
evolution
Hearing evolved from fish lateral line, which has hair cells. Hearing uses one basic receptor type. Reptile hair cells have oscillating potentials from interacting voltage-gated-calcium and calcium-gated-potassium channels, so hair vibrations match sound frequencies. Mammal hair cells vibrate at sound frequencies and have sound-frequency oscillating potentials, but they add force to increase vibration amplitude. Perhaps, the first hearing was for major water vibrations.
development
By 126 days (four months), fetus has first high-level hearing.
Newborns react to loud sounds. If newborns are alert, high sound frequencies cause freezing, but low ones soothe crying and increase motor activity. Rhythmic sounds quiet newborns.
animals
Animals can detect three pitch-change patterns: up, down, and up then down. Bats can emit and hear ultrasound. Some moths can hear ultrasound, to sense bats [Wilson, 1971] [Wilson, 1975] [Wilson, 1998]. Insects can use hearing to locate mates [Wilson, 1971] [Wilson, 1975] [Wilson, 1998].
relations to other senses
Hearing, temperature, and touch involve mechanical energy. Touch can feel vibrations below 20 Hz. Hearing can detect vibrations above 20 Hz. Sound vibrates eardrum and other body surfaces but is not felt as touch.
Vision seems unrelated to hearing, but both detect wave frequency and intensity. Hearing detects longitudinal mechanical waves, and vision detects transverse electric waves. Hearing has ten-octave frequency range, and vision has one-octave frequency range. Hearing has higher energy level than vision. Hearing is analytic, but vision is synthetic. Hearing can have interference from more than one source, and vision can have interference from only one source. Hearing uses phase differences, but vision does not. Hearing is silent from most spatial locations, but vision displays information from all scene locations. Hearing has sound attack and decay, but vision is so fast that it has no temporal properties.
Smell and taste seem unrelated to hearing.
Some people can name heard tones {absolute pitch}|, and this correlates with learning note names when young.
People can listen to one speaker when several speakers are talking {cocktail party effect}. Hearing attends to one message stream by localizing sounds using binaural hearing and sound quality and by inhibiting other message streams.
Seeing lip movement aids auditory perception {McGurk effect}. In humans, sight dominates sound.
Things can be about ears {otic}.
Equal-temperament tones can form mathematical groups {sound dodeconion}. The twelve tones and half-tones within an octave have equally spaced frequency ratios (equal steps on a logarithmic frequency scale). A regular 12-vertex dodecagram has points separated by 30 degrees and can represent the twelve tones, and rotations by 30-degree multiples result in the same geometric figure.
frequency ratios
Tone pairs have frequency ratios. Octave from middle-C to high-C has tone frequency ratio 2/1. Middle tone, such as middle-G, makes reciprocal tone ratios with the octave endpoints, such as middle-G/middle-C, 3/2, and high-C/middle-G, 4/3.
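A minimal sketch of twelve-tone equal temperament: adjacent semitones share the frequency ratio 2^(1/12), so twelve steps make exactly one octave (ratio 2/1), and the equal-tempered fifth and fourth approximate the 3/2 and 4/3 ratios above. Rotating the twelve pitch classes by any number of semitones maps the set onto itself, like rotating the dodecagram.

```python
SEMITONE = 2 ** (1 / 12)   # equal-temperament ratio between adjacent tones

# Twelve semitone steps give one octave exactly.
print(round(SEMITONE ** 12, 6))          # 2.0

# Equal-tempered fifth (7 semitones) and fourth (5 semitones)
# approximate the just ratios 3/2 and 4/3.
print(round(SEMITONE ** 7, 4), 3 / 2)    # 1.4983 vs 1.5
print(round(SEMITONE ** 5, 4), 4 / 3)    # 1.3348 vs 1.333...

# Rotation by any whole number of semitones permutes the twelve
# pitch classes onto themselves (the cyclic group of order 12).
tones = list(range(12))
rotated = [(t + 7) % 12 for t in tones]  # transpose up a fifth
print(sorted(rotated) == tones)          # True
```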
A tube {Eustachian tube}| goes from middle ear to pharynx, to equalize pressure inside and outside eardrums. Pharynx valves close tube when talking but open tube when swallowing or yawning or when outside air pressure changes.
Area {belt area} adjacent to area-A1 primary auditory cortex can receive from area A1 and respond to complex sound features.
Area {parabelt area} laterally adjacent to belt area can receive from belt area and respond to complex sounds and multisensory features.
Cortical frequency-sensitive auditory neurons align from low to high frequency {tonotopic organization}.
Hearing neurons {auditory neuron} receive input from 10 to 30 hair-cell receptors.
frequency
Auditory neurons respond to one frequency, within several percent. Frequencies are between 20 Hz and 20,000 Hz.
intensity
Auditory neurons respond to low, medium, or high intensity. Low-spontaneous-firing-rate neurons {low-spontaneous fiber} are for high-intensity sound and have narrow-band frequency tuning. With no stimulation, their firing rate is less than 10/s. Firing rate rises with intensity {rate-intensity function, neuron}.
High-spontaneous-firing-rate neurons {high-spontaneous fiber} are for low-intensity sound and have broad-band frequency tuning. With no stimulation, their firing rate is greater than 30/s. Firing rate rises with intensity to maximum at low intensity.
Mid-spontaneous fibers are for intermediate-intensity sound. With no stimulation, firing rate is greater than 10/s and less than 30/s.
Free intracellular calcium ions modulate cricket hearing interneurons {omega interneuron} [Huber and Thorson, 1985] [Sobel and Tank, 1994].
Human hearing organs {ear} have outer ear to catch sounds, middle ear to concentrate sounds, and inner ear to analyze sound frequency and intensity.
Pinna and ear canal {outer ear}| gather and focus sound on eardrum.
Only mammal ears have a cartilage flap {pinna}| {pinnae}, to catch sounds.
A 2.5-centimeter tube {auditory canal}| {ear canal}, from outside pinna to inside tympanic membrane, protects tympanic membrane from objects and loud sounds.
Auditory canal has wax {earwax}|. Perhaps, earwax keeps ear canal moist and/or sticks to insects.
Thin connective-tissue membrane {tympanic membrane} {eardrum}| is across ear-canal inner end. Tympanic membrane is 18 times larger than oval window.
Eardrum connects to air cavity {middle ear}|.
Middle ear has three small bones {ossicles}|: hammer, anvil, and stirrup. Two middle ear bones evolved from reptile lower jawbones [Ramachandran, 2004].
Eardrum connects to middle-ear bone {hammer bone}| {malleus}, which connects to anvil.
Hammer bone connects to middle-ear bone {anvil bone}| {incus}, which connects to stirrup. Anvil bone is smaller than hammer bone to concentrate sound pressure.
Anvil bone connects to middle-ear bone {stirrup bone}| {stapes}, which connects to oval window. Stirrup bone is smaller than anvil bone to concentrate sound pressure.
Muscles {tensor tympani muscle} attached to malleus can tense to dampen loud vibration.
Muscles {stapedius muscle} attached to stapes can tense to dampen loud vibration.
A coiled trumpet-shaped fluid-filled organ {inner ear} {cochlea}|, 4 mm diameter and 35 mm long, is in temporal bone.
Inner ear, nearer auditory nerve, has one straight row of 3500 inner hair cells {hair cell, cochlea} and has three S-curved rows with 3500 outer hair cells each (10,500 total). Outer-hair-cell cilia poke through tectorial membrane. Each cell's hairs have long, medium, and short parts, linked by fine filaments from the short hair's tip to the medium hair's middle and from the medium hair's tip to the long hair's middle. Cochlea hair-cell receptors have microscopic fibers and microscopic cross-fibers that cause resonance between frequencies.
Oval-window movement makes pressure waves, down vestibular canal, which cause middle-canal vertical movement, which slides tectorial-membrane gel horizontally over upright cilia. If pushed one way, hair-cell-membrane potential increases from resting potential. If pushed other way, potential decreases. Inner hair cells send to 10 to 30 auditory neurons.
Outer hair cells can receive brain signals to extend cilia, to stiffen cochlear partition and dampen sound. This reduces signal-to-noise ratio, lowers required input intensity to sharpen tuning, or sends secondary signals to inner hair cells.
Stapes connects to membrane across opening {oval window, hearing}| at cochlea beginning. Oval window is 18 times smaller than tympanic membrane, to concentrate sound pressure.
At base, tympanic canal has soft tissue {round window} that absorbs high pressure.
Cochlea outside has a canal {tympanic canal} {scala tympani}. Tympanic membrane is over tympanic-canal end. Round window is over tympanic-canal base.
Cochlea outside has a canal {vestibular canal} {scala vestibuli}.
Tympanic and vestibular canals join at cochlea point {helicotrema}.
Cochlea middle has a canal {middle canal} {scala media}.
Cochlea inside has a canal {cochlear canal}.
Membrane {Reissner's membrane} separates middle canal and vestibular canal.
In cochlear canal, a coil {basilar membrane} also separates middle canal and tympanic canal. Close to oval window {base, basilar membrane}, basilar membrane is stiff and narrow. At other end {apex, basilar membrane}, basilar membrane is wider and less stiff.
Basilar-membrane structures {organ of Corti} have 30,000 hair-cell receptors, with stereocilia and fibers. Organ-of-Corti base detects high frequencies, and organ-of-Corti apex detects low frequencies (place code).
Gel membrane {tectorial membrane} attaches to end of, and floats in, middle canal and touches outer hair cells.
Basilar membrane, tectorial membrane, and organ of Corti together {cochlear partition} detect sounds. Cochlear partition is in middle canal.
Sounds affect many hair-cell receptors {hearing, physiology}. Hearing finds intensities at frequencies and frequency bands (sound spectrum).
properties: fundamental missing
If people hear harmonics without the fundamental frequency, they hear the fundamental frequency, probably by temporal coding. Amplifying a chord tone causes hearing both tone and its fundamental tone, though fundamental frequency has zero intensity.
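A minimal sketch of the missing fundamental: summing harmonics at 2f, 3f, and 4f, with zero energy at f itself, still yields a waveform that repeats every 1/f seconds, a periodicity that temporal coding could read out as pitch f. The sample rate and fundamental are illustrative values chosen for clean numbers.

```python
import numpy as np

fs = 40000                        # sample rate (Hz), illustrative
f0 = 200.0                        # "missing" fundamental (Hz)
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal

# Harmonics 2, 3, and 4 only; zero energy at f0 itself.
signal = sum(np.sin(2 * np.pi * n * f0 * t) for n in (2, 3, 4))

# The waveform still repeats every 1/f0 seconds, so its autocorrelation
# peaks at that period; a temporal code could read the pitch from it.
ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
lag = np.argmax(ac[100:]) + 100   # skip the zero-lag peak
print(fs / lag)                   # 200.0, the missing fundamental
```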
properties: octave
Animals conditioned to respond to pitch respond almost equally to its above and below octaves.
properties: phase differences
People cannot hear phase differences, but hearing can use phase differences to locate sounds.
properties: rhythm
Hearing can recognize rhythms and rhythmic groups.
properties: timing
People perceive two sounds less than three milliseconds apart as the same sound.
processes: contrast
Hearing uses lateral inhibition to enhance contrast to distinguish sounds.
processes: damping
Later tones constrain basilar membrane. Lower-frequency later tones constrain basilar membrane more. If later tone is more than 1000 Hz lower than earlier tone, to hear first tone requires high loudness. If later tone is more than 300 Hz higher than earlier tone, to hear first tone requires moderate loudness.
processes: filtering
Hearing integrates over many neurons to filter frequencies to find their individual intensities. Hearing performs limited-resolution Fourier analysis on sound frequencies [Friedmann, 1979].
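A minimal sketch of Fourier analysis on a two-tone mixture, assuming ideal digital sampling; a real cochlea filters mechanically with limited resolution rather than computing an FFT, so this only illustrates how frequency analysis separates component intensities.

```python
import numpy as np

fs = 8000                               # samples per second
t = np.arange(0, 1.0, 1 / fs)           # one second of sound

# Mixture of a 440 Hz tone and a quieter 1000 Hz tone.
sound = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(sound)) / (len(sound) / 2)   # amplitude per frequency
freqs = np.fft.rfftfreq(len(sound), 1 / fs)

# The analysis recovers each component's frequency and intensity separately.
for f in (440, 1000):
    idx = np.argmin(np.abs(freqs - f))
    print(f, round(spectrum[idx], 2))    # 440 1.0, then 1000 0.3
```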
processes: important sounds
Important sounds use more neurons and synapses.
processes: memory
Previous sound experiences help distinguish current sound patterns.
brain
Because brain is viscous, sound cannot affect brain tissue.
For short sounds in noisy backgrounds, hearing can complete missing sounds or sharpen noisy sounds {continuity effect} {perceptual restoration effect}. Hearing does not fill in short silences with sounds, but sharpens temporal boundaries. Hearing does not know when it fills in.
Sound radiates in all directions from sources and reflects from various surfaces back to ears {echo perception}. Hearing can distinguish echoes from their source sounds. Hearing uses binaural signals to suppress echoes.
Body and head, including pinnae and ear canals, transmit and absorb different-frequency, different-elevation, and different-azimuth sounds differently {head-related transfer function}.
People can perceive sound frequency {pitch, sound}|.
frequency
People can hear ten frequency octaves, from 20 Hz to 20,000 Hz. Lowest frequencies, 20 Hz to 30 Hz, are also highest vibrations detectable by touch.
Shortest hair-cell hair lengths detect highest frequencies. High-frequency tones vibrate the stiff narrow basilar-membrane base, near the oval window. Above 3000 Hz, higher hearing neurons respond to frequency, tone pattern, or intensity range.
Low-frequency tones activate all hair cells, with greater activity at the wide apex, far from the oval window, where hair cells have longer hairs.
sensitivity
People are most sensitive at frequency 1800 Hz.
neuron firing
Maximum neuron firing rate is 800 Hz. After sound frequency and firing rate reach 800 Hz, firing rate drops abruptly, and more than one neuron carries sound-frequency information. After sound frequency and firing rate reach 1600 Hz, firing rate drops abruptly.
Auditory neurons have frequency {characteristic frequency} (CF) at which they are most sensitive. The characteristic frequency is at the maximum of the frequency-intensity spectrum (threshold tuning curve). For CF = 500 Hz at 0 dB, 1000 Hz is at 80 dB, and 200 Hz is at 50 dB. For CF = 1100 Hz at 5 dB, 1500 Hz is at 80 dB, and 500 Hz is at 50 dB. For CF = 2000 Hz at 5 dB, 3500 Hz is at 80 dB, and 500 Hz is at 80 dB. For CF = 3000 Hz at 5 dB, 3500 Hz is at 80 dB, 700 Hz to 2000 Hz is at 50 dB, and 500 Hz is at 80 dB. For CF = 8000 Hz at 5 dB, 9000 Hz is at 80 dB, 1000 Hz to 3000 Hz is at 60 dB, and 500 Hz is at 80 dB. For CF = 10000 Hz at 5 dB, 10500 Hz is at 80 dB, 5000 Hz is at 80 dB, 1000 Hz to 2000 Hz is at 60 dB, and 500 Hz is at 80 dB.
Auditory-nerve channels carry frequency-range {critical band} information.
For 100-Hz to 6000-Hz sound stimuli, basilar membrane has electric pulses, with same frequency and intensity, caused by potentials from all hair cells, that do not fatigue.
For 20-Hz to 900-Hz sound stimuli, auditory-neuron axons have electric pulses {microphonic electric pulse}, measured in cochlear nerve, with same frequency and intensity [Saul and Davis, 1932]. For 900-Hz to 1800-Hz sound stimuli, auditory-neuron axons have electric pulses with same frequency and one-half intensity. For 1800-Hz to 2700-Hz sound stimuli, auditory-neuron axons have electric pulses with same frequency and one-third intensity. For above-2700-Hz sound stimuli, auditory-neuron axons have electric pulses that do not correlate with frequency and intensity. Perhaps, auditory nerve uses summed potentials of microphonic-electric-pulse envelopes.
For below-500-Hz sound stimuli, auditory-neuron-axon signals have same frequency and phase {phase locking, hearing}.
Similar frequencies group together to make increasing loudness {recruitment, hearing}.
Tones that share one octave have perceivable sound features {tone chroma}.
Tone frequency determines low or high pitch {tone height}.
Noise or tones within two octaves of stimulus frequency can interfere with stimulus perception {critical band masking}. Pure tones mask high frequencies more than low frequencies, because higher frequencies activate smaller basilar-membrane regions. Complex tones mask low frequencies more than high frequencies, because lower frequencies have more energy than higher frequencies [Sobel and Tank, 1994].
Previous-tone {preceding tone} intensity-frequency spectrum affects neuron current-tone response.
Different later tone can decrease auditory-neuron firing rate {two-tone suppression}.
At each audible frequency, people have an intensity threshold {audibility curve}.
At each audible frequency, specific sound-pressure levels (SPL) cause people to hear equal loudness {equal loudness curve}.
At constant amplitude, auditory-neuron firing rate depends on frequency {isointensity curve}. For amplitude 20 dB at characteristic frequency, firing rate is 180 per second. For amplitude 20 dB at 500 Hz below or 500 Hz above characteristic frequency, firing rate is 50 per second. For amplitude 20 dB at 1300 Hz to 1400 Hz above characteristic frequency, auditory neurons have spontaneous firing rate.
At each frequency, people have a sound-intensity threshold {threshold tuning curve}.
Same-intensity-and-pitch sounds can have different harmonics {timbre, sound}|. Rapid timbre changes are difficult to perceive.
Clear tones {clarity, tone} have narrow frequency band. Unclear tones have wide frequency band.
Full tones {fullness, tone} have many frequency resonances. Shallow tones have few frequency resonances.
Shrill tones {shrillness} have higher frequencies. Dull tones have lower frequencies.
Sounds with many high-frequency components seem sharp or strident {stridency}. Tones with mostly low-frequency components seem dull or mellow {mellowness}.
People can hear sound energies as small as random air-molecule motions {hearing, intensity} {sound intensity}. Because oval window is smaller than eardrum, sound pressure increases in middle ear. Middle-ear bones increase sound intensity by acting as levers that convert distance into force.
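A rough worked calculation of the middle ear's pressure gain, assuming the 18:1 eardrum-to-oval-window area ratio given earlier and a commonly cited ossicular lever advantage of about 1.3; both numbers are approximations, so the result is only an order-of-magnitude sketch.

```python
import math

area_ratio = 18.0      # eardrum area / oval-window area (stated earlier)
lever_ratio = 1.3      # approximate ossicular lever advantage (assumed value)

pressure_gain = area_ratio * lever_ratio   # same force concentrated on a smaller area
gain_db = 20 * math.log10(pressure_gain)   # expressed in decibels

print(round(pressure_gain, 1))   # ~23.4-fold pressure increase
print(round(gain_db, 1))         # ~27.4 dB
```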
distortion
High sound intensities can strain materials past their elastic limit, so intensity and/or frequency change.
frequency
For same stimulus-input energy, low-frequency tones sound louder, and high-frequency tones sound quieter. Smaller hair-cell hairs have faster vibrations and smaller amplitudes.
maximum sound
Maximum sound occurs when physical ear structures have inelastic strain, which stretches surface tissues past the point from which they can completely return.
pain
Maximum sound causes pain.
rate
For amplitude 40 dB to 80 dB at frequency between 2000 Hz below and 50 Hz above characteristic frequency, maximum firing rate is 280 per second {rate saturation, hearing}.
temporal integration
If sound has constant intensity for less than 100 ms, perceived loudness is reduced {temporal integration, hearing}. If sound has constant intensity for 100 ms to 300 ms, perceived loudness increases with duration. If sound has constant intensity for longer than 300 ms, perceived loudness is constant.
At loud-sound onset, stapedius and tensor tympani muscles contract {acoustic reflex}, to dampen stapes and eardrum vibration.
Tones can rise quickly or slowly from background noise level to maximum intensity {attack, hearing}| {onset, hearing}. Fast onset sounds aggressive. Slow onset sounds peaceful.
Tones can fall slowly or rapidly from maximum to background noise level {decay, hearing} {offset, hearing}.
Hearing perceives sound-source locations {source location} {sound location} in space. Most space locations are silent. One space location can have several sound sources. Hearing determines sound location separately and independently of perceiving tones.
azimuth
Hearing can calculate angle to right or left, from straight-ahead to straight-behind, in horizontal plane.
elevation
Hearing can calculate height and angle above horizontal plane. People perceive lower frequencies as slightly lower than actual elevation. People perceive higher frequencies as slightly higher than actual elevation.
frequency and distance
Sound sources farther than 1000 meters have fewer high frequencies, because of air damping.
sound reflection and distance
Sound energy comes directly from sources and reflects from other surfaces. Close sounds have more direct energy than reflected energy. Far sounds have more reflected energy than direct energy. Reflected sounds have fewer high frequencies than direct sounds, because longer distances cause more air damping.
Hearing can separate complex sounds from one source into independent continuous sound streams {auditory stream segregation}.
Sound grouping has same Gestalt laws as visual grouping.
If one ear hears melody with large ascending and descending tone jumps, and other ear hears another melody with large ascending and descending tone jumps, people do not hear left-ear melody and right-ear melody separately but hear two new melodies, different from either original, grouped by alternating-tone pitch proximity.
People separate sounds from multiple sources into independent continuous sound streams {auditory scene analysis} {source segregation}. Hearing separates sounds from different locations into independent continuous sound streams {spatial separation, hearing}.
Having two ears {binauralism} allows calculating time and amplitude differences between left-ear and right-ear sound streams from same space location.
Hearing can reject unwanted messages {focusing, hearing}, using binauralism to localize sounds.
The same sound reaches right and left ear at different intensity levels {interaural level difference} (ILD). People can detect level differences as small as 1 dB. Intensity difference reflects stimulus distance, approaching or receding sounds, and body sound damping. Slight head movements are enough to eliminate direction ambiguity. Intensity differences due only to sound distance, or to approaching or receding sounds, are useful up to one or two meters. Beyond two meters, such differences are too small to detect.
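As an illustration, a minimal Python sketch computing an interaural level difference from two sound-pressure readings; the pressure values are hypothetical, and the roughly 1-dB detection limit follows the text:
import math

def interaural_level_difference(p_left, p_right):
    # Level difference in dB between left-ear and right-ear sound pressures (linear units).
    return 20.0 * math.log10(p_left / p_right)

ild = interaural_level_difference(p_left=0.012, p_right=0.010)  # hypothetical pressures
print(round(ild, 2))  # about 1.58 dB, above the roughly 1-dB detection limit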
damping
Pinnae and head bones absorb sounds with frequencies higher than 1500 Hz, according to their frequency-related damping function. Pinnae and head-bone damping differs on right and left, depending on source location, and hearing uses the intensity differences to determine space directions and distances beyond one or two meters.
brain
Lateral superior olive detects intensity-level differences between left-right ears and right-left ears, to make opponent systems. To find distance, two receptor outputs go to two different neurons, which both send to difference-finding neuron. Opposite-ear output goes to trapezoid-body medial nucleus, which lies beside pons lateral superior olive and inhibits same-ear lateral-superior-olive output. Interaural time difference and interaural level difference work together.
The same sound reaches right and left ear at different times {interaural time difference, hearing} (ITD), because distances from source location to ear differ, and ears have distance between them. Hearing can detect several microseconds of time difference. Slight head movements are enough to eliminate direction ambiguity. Interaural time difference uses frequencies lower than 1500 Hz, because they have no body damping.
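As an illustration, a minimal Python sketch of interaural time difference versus source azimuth, using the common spherical-head (Woodworth) approximation, which is an assumption not stated in the text:
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    # Spherical-head approximation: ITD = (r / c) * (sin(theta) + theta), theta in radians.
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

for azimuth in (0, 15, 45, 90):  # straight ahead to directly sideways
    print(azimuth, round(interaural_time_difference(azimuth) * 1e6), "microseconds")
# 0 degrees gives 0 microseconds; 90 degrees gives about 656 microseconds.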
Medial superior olive detects time differences between left-right ears and right-left ears, to make opponent systems. To find distances, two receptor outputs go to two different neurons, which both send to difference-finding neuron. Interaural time difference and interaural level difference work together.
In a cone {cone of confusion} {confusion cone} from head center into space, sounds have same intensity and timing, because ear timing differences (interaural time difference) and intensity differences (interaural level difference) are zero.
Electronic instruments {audiometer}| can test hearing.
Amplified auditory-nerve signals played through speakers sound same as stimulus sounds {microphone effect}.
People can study subjective sense qualities or psychological changes evoked by sound stimuli {psychoacoustics}.
Physical sound attributes directly relate to music attributes {music, hearing} {hearing, music}. Physical-sound frequency relates to music pitch. Music is mostly about frequency combinations. However, above 5000 Hz, musical pitch is lost. Physical-sound intensity relates to music loudness. Physical-sound duration relates to music rhythm. Physical-sound spectral complexity relates to music timbre.
However, frequency affects loudness. Intensity affects pitch. Tone frequency separation affects time-interval perception. Harmonic fluctuations, pitch changes, vibrato, and non-pitched-instrument starting noises {transient, sound} affect timbre. Timbre affects pitch.
emotion: chords
Chords typically convey similar feelings to people. Minor seventh sounds mournful. Major seventh conveys desire. Minor second conveys anguish. Humans experience tension in dissonance and repose in consonance.
emotion: pitch change
Music emotions mostly depend on relative pitch changes (not rhythm, timbre, or harmony).
emotion: key
Music keys have characteristic emotions. Composers typically repeat same keys and timbre, and composers have typical moods.
song: melody
Note sequences can rise, fall, or stay same. People can recognize melodies from several notes.
song: musical phrase
People perceive music phrase by phrase, because phrases repeat often and because phrases take one breath. Children complete half-finished musical phrases using tones, rhythm, and harmony.
brain
No brain region is only for music. Music uses cognitive and language regions.
Musical pitch makes musical notes {tone, hearing}.
octave
Tones can be double or half other-tone frequencies. Octaves go from a note to similar higher note, such as middle-C at 256 Hz to high-C at 512 Hz. Hearing covers ten octaves: 20 Hz, 40 Hz, 80 Hz, 160 Hz, 320 Hz, 640 Hz, 1280 Hz, 2560 Hz, 5120 Hz, 10240 Hz, and 20480 Hz.
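As a quick check of the ten-octave range, a short Python sketch doubling from 20 Hz:
boundaries = [20 * 2 ** n for n in range(11)]  # each octave doubles the frequency
print(boundaries)            # [20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 10240, 20480]
print(len(boundaries) - 1)   # 10 octaves between the 11 boundary frequencies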
octave tones
Within one octave are 7 whole tones, 7 + 5 = 12 halftones, and 24 quartertones.
overtones
Tones at two, three, four, and so on, times fundamental frequency are fundamental-frequency overtones.
sharpness or flatness
Fully sharp tone has frequency one halftone higher than tone. Slightly sharp tone has frequency slightly higher than tone. Fully flat tone has frequency one halftone lower than tone. Slightly flat tone has frequency slightly lower than tone.
musical scales
Musical scales have tone-frequency ratios. Using ratios cancels units to make relative values that do not change when units change.
equal temperament scale
Pianos have musical tones separated by equal ratios. Octave has twelve equal-temperament halftones, with ratios from 2^(0/12) to 2^(12/12) of fundamental frequency. Frequency ratio of halftone to next-lower halftone, such as C# to C, is 2^(1/12) = 2^.08 = 1.06. Starting at middle-C, ratios of tones to middle-C are 2^0 = 1 for middle-C, 2^.08 = 1.06 for C#, 2^.17 = 1.13 for D, 2^.25 = 1.19 for D#, 2^.33 = 1.26 for E, 2^.42 = 1.34 for F, 2^.50 = 1.41 for F#, 2^.58 = 1.49 for G, 2^.67 = 1.59 for G#, 2^.75 = 1.68 for A, 2^.83 = 1.78 for A#, 2^.92 = 1.89 for B, and 2^1 = 2 for high-C. See Figure 1. F# is middle tone.
equal-temperament scale: frequencies
Using equal-temperament tuning and taking middle-C as 256 Hz, D has frequency 289 Hz. E has frequency 323 Hz. F has frequency 343 Hz. G has frequency 384 Hz. A has frequency 430 Hz. B has frequency 484 Hz. High-C has frequency 512 Hz. Low-C has frequency 128 Hz. Low-low-C has frequency 64 Hz. Lowest-C has frequency 32 Hz. High-high-C has frequency 1024 Hz. Higher Cs have frequencies 2048 Hz, 4096 Hz, 8192 Hz, and 16,384 Hz. From 32 Hz to 16,384 Hz covers nine octaves.
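As an illustration, a short Python sketch computing equal-temperament ratios and frequencies from middle-C at 256 Hz; the figures listed above are rounded, so exact values differ slightly (for example, D is 287.4 Hz rather than 289 Hz):
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B", "high-C"]

def equal_temperament(middle_c_hz=256.0):
    # One octave of equal temperament: halftone n has ratio 2 ** (n / 12) to middle-C.
    return [(note, 2 ** (n / 12), middle_c_hz * 2 ** (n / 12)) for n, note in enumerate(NOTES)]

for note, ratio, freq in equal_temperament():
    print(f"{note:7s} ratio {ratio:.3f}  {freq:7.1f} Hz")  # e.g., G: 1.498, 383.6 Hz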
tone-ratio scale
Early instruments used scales with tones separated by small-integer frequency ratios. Adjacent tones did not all have the same frequency ratio, unlike equal temperament.
tone-ratio scale: all possible small-integer ratios
In one octave, the 45 possible frequency ratios with denominator less than 13 are: 3/2; 4/3, 5/3; 5/4, 7/4; 6/5, 7/5, 8/5, 9/5; 7/6, 11/6; 8/7, 9/7, 10/7, 11/7, 12/7, 13/7; 9/8, 11/8, 13/8, 15/8; 10/9, 11/9, 13/9, 14/9, 16/9, 17/9; 11/10, 13/10, 17/10, 19/10; 12/11, 13/11, 14/11, 15/11, 16/11, 17/11, 18/11, 19/11, 20/11, 21/11; 13/12, 17/12, 19/12, and 23/12.
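The 45 ratios above are exactly the fractions in lowest terms between 1 and 2 with denominator 2 through 12; a short Python sketch enumerating them:
import math
from fractions import Fraction

def octave_ratios(max_denominator=12):
    # Reduced fractions p/q with 1 < p/q < 2 and denominator from 2 to max_denominator.
    return sorted(Fraction(p, q)
                  for q in range(2, max_denominator + 1)
                  for p in range(q + 1, 2 * q)
                  if math.gcd(p, q) == 1)

ratios = octave_ratios()
print(len(ratios))            # 45
print(ratios[0], ratios[-1])  # 13/12 (smallest) and 23/12 (largest)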
tone-ratio scale: whole tones
In octaves, the seven whole tones are do, re, mi, fa, so, la, and ti, for C, D, E, F, G, A, and B, with do repeating at the octave. The seven tones are not evenly spaced by frequency ratio. Frequency ratios are D/C = 9/8, E/C = 5/4, F/C = 4/3, and G/C = 3/2. For example, C = 240 Hz, D = 270 Hz, E = 300 Hz, F = 320 Hz, and G = 360 Hz. C, D, E, F, and G, and G, A, B, C, and D, have same tone progression. Frequency ratios are A/G = 9/8, B/G = 5/4, C/G = 4/3, and D/G = 3/2. For example, G = 400 Hz, A = 450 Hz, B = 500 Hz, C = 533 Hz, and D = 600 Hz.
tone-ratio scale: halftones
Using C as fundamental, the twelve halftones have the following ratios, in increasing order. 1:1 = C. 17:16 = C#. 9:8 = D. 6:5 = D#. 5:4 = E. 4:3 = F. 7:5 = F#. 3:2 = G. 8:5 = G#. 5:3 = A. 7:4 or 16:9 or 9:5 = A#. 11:6 or 15:8 = B. 2:1 = C.
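As an illustration, a short Python sketch comparing these small-integer halftone ratios with equal temperament, in cents (one equal-temperament halftone = 100 cents); where the text lists alternatives, one ratio is chosen:
import math

JUST_RATIOS = {"C": 1/1, "C#": 17/16, "D": 9/8, "D#": 6/5, "E": 5/4, "F": 4/3,
               "F#": 7/5, "G": 3/2, "G#": 8/5, "A": 5/3, "A#": 16/9, "B": 15/8}

for n, (note, ratio) in enumerate(JUST_RATIOS.items()):
    cents = 1200 * math.log2(ratio)   # position of the ratio within the octave, in cents
    deviation = cents - 100 * n       # difference from the equal-temperament halftone
    print(f"{note:3s} {cents:7.1f} cents ({deviation:+6.1f} from equal temperament)")
# For example, G (3/2) sits about +2 cents above equal temperament, E (5/4) about -14 cents below.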
tone-ratio scale: quartertones
The 24 quartertones have the following ratios, in increasing order. 1:1 = 1.000. 33:32 = 1.031. 17:16 = 1.063, or 16/15 = 1.067. 13:12 = 1.083, 11:10 = 1.100, or 10/9 = 1.111. 9:8 = 1.125. 8:7 = 1.143, or 7:6 = 1.167. 6:5 = 1.200. 17:14 = 1.214, or 11/9 = 1.222. 5:4 = 1.250. 9:7 = 1.286. 4:3 = 1.333. 11:8 = 1.375. 7:5 = 1.400. 17:12 = 1.417, 10:7 = 1.429, or 13/9 = 1.444. 3:2 = 1.500. 14/9 = 1.556, 11:7 = 1.571, or 19:12 = 1.583. 8:5 = 1.600. 13:8 = 1.625. 5:3 = 1.667. 12:7 = 1.714, or 7:4 = 1.75. 16:9 = 1.778, or 9:5 = 1.800. 11:6 = 1.833, or 13:7 = 1.857. 15:8 = 1.875. 23:12 = 1.917. 2:1 = 2.000. Ratios within small percentage are not distinguishable.
tone intervals
Two tones have a number of tones between them. First interval has one tone, such as C. Minor second interval has two tones, such as C and D-flat, and covers one halftone. Major second interval has two tones, such as C and D, and covers two halftones. Minor third interval has three tones, such as C, D, and E-flat, and covers three halftones. Major third interval has three tones, such as C, D, and E, and covers four halftones. Minor fourth interval has four tones, such as C, D, E, and F, and covers five halftones. Major fourth interval has four tones, such as C, D, E, and F#, and covers six halftones. Minor fifth interval has five tones, such as C, D, E, F, and G-flat, and covers six halftones. Major fifth interval has five tones, such as C, D, E, F, and G, and covers seven halftones. Minor sixth interval has six tones, such as C, D, E, F, G, and A-flat, and covers eight halftones. Major sixth interval has six tones, such as C, D, E, F, G, and A, and covers nine halftones. Minor seventh interval has seven tones, such as C, D, E, F, G, A, and B-flat, and covers ten halftones. Major seventh interval has seven tones, such as C, D, E, F, G, A, and B, and covers eleven halftones. Eighth interval is octave, has eight tones, such as C, D, E, F, G, A, B, and high-C, and covers twelve halftones.
tone intervals: pairs
Tones have two related ratios. For example, D and middle-C, major second, have ratio 289/256 = 9/8, and D and high-C, minor seventh, have ratio 9/16, so high-C/D = 16/9. The ratios multiply to two: 9/8 * 16/9 = 2. E and middle-C, major third, have ratio 323/256 = 5/4, and E and high-C, minor sixth, have ratio 5/8, so high-C/E = 8/5. F and middle-C, minor fourth, have ratio 343/256 = 4/3, and F and high-C, major fifth, have ratio 2/3, so high-C/F = 3/2. G and middle-C, major fifth, have ratio 384/256 = 3/2, and G and high-C, minor fourth, have ratio 3/4, so high-C/G = 4/3. A and middle-C, major sixth, have ratio 430/256 = 5/3, and A and high-C, minor third, have ratio 5/6, so high-C/A = 6/5. B and middle-C, major seventh, have ratio 484/256 = 15/8, and B and high-C, minor second, have ratio 15/16, so high-C/B = 16/15.
The ratios always multiply to two. Tone-interval pairs together span one octave, twelve halftones. For example, first interval, with no halftones, and octave, with twelve halftones, fill one octave. Major fifth interval, with seven halftones, such as C to G, and minor fourth interval, with five halftones, such as G to high-C, fill one octave. Major sixth interval, with nine halftones, such as C to A, and minor third interval, with three halftones, such as A to high-C, fill one octave. Major seventh interval, with eleven halftones, such as C to B, and minor second interval, with one halftone, such as B to high-C, fill one octave. Minor fifth interval and major fourth interval fill one octave. Minor sixth interval and major third interval fill one octave. Minor seventh interval and major second interval fill one octave.
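A short Python check that the paired interval ratios multiply to two and the paired halftone counts sum to twelve (names and ratios follow the text):
PAIRS = [(("major fifth", 7, 3/2), ("minor fourth", 5, 4/3)),
         (("major sixth", 9, 5/3), ("minor third", 3, 6/5)),
         (("major seventh", 11, 15/8), ("minor second", 1, 16/15)),
         (("major second", 2, 9/8), ("minor seventh", 10, 16/9))]

for (name1, halftones1, ratio1), (name2, halftones2, ratio2) in PAIRS:
    assert halftones1 + halftones2 == 12            # halftones fill one octave
    assert abs(ratio1 * ratio2 - 2.0) < 1e-9        # ratios multiply to two
    print(name1, "+", name2, "= octave")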
tone intervals: golden ratio
In music, ratio 2^0.67 = 1.59 ~ 1.618... is similar to major sixth to octave 1.67, octave to major fourth 1.6, and minor seventh to major second 1.59. Golden ratio and its inverse can make all music harmonics.
tone harmonics
Tones have harmonics {tone harmonics} that relate to tone-frequency ratios.
tone harmonics: consonance
Tone intervals can sound pleasingly consonant or less pleasingly dissonant. Octave tone intervals 2/1 have strongest harmonics. Octaves are most pleasing, because tones are similar. Tones separated by octaves sound similar.
Major fifth and minor fourth intervals are next most pleasing. Major-fifth 3/2 and minor-fourth 4/3 tone intervals have second strongest harmonics.
Major third 5/4 and minor sixth 8/5 intervals are halfway between consonant and dissonant. Minor third 6/5 and major sixth 5/3 intervals are halfway between consonant and dissonant.
Major fourth 7/6 and minor fifth 12/7 intervals are dissonant. Major second 8/7 and minor seventh 7/4 intervals are dissonant, or major second 9/8 and minor seventh 16/9 intervals are dissonant. Minor second 16/15 and major seventh 15/8 intervals are most dissonant.
Ratios with smallest integers in both numerator and denominator sound most pleasing to people and have consonance. Ratios with larger integers in both numerator and denominator sound less pleasing and have dissonance.
Three tones can also have consonance or dissonance, because three tones make three ratios. For example, C, E, and G have consonance, with ratios E/C = 5/4, G/E = 6/5, and G/C = 3/2.
Tone ratios in octaves higher or lower than middle octave have same consonance or dissonance as corresponding tone ratio in middle octave. For example, high-G and high-C have ratio 6/4 = 3/2, same as middle-G/middle-C.
Tone ratios between octave higher than middle octave and middle octave have similar consonance as corresponding tone ratio in middle octave. For example, high-G and middle-C have ratio 3/1. Dividing by two makes high-G one octave lower, and middle-G/middle-C has ratio 3/2.
tone harmonics: beat frequencies
Frequencies played together cause wave superposition. Wave superposition makes new beat frequencies, as second wave regularly emphasizes first-wave maxima. Therefore, beat frequency is lower than highest-frequency original wave.
If wave has frequency 1 Hz, and second wave has frequency 3 Hz, they add to make 1-Hz wave, 3-Hz wave, and 2-Hz wave, because every other 3-Hz wave receives boost from 1-Hz wave. Rising 1-Hz wave maximum coincides with first rising 3-Hz wave maximum and falling 1-Hz wave maximum coincides with third falling 3-Hz maximum, while first falling 3-Hz wave maximum, middle rising and falling 3-Hz maximum, and third rising 3-Hz maximum cancel.
If one wave has frequency 2 Hz, and second wave has frequency 3 Hz, they add to make 2-Hz wave, 3-Hz wave, and 1-Hz wave, because every third 3-Hz wave receives boost from 2-Hz wave. First rising 2-Hz wave maximum coincides with first rising 3-Hz wave maximum, while first falling 3-Hz wave maximum, middle rising and falling 3-Hz maximum, and third rising and falling 3-Hz maximum cancel.
Beat frequency is difference between wave frequencies: 3 Hz - 2 Hz = 1 Hz in previous example. The beat is the amplitude envelope of the summed wave, not a new spectral component; a distinct difference-frequency tone arises only when a nonlinear system, such as the ear, processes the sum.
Two waves with small-integer frequency ratios superpose to have a beat frequency that has small-integer ratios with the original frequencies, so the beat interferes less with the originals. Two waves with large-integer frequency ratios superpose to have a beat frequency that has large-integer ratios with the original frequencies.
Middle-C has frequency 256 Hz, and middle-G has frequency 384 Hz, with ratio G/C = 3/2. The waves add to make 384 Hz - 256 Hz = 128 Hz beat wave, with ratio C/beat = 2/1 and G/beat = 3/1.
Middle-C has frequency 256 Hz, and middle-E has frequency 323 Hz, with ratio E/C = 5/4. The waves add to make 323 Hz - 256 Hz = 67 Hz beat wave, with ratio C/beat = 4/1 and E/beat = 5/1.
Middle-C has frequency 256 Hz, and middle-D has frequency 289 Hz, with ratio D/C = 9/8. The waves add to make 289 Hz - 256 Hz = 33 Hz beat wave, with ratio C/beat = 8/1 and D/beat = 9/1.
Middle-C has frequency 256 Hz, and middle-A has frequency 430 Hz, with ratio A/C = 5/3. The waves add to make 430 Hz - 256 Hz = 174 Hz beat wave, with ratio C/beat = 3/2 and A/beat = 5/2.
Middle-C has frequency 256 Hz, and middle-B has frequency 484 Hz, with ratio B/C = 15/8. The waves add to make 484 Hz - 256 Hz = 228 Hz beat wave, with ratio C/beat = 9/8 and B/beat = 17/8.
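As an illustration, a minimal Python sketch of the arithmetic above, using exact small-integer tunings (the text's 323 Hz for E is a rounded value; the exact 5/4 tuning is 320 Hz): the beat frequency is the difference of the two frequencies, and with small-integer tone ratios the beat stays in small-integer ratios to both tones.
from fractions import Fraction

def beat(f1_hz, f2_hz):
    # Beat frequency of two superposed tones, plus each tone's ratio to the beat.
    fb = abs(f1_hz - f2_hz)
    return fb, Fraction(f1_hz, fb), Fraction(f2_hz, fb)

# Exact small-integer tuning: C = 256 Hz, G = 3/2 * C = 384 Hz, E = 5/4 * C = 320 Hz.
fb, c_over_beat, g_over_beat = beat(256, 384)
print(fb, c_over_beat, g_over_beat)   # 128 2 3: beat one octave below C
fb, c_over_beat, e_over_beat = beat(256, 320)
print(fb, c_over_beat, e_over_beat)   # 64 4 5: beat two octaves below C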
Roger Shepard [1964] gradually increased or decreased all tones of a chord, keeping the tones separated by octaves. Pitch repeats when reaching the next octave, so tones rise or fall but do not keep rising or falling {Shepard tone} {Shepard scale}, an auditory illusion.
Brain recognizes music by rhythm or by intonation differences near main note {music, processing}. Brain analyzes auditory signals into tone sequences with pitches, durations, amplitudes, and timbres. First representation {grouping structure} segments sound sequence into motifs, phrases, and sections. Second representation {metrical structure} marks sequence with hierarchical arrangement of time points {beat}.
Brain can find phrasing symmetries {time-span reduction}, using grouping and metrics.
Brain can hierarchically arrange tension and relaxation waves {prolongational reduction}. In Western music, prolongational reduction has slowly increasing tension followed by rapid relaxation.
Brain-injured people can be unable to distinguish voices but can recognize other sound types {hearing, problems}. If they listen to speech recorded using different voices for different syllables, they cannot understand words.
Middle-ear bone or tendon damage decreases sound amplitude {conductive hearing loss}.
Infection causes middle-ear inflammation {otitis media}|, typically in children.
Middle-ear bones can grow abnormally {otosclerosis}|, affecting hearing.
Adverse conditions {ototoxic} can affect balance or hearing more than other systems.
Auditory-nerve or cochlea damage decreases loudness {sensorineural hearing loss}.
Perhaps, cochlea has band-pass filters {critical band theory}.
Perhaps, brain detects pitch by adding subharmonic frequencies, stimulus frequency divided by successive integers, weighted by the ratios, down to about 20 Hz {harmonic weighting}. 360 Hz uses 180/2, 120/3, 90/4, 72/5, 60/6, 51.4/7, 45/8, 40/9, 36/10, 32.7/11, 30/12, and so on, where 180/2 denotes the 180-Hz subharmonic obtained by dividing by 2. 720 Hz uses 360/2, 240/3, 180/4, 144/5, 120/6, 102.8/7, 90/8, 80/9, 72/10, 65.4/11, 60/12, 51.4/14, 45/16, 40/18, 36/20, 32.7/22, 30/24, and so on.
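As a speculative sketch of this harmonic-weighting idea in Python, listing subharmonics down to about 20 Hz; weighting each subharmonic by 1/divisor is an assumption, since the text does not give the weights:
def weighted_subharmonics(freq_hz, floor_hz=20.0):
    # Subharmonics freq/n with assumed weight 1/n, listed down to the floor frequency.
    result = []
    n = 2
    while freq_hz / n >= floor_hz:
        result.append((round(freq_hz / n, 1), round(1.0 / n, 3)))
        n += 1
    return result

print(weighted_subharmonics(360))  # (180.0, 0.5), (120.0, 0.333), (90.0, 0.25), ...
# 360 Hz and 720 Hz share many subharmonics (180, 120, 90, 72, 60, ...),
# so summed subharmonics could signal their shared pitch relation.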
At frequencies above 900 Hz, brain detects stimulus frequency by cochlea-hair maximum-amplitude location {place coding} {place theory}, so pitch depends on activity distribution across nerve fibers.
At frequencies below 900 Hz, brain detects stimulus frequency by impulse timing {temporal theory} {temporal code}, because timing tracks frequency. Adjacent auditory neurons fire at same phase {phase locking, code} and frequency, because adjacent hair cells link and so push and pull at same time.
Perhaps, sound intensity depends on number of activated basilar-membrane sense cells and special high-threshold cells {threshold, hearing} [Wilson, 1971] [Wilson, 1975] [Wilson, 1998].
For frequencies less than 2400 Hz, frequency detection depends on cooperation between neuron groups firing in phase {volley theory} {volley code}. For frequencies less than 800 Hz, auditory-neuron subsets fire every cycle. For frequencies above 800 Hz and less than 1600 Hz, auditory-neuron subsets fire every other cycle. For frequencies above 1600 Hz and less than 2400 Hz, auditory-neuron subsets fire every third cycle {volley principle}. For example, three neurons firing at 600 Hz every third cycle can represent frequency of 1800 Hz.
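As an illustration, a minimal Python sketch of the volley principle: several neuron subsets, each firing on only every k-th stimulus cycle, jointly mark every cycle; the 1800-Hz, three-subset case follows the example in the text:
def volley_spike_times(stimulus_hz, n_subsets, n_cycles=9):
    # Each subset fires on every n_subsets-th cycle, offset by one cycle per subset.
    period = 1.0 / stimulus_hz
    return [[cycle * period for cycle in range(offset, n_cycles, n_subsets)]
            for offset in range(n_subsets)]

subsets = volley_spike_times(stimulus_hz=1800, n_subsets=3)
for i, spikes in enumerate(subsets):
    print(f"subset {i}: {len(spikes)} spikes in 9 cycles, about {1800 / 3:.0f} per second")
combined = sorted(t for spikes in subsets for t in spikes)
print(len(combined) == 9)  # together the subsets mark every stimulus cycle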
Stomach receptors {hunger sense} measure blood-glucose concentration and send to neurons that cause slow hunger contractions.
People can feel that they have nutrient deficiency and be hungry for that nutrient {specific hungers theory} (Curt Richter). This theory is true for salt and sugar but not for vitamins.
Sense systems {kinesthesia}| {kinesthesis} {kinesthetic sense} {proprioception} use mechanoreceptors to detect relative body-part positions, angles, forces, torques, and motions, including position changes during and after movements. Kinesthetic system measures body-point displacements from equilibrium and then calculates relative point-pair distances and point-triple angles. Body movements and outside forces move body points sequentially and change body-point relations in regular and repeated ways, so brain builds and remembers motor patterns that allow muscle coordination and balance. Kinesthesia is not conscious, because it is internal.
relations: touch
Touch detects body-surface pressures and temperatures and coordinates with kinesthesia to determine true distances and times.
relations: proprioception
Kinesthesia includes proprioception.
relations: vestibular system
Kinesthetic system includes vestibular system.
relations: cerebellum
Cerebellum coordinates body movements and communicates with kinesthetic system.
problems
Proprioceptive receptor and nerve inflammation impairs body-position sensation. Nerve damage can impair movement consciousness.
Kinesthesia, touch, and vestibular system {somatosensation} provide body information.
Kinesthesia-and-touch pressure and vibration receptors {mechanoreceptor} detect relative body-part positions, including position changes caused by movements.
Muscle mechanoreceptors {annulospiral ending} code muscle length and muscle-length-change rate and send positive feedback to muscle.
Muscle mechanoreceptors {flower spray ending} code muscle length, slowly excite flexor muscles, and slowly inhibit extensor muscles.
To react to fast tendon-length change, tendon mechanoreceptors {Golgi tendon organ} measure tension above (high) threshold, detect inverse-stretch-reflex active contraction, and send negative feedback to muscles attached to tendons.
Muscle mechanoreceptors {muscle spindle} measure tension.
Muscles, tendons, joints, alimentary canal, and bladder have mechanoreceptors {stretch receptor}, such as flower-spray endings and annulospiral endings, that detect pulling and stretching. Neck muscle-and-joint stretch receptors indicate head direction with respect to body.
People can typically detect small magnetic gradients {magnetism, sense}, using receptors related to kinesthesia. For example, muscles can react to weak terrestrial-magnetism changes caused by underground water.
Stomach receptors {nausea, sense} measure toxins and send to neurons that cause slow stomach contractions.
People have acute or dull personal discomfort and avoidance feelings {pain, sense}. Some people cannot feel pain.
physical properties
Painful events include tissue strains and releases of molecules that cause chemical reactions. Molecules vary in size, shape, chemical sites, and vibration states. Chemicals vary in concentration. Painful chemicals chemically bind to tissue chemical receptors.
properties
Pains can be throbbing, burning, dull, or acute/sharp. People perceive pain at body locations and also have overall bad feelings. Lower back pains are the most common. Deviating from chemical and function equilibrium is typically not painful. People in pain can still have humor and laughter.
nature
Perhaps, pain includes dislike or avoidance. Pains are not concepts, observations, or judgments. Pain is not intentional but is only about itself.
brain
Pain uses cerebral cortex and is always conscious. Pain perception uses thalamus and is not conscious. Pain differs in species, because neocortices differ. Squid seem to feel pain.
factors
Prior experience influences pain. Pain anticipation increases pain. Body movement can lessen sharp pain and increase chronic pain. Sensitivity to pain is greatest at 9 PM. Pain sensitivity decreases with age.
senses
Temperature and nociceptive receptor systems interact. Tactile and nociceptive receptor systems interact.
evolution
Humans seem to have higher sensitivity to pain than other mammals. Lower animals have even less pain. Squid seem to feel pain.
development
By 156 days (five months), fetus can have pain. Newborns can have pain. By 4 months, infants have undifferentiated fear reactions to people and animals associated with pain, and so coordinate vision and pain perceptions.
The pain system has skin receptors with ion channels, neurons, fibers, fiber tracts, and brain regions. Pain chemical receptors send to dorsal-horn neurons, which send to cortical regions. Cortex and thalamus control pain {pain, anatomy}.
Skin and body receptors (nociceptor) chemically bind endomorphins, prostaglandins, bradykinin peptides, and protein hormones (such as nerve growth factor), molecules released by inflammation and tissue damage [Woolf and Salter, 2000].
fibers
Body organs and mesentery have pain fibers. Internuncial neurons have pain fibers. Pain fibers are A, C, III, IV, and nociceptive fibers. Large myelinated fibers detect moderate stimulation. Small myelinated fibers detect all stimulations. Myelinated fibers detect sharp localized skin pain. Unmyelinated fibers detect dull deep unlocalized body pain. Itching nerves are separate from pain nerves.
brain
Anterior cingulate gyrus, frontal lobe, Lissauer's tract, locus coeruleus, nociceptive system, protopathic pathway, raphé nuclei, reticular formation, sensory reticular formation, sensory thalamus, spinal cord, spinoreticular tract, and spinothalamic tract affect pain. Throbbing pain, burning pain, and sharp pain use different brain regions. Cingulate cortex receives pain information [Chapman and Nakamura, 1999]. Cortex has pain center connected to sense areas. Reticular formation regulates pain.
brain pathways
Feeling pain and reacting to it involve separate pathways. Spinothalamic tract and central gray-matter path carry pain fibers. Internuncial neurons have pain fibers. Body organs and mesentery have pain fibers. Lemniscal tract has no pain fibers but affects pain. Abdominal pain signals travel in subdiaphragmatic vagus nerve to nucleus tractus solitarius, nucleus raphe magnus, and spinal-cord dorsolateral funiculus [Ritter et al., 1992].
Connective-tissue dendritic cells {nerve-associated lymphoid cells} (NALC) have interleukin-1 binding sites, send to sensory vagus-nerve paraganglia, and are near macrophages, mast cells, and other dendritic cells [Goehler et al., 1999].
Connective-tissue nerve-associated lymphoid cells send to neuron groups {paraganglia} that send along sensory vagus nerve [Goehler et al., 1999].
Nociceptors can have proton ion channels {acid-sensing ion channel} (ASIC).
Nociceptors and other neurons have special calcium-ion channels {N-type calcium channel} {calcium channel, N-type}. Ziconotide (Prialt), modified cone-snail venom, inhibits N-type calcium channels to lessen pain. Gabapentin (Neurontin) anticonvulsant binds to N-type calcium channels.
Outside CNS, nociceptors and other neurons have special sodium-ion channels {TTX-resistant voltage-gated sodium channel}.
Nociceptors and all neurons have sodium-ion channels {voltage-gated sodium channel} {sodium channel, voltage-gated} that open by voltage changes.
Nociceptors can have receptors {bradykinin receptor} for small bradykinin peptides, produced by peripheral inflammation.
Dorsal-horn neurons receive input from nociceptors and have calcitonin peptide receptors {calcitonin receptor} {calcitonin gene-related peptide receptor} (CGRP receptor).
Mouth nociceptors can have pepper-molecule receptors {capsaicin receptor} {VR1 receptor}, which also react to high temperature and protons.
Peripheral pain nerves can add chemical receptors {hormone receptor}. For example, stress hormones can attach to stress-hormone receptors and cause pain [Woolf and Salter, 2000].
Nociceptors can have protein-hormone receptors {nerve growth factor receptor} (NGF receptor).
All neurons that receive input from nociceptors have glutamate receptors {NMDA receptor, pain}. Dorsal-horn neurons have glutamate receptors with a specific subunit {NR2B subunit}.
NTRK1 gene makes receptors {neurotrophin tyrosine kinase receptor type 1} (NTRK1 receptor). NTRK1-gene mutations can cause a rare autosomal recessive disease (CIPA), with pain insensitivity, no sweating, self-mutilation, fever, and mental retardation.
Skin receptors {nociceptor} can detect pain, to warn about skin damage.
Many neurons, including nociceptors, have opium-compound receptors {opioid receptor}.
Nociceptors can have endomorphin receptors {prostaglandin receptor}.
Dorsal-horn neurons receive input from nociceptors and have substance-P receptors {neurokinin-1 receptor} (NK-1 receptor) {substance P receptor}. Substance P can carry saporin toxin into dorsal-horn neurons and kill them.
Pain control is at first synapse, near spinal cord {pain, physiology}. Prostaglandins block glycine receptors and so excite dorsal-horn neurons. More and wider brain activation indicates more pain [Chapman and Nakamura, 1999]. Drugs can make pain feel pleasurable. The fundamental pain characteristic is repulsion or withdrawal, and the fundamental pleasure characteristic is attraction or advance [Duncker, 1941].
Tissue damage, inflammation, and high-intensity stimuli release chemicals that excite nociceptors. Pain detects and measures relative concentrations of pain-causing chemicals released by body inelastic strains or tissue damage. People can distinguish strength and type of pain.
High pressure, high temperature, harsh sound, intense light, and sharp smells and tastes cause neuron changes {pain, causes}. Inflammation or acute-pain aftereffects can cause pain.
Pain involves too much small-nerve-fiber activity, uninhibited by large neurons. Blows to body release histamines, bradykinin, and prostaglandins, which excite neurons. Gut distension causes pain, but gut squeezing, cutting, and burning do not. Infection can amplify pain. Tissue damage can amplify pain. Damaged tissue activates immune cells, which release molecules that excite nerves and glia. Arginine vasopressin, encephalin, endorphin, and substance P can affect pain.
Randomly placed brainstem electrodes produce pain 5% of time. Direct cerebral-cortex stimulation can cause other sense qualities but never causes pain. Cortex stimulation does not decrease pain.
Pain causes people to push painful object farther away or to move farther from pain source {pain, effects}. Sharp pain causes withdrawal reflexes, writhing, jumping away, and wincing as people try to alleviate pain. Writhing escapes stimulus or pushes away stimulus. Painful skin stimuli cause flexion reflexes. Muscle contractions inhibit blood flow and squeeze out poisons. To avoid reinjury and allow body to rebuild rather than use, dull and chronic pain reduces overall activity. People can have no reaction to pain.
Pain causes attention to object. People cannot ignore pain caused by high-intensity stimulus. Pain makes other goals seem unimportant. To allow recovery from tissue damage, pain causes attention to damage, such as wounds. To avoid future pain causes, pain triggers learning about possibly painful situations. People also learn pain responses.
Pain can cause anxiety, increase breathing rate, increase blood pressure, dilate pupils, increase sweat, and make time appear to flow more slowly.
Spinal-cord dorsal-horn substantia-gelatinosa neural circuits receive signals from brain and inhibit nerve-impulse flow from spinal cord to brain {gate control theory of pain}|. Large-fiber inputs, such as from gentle rubbing {counterstimulation}, stimulate substantia-gelatinosa neurons to inhibit signal flow, closing the gate. Small-fiber inputs, such as from pinching {diffuse noxious inhibitory control} {counterirritation}, inhibit substantia-gelatinosa neurons to release signal flow, opening the gate. Direct brain signals also inhibit flow and close the gate [Melzack, 1973] [Melzack, 1996].
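As a toy illustration only, a Python sketch of the gating idea: the transmitted pain signal rises with small-fiber input and falls with large-fiber and descending brain input acting through the substantia-gelatinosa gate; the linear form and weights are assumptions, not the published model:
def gate_output(small_fiber, large_fiber, brain_inhibition=0.0):
    # Large fibers and descending brain signals excite the inhibitory gate;
    # small fibers inhibit it, so strong small-fiber input keeps the gate open.
    gate_inhibition = max(0.0, large_fiber + brain_inhibition - 0.5 * small_fiber)
    return max(0.0, small_fiber - gate_inhibition)

print(gate_output(small_fiber=1.0, large_fiber=0.0))  # 1.0: full pain signal, gate open
print(gate_output(small_fiber=1.0, large_fiber=0.6))  # 0.9: gentle rubbing partly closes the gate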
Pain-activated microglia (immune cells) release pro-inflammatory cytokines, which activate glia {glial activation} and cause pain, but other glia types do not release cytokines in response to pain. Spinal glial activation affects nociceptive neurons at NMDA receptors.
Blocking glial activation with drugs blocks pathological pain. Blocking neuron pro-inflammatory-cytokine receptors with drugs does not affect normal pain responses but does decrease exaggerated pain responses. Intrathecal drugs {fluorocitrate} can inhibit glial metabolism. Acids {kynurenic acid} {2-amino-5-phosphonovaleric acid} (AP-5) can prevent such inhibition. Amines {6,7-dinitroquinoxaline-2,3-dione} (DNQX) {picrotoxin} and strychnine do not prevent such inhibition [Ma and Zhao, 2002] [Watkins et al., 2001].
Chemicals, biofeedback, distraction, and imagery can lessen pain {pain relief}. Hypnosis can relieve pain.
Endorphin and dynorphin inhibit pain pathways. Flight-or-fight responses use endorphin neurotransmitters to suppress pain. Aspirin and nitrous oxide alleviate pain. Opiate drugs, such as morphine, are similar to endorphin and suppress pain. Ziconotide (Prialt), modified cone-snail venom, inhibits N-type calcium channels to lessen pain.
Adaptation, distraction, or drugs can decrease pain {analgesia, pain}|.
Drugs can make pain be felt but not remembered {hyoscine sleep}|. Twilight-sleep drug, from thorn apples, binds to acetylcholine receptors and affects long-term memory recall.
Inserting large needles at skin locations {acupuncture}| can reduce pain. Acupuncture-needle stimulation activates brain area that makes endorphin and dynorphin to inhibit pain pathways. Traditional acupuncture-needle insertion sites correspond to myofascial-nerve locations. Traditionally, acupuncture makes energy {qi} travel along body meridians.
Massaging with ice {ice massage} reduces pain.
Stimulating brain area that makes endorphin and dynorphin {transcutaneous electrical nerve stimulation} (TENS) inhibits pain pathways.
In undamaged areas, receptors and neurons can have sensitization, so people feel pain from stimuli that are not typically painful {allodynia}.
Intra-uterine devices can cause uterine pain {dysmenorrhoea}.
People can perceive pain {extra-territorial pain} in undamaged tissue near damaged tissue.
Without tissue damage or infection, peripheral pain nerves can increase spontaneous activity and cause pain {false pain}.
People can be sensitive to touch and have low pain threshold {hyperaesthesia, pain}.
Receptor or nerve sensitization can cause greater than normal reaction to pain stimuli {hyperalgesia}.
Tabes dorsalis has shooting pains {lightning pain} [Charcot, 1890].
People can perceive pain {mirror pain}| in undamaged tissue on body side opposite damaged tissue.
Chronic pain {neuropathic pain} can persist after nervous-system injury. Injury can change skin receptors {peripheral neuropathic pain}. Injury can change spinal-cord dorsal horn {central neuropathic pain}.
People that lose limbs often feel like they still have limb or feel sense qualities from former region {phantom limb}| [Melzack, 1992] [Ramachandran and Blakeslee, 1998] [Weir-Mitchell, 1872].
Pleasure feels different in different senses {pleasure, sense}.
causes
Pleasure can result from satisfying desire, overcoming body deficiency or excess, realizing potential, euthumia, eudaimonia, or having pain-free and tranquil state. Pleasure results from intermediate intensity, energy, or concentration on intermediate-size area. Pleasurable stimuli have simple pattern, low contrast, slow variation, slow movement, and relaxed time flow. Light touch, slight warmth or cooling, soothing sound, soft light, and mild smells and tastes can cause pleasure. Absolute intensity, simple or complex pattern, high or low contrast, fast or slow variation, fast or slow movement, physical location, and time of day do not associate with pleasure.
behavior
The fundamental pleasure characteristic is attraction or advance toward stimulus [Duncker, 1941]. Pleasure causes attention to object. Pleasure causes motivation to draw object nearer to increase pleasure. People can ignore pleasure.
effects
Pleasure increases blood flow. Pleasure causes time to appear to flow more rapidly. Pleasure causes liking, preferring, or desiring. Pleasure is rewarding.
To give same pleasure amount later, stimulus intensity must increase.
brain
Medial forebrain bundle runs from forebrain to brainstem and sends to ventral-tegmentum dopamine neurons, which affect forebrain. Pleasure differs in different species, because neocortex differs.
Randomly placed brainstem electrodes produce pleasure 35% of time. Randomly placed brainstem electrodes produce neither pleasure nor pain 60% of time.
nature
Perhaps, pleasure is a cognition that comes after another sensation. Pleasure is not intentional but is only about itself. Desire for pleasure is hard to understand, because desire is for objects, but pleasure is inside oneself.
Pleasure can come from being virtuous {eudaimonia}.
Pleasure can come from being cheerful {euthumia}.
Chemicals dissolved in air chemically bind to upper-nose odor receptors {smell, sense}| {olfaction}. Smell qualities depend on molecule electrical and spatial-configuration properties, such as shape, acidity, and polarity. Smell is a synthetic sense, with some analysis. People can distinguish 20 to 30 primary odors and more than 10,000 different odors.
physical properties
Smellable molecules include many types of typically hydrophobic volatile substances with molecular weights between 30 and 350. Air-borne molecules vary in size, shape, chemical sites, and vibration states. Air-borne chemicals vary in concentration. Smellable chemicals chemically bind to upper-nasal-passage chemical receptors.
primary-odor receptors
Some people cannot smell camphorous, fishy, malty, minty, musky, spermous, sweaty, or urinous odors (primary odor). Camphorous molecules have multiple benzene rings. Fishy molecules are three-single-bond monoamines. Malty molecules are aldehydes. Minty molecules have a benzene ring and an oxygen-containing side group. Musky molecules have multiple rings. Spermous molecules are aromatic amines. Sweaty molecules are carboxylic acids. Urinous molecules are steroid ketones. Fruity molecules are organic alcohols.
types
Odors can be acidic, acrid or vinegary, alliaceous or garlicy, ambrosial or musky, aromatic, burnt or smoky, camphorous or resinous, ether-like, ethereal or peary, floral or flowery, foul or sulfurous, fragrant, fruity, goaty or hircine or caprylic, minty, nauseating, peppermint-like, pungent or spicy, putrid, spearmint-like, sweaty, and sweet.
qualities
Smells can be sweet, acidic, or sweaty. For example, musk, ether, ester, flowery, fruity, and musky are dull, sweet, and smooth. Vinegar and acid are sharp, sour, and harsh.
Smells can be cool, like menthol, or hot, like heavy perfume. For example, menthol is cool, and perfume is hot.
Aromatic, camphorous, ether, minty, musky, and sweet are similar. Acidic and vinegary are similar. Acidic and fruity are similar. Goaty, nauseating, putrid, and sulfurous are similar. Smoky/burnt and spicy/pungent are similar. Camphor, resin, aromatic, musk, mint, pear, flower, fragrant, pungent, fruit, and sweets are similar. Putrid or nauseating, foul or sulfur, vinegar or acrid, smoke, garlic, and goat are similar. Vegetable smells are similar. Ethers are vegetable. Animal smells are similar. For example, caprylic acid and carboxylic acids are animal. Halogens are mineral.
Acidic and sweet smells are opposites. Sweaty and sweet smells are opposites.
Smell always refers to object that makes smell, not to accidental or abstract property nor to concept about smell. In contrast, color always refers to object property.
Odors have same physical properties, and smell physiological processes are similar, so odor perceptions are similar, with same odors and odor relations, for people with undamaged smell systems. Smells relate in only one consistent and complete way. Smells do not have symmetric smell relations, so smells have unique relations. Smells cannot substitute or switch.
People can smell specific odors and not others. People can smell sweet as putrid and have other smell exchanges. People can always smell something.
mixing
Smells blend in concordances and discordances, like music harmonics. Pungent and sweet can mix. Pungent and sweaty can mix. Perhaps, smells can cancel other smells, not just mask them.
timing
Brain detects aldehyde smells first {top note, smell}. Brain detects floral smells second {middle note, smell}. Brain detects lingering smells, such as musk, civet, ambergris, vanilla, cedar, sandalwood, and vetiver, later {base note, smell}.
properties
Smell habituates quickly. Smell is in real time, with a half-second delay. Smell short-term memory is poor. Smell strength decreases with age. Fats absorb pungent food odors.
Butyrate and squalene odor patterns identify species members. In mammals, small pheromone amounts establish territories [Pantages and Dulac, 2000]. Humans have strong odors from hair-follicle apocrine glands. Perhaps, human odor warns predators away. Babies have small glands. Stress seems to cause odor. Menses smells like onions.
source location
Olfactory bulb preserves odor-receptor spatial relations. Smell cortex can detect smell location in space. Smell can detect several sources from one location. Smells from different sources can interfere.
diseases
Diabetes smells like sugar or acetone. Measles smells like feathers. Nephritis smells like ammonia. Plague smells like apples. Typhus smells like mice. Yellow fever smells like meat.
emotions
Smells can make people feel disgusted, intoxicated, sickened, delighted, revolted, excited, hypnotized, and pleasured. Smells can be surprising, because smells have many combinations.
evolution
Perhaps, the first smells were mating, food, or poison signs.
development
In first few days, newborns can distinguish people by odor.
relations to other senses
Taste and retronasal-area smell can combine to make flavor. Taste has higher concentration than smell. Smell uses air as solvent, and taste uses water. Smell does not use molecule polarization, but taste does. Smell does not use molecule acidity, but taste does. Smells interfere with each other, but tastes are separate and independent. Taste does not use molecule vibrations, but perhaps smell uses vibrations. Taste and smell are both often silent. Taste and smell have early, middle, and late sensations. Smells and tastes have spatial source.
Smell is at body surface and so has touch. Touch can feel air near smell-receptor cells and react to noxious smells. Touch locates smell-receptor cells in upper nose. Trigeminal nerve carries signals from nose warmth-coolness, touch, and pain receptors.
Smell uses tactile three-dimensional space to locate smells in space.
Odor is painful at high concentrations.
People have upper-nostril skin areas, with molecule shape, size, and vibration receptors {smell, anatomy}. Smell uses more than 30 odor-receptor types, each with variations, making a thousand combinations. Smell-neuron axons go to older mammal-forebrain rhinencephalon, near frontal lobe, not to thalamus as other sense axons do. Invertebrates have skin odor receptors.
Odor receptors send to olfactory-bulb glomeruli, which send to cortical regions.
Behind eyebrow, where nose meets skull, is bone {cribriform plate} with many nerve-sized holes, through which olfactory-neuron axons go to olfactory bulb.
Olfactory epithelium has cells {basal cell} that can become olfactory neurons.
Olfactory-receptor cells send to neurons {mitral cell}, whose top dendrites go to horizontal cells to receive lateral inhibition and whose bottom branches are recurrent collateral axons to spread lateral inhibition. Mitral-cell axons go to anterior-olfactory-nucleus and prepyriform-cortex superficial and deep pyramidal neurons.
Olfactory-receptor cilia have molecules that bind odorants. Smell system has a thousand different protein receptors {olfactory receptor}, with seven to eleven major odor-receptor types, which each have a dozen minor types. People have ten million odor-receptors in each nostril. Dogs have 200 million. Odor-receptors die every month, and then new ones grow.
Of 1000 olfactory-receptor genes, 65% are not functional in humans. In Old World monkeys, 30% are not functional. In New World monkeys, 18% are not functional. In dogs, 20% are not functional. Odor-receptor chemical sites are for alcohols, aldehydes, amines, aryls, carboxylic acids, esters, ethers, halogens, ketones, cysteines, thiols, sulfides, or terpenes. Sites can be for small, medium, or large molecules [Firestein, 2001] [Laurent et al., 2001]:
Alcohols that are small, such as methanol and ethanol, smell alcoholy, biting, and hanging.
Alcohols that are medium-chain, such as butanol and octanol, smell sweet and fruity.
Alcohols that are cyclic, such as menthol, smell cool and minty.
Alcohols that are monoterpenoids, such as geraniol and linalool, smell flowery and fresh.
Alcohols that are monophenols, such as phenol and guaiacol, smell burnt and smoky.
Alcohols that are polyphenols, such as cresol, smell tarry and oily.
Aldehydes that are small, such as diacetyl aldehyde, smell buttery.
Aldehydes that are short-chain, such as isovaleraldehyde, smell malty.
Aldehydes that are alkene aldehydes, such as hexenal, smell grassy and herby.
Amines that are alkyl and aryl monoamines, such as trimethylamine and phenethylamine, smell fishy.
Amines that are alkyl multi-amines, such as putrescine, smell spermous.
Amines that are heterocyclic amines, such as pyrroline, smell spermous.
Amines that are heterocyclic aromatic, such as alkyl pyrazines, smell nutty, earthy, and green peppery.
Amines that are heterocyclic aromatic, such as 2-acetyl-tetrahydro-pyridine, smell roasted, fermented, and popcorny.
Aryls that are benzene alkyls, such as benzene, toluene, and xylenes, smell aromatic.
Aryls that are monophenols, such as phenol and guaiacol, smell burnt and smoky.
Aryls that are polyphenols, such as cresol, smell tarry and oily.
Aryls that are polycyclic aromatic hydrocarbons, such as anthracene and pyrene, smell burnt and smoky.
Aryls that are polycyclic in small concave sites, such as camphor, smell camphorous and resinous.
Aryls that are aryl monoamines, such as phenethylamine, smell fishy.
Carboxylic acids that are small, such as acetic acid, smell acrid, vinegary and pungent.
Carboxylic acids that are medium-short polar chains, such as butyric acid (butanoic acid), smell putrid, sweaty and rancid.
Carboxylic acids that are medium-length polar chains, such as caprylic acid (octanoic acid), smell goaty and hircine.
Carboxylic acids that are carboxylic-acid thiols, such as dithiolane-4-carboxylic acid, smell asparagusy and bitter.
Esters that are non-polar chains, such as methyl butyrate, smell sweet and fruity.
Ethers that are linear in concave and trough-shaped sites, such as ethyl methyl ether, smell fragrant, ethereal, floral and flowery.
Ethers that are cyclic, such as dioxacyclopentane, smell earthy, moldy and potatoey.
Halogens, such as fluorine, chlorine, and bromine, smell pharmaceutical, medicinal, pungent, and unpleasant.
Ketones that are heterocyclic, such as furanone and lactones, smell savory and spicy.
Ketones that are alkane ring ketones, such as steroid ketones, smell urinous.
Ketones that are macrocyclic in large concave sites, such as muscone (methylcyclopentadecanone), smell musky and ambrosial.
Ketones that are alkenes with one ring, such as ionones, damascones, and damascenones, smell tobaccoy.
Ketones that are cyclic alkene ketones in V-shaped sites, such as terpenoids and R-(-)-carvone (2-methyl-5-(1-methylethenyl)-2-cyclohexenone), smell minty, spearminty, and pepperminty.
Sulfur compounds that are cysteines, such as gamma-glutamylcysteines and cysteine sulfoxides, smell alliaceous and garlicy.
Sulfur compounds that are carboxylic-acid thiols, such as dithiolane-4-carboxylic acid, smell asparagusy and bitter.
Sulfur compounds that are small thiols, such as methyl mercaptan (methanethiol), smell foul, sulfurous, and rotten.
Sulfur compounds that are sulfides, such as methyl sulfides, smell cabbage-like and rotten at high concentrations.
Terpenes that are cyclic alkene ketones in V-shaped sites, such as terpenoids and R-(-)-carvone (2-methyl-5-(1-methylethenyl)-2-cyclohexenone), smell minty and pepperminty.
Terpenes that are monoterpenoid alcohols, such as geraniol and linalool, smell flowery and fresh.
Terpenes that are isoprenes and monoterpenes, such as isoterpene, smell rubbery.
Terpenes that are sesquiterpenes and triterpenes, such as humulene, smell woody.
Some sites are for both alcohol and terpene, alcohol and aryl, amine and aryl, carboxylic acid and thiol, or ketone and terpene.
Some sites are for carbon chains and rings: alkyls, alkenes, single rings, multiple rings, single heterocyclic rings, multiple heterocyclic rings, single aromatic rings, and multiple aromatic rings.
A limbic-system region {amygdala-hippocampal complex} measures smell associations and emotions.
Olfactory nerves, mitral cells, and tufted cells converge on olfactory-bulb spheres {glomerulus, smell} {glomeruli, smell}. Olfactory receptors send to one lateral glomerulus and one medial glomerulus. Glomeruli receive from one or more olfactory receptors and detect one odor or odor combination.
At nose tips, mammals have a ganglion {Grueneberg ganglion} that detects alarm pheromones (Hans Grueneberg) [1973].
Mammal nasal-cavity bases have smell neurons {vomeronasal system} {Jacobson's organ} {Jacobson organ} for sex-signal and other pheromones. Axons go to accessory olfactory bulb and then to amygdala [Holy et al., 2000] [Johnston, 1998] [Keverne, 1999] [Stowers et al., 2002] [Watson, 2001].
Odor receptors send output directly, left to left and right to right, to 2-mm-diameter brain region {olfactory bulb}| above and behind nose. Olfactory receptors send axons to mitral cells. Mitral-cell axons go to anterior-olfactory-nucleus and prepyriform-cortex superficial and deep pyramidal neurons. Pyramidal neurons send recurrent collateral axons to superficial pyramidal neurons and stellate cells. Pyramidal neurons have post-synaptic apical dendrites that receive from other pyramidal neurons. Tufted cells are local. Olfactory nerves, mitral cells, and tufted cells meet in olfactory-bulb glomeruli. Olfactory bulb preserves odor-receptor spatial relations. Olfactory bulb has fewer neurons than number of odor receptors.
Olfactory-bulb signals go to pyriform cortex, amygdala-hippocampal complex, and entorhinal complex {olfactory cortex}.
Nasal passages guide air onto olfactory epithelium {olfactory cleft} at nose back.
Upper-nose olfactory-cleft mucus cells {olfactory epithelium} {orthonasal olfactory system} are olfactory receptors, basal cells, and supporting cells. In mammals, odor receptors are at nose air-passage top or back. In humans, smell regions are four square-centimeters. Olfactory epithelium is mostly small sensory cells {olfactory sensory neuron} (OSN), with cilia that have odor receptors.
Olfactory region is light yellow in humans and dark yellow or brown in animals. Albinos have white regions and typically have poor smell ability.
Chewing and swallowing can send odorant up rear nasal tract {retronasal olfactory system}. People think sense qualities are in mouth. Orthonasal olfactory system is about outside environment, while retronasal olfactory system is about nutrients and poisons.
Inner-nose ridges {turbinate}| channel inhaled air to olfactory epithelium.
Objects can have smell {odor, smell}| to humans. Odorants mix to make odor.
Molecules can have smell {odorant} to humans. Odorants must be volatile. Airborne-molecule chemical-bond configurations (shapes) and vibration and rotation frequencies and intensities cause smell. Odorant molecules have molecular weight greater than 35 and less than 350, not too small nor too large for olfactory receptors. Odorants are typically hydrophobic.
Pungent odorants are compact non-polar aryl compounds. Sweet odorants are non-polar chain esters. Sweaty odorants are polar chain organic acids. Right-handed and left-handed chiral molecules, like spearmint and caraway, smell different.
primary
People can distinguish about 30 primary odorants:
alliaceous and garlicy: cysteine sulfur compounds
aromatic: benzene alkyls
asparagusy, bitter: carboxylic-acid thiols
biting, hanging, alcoholy: small alcohols
burnt, smoky: monophenols and polycyclic aromatic hydrocarbons
buttery: small aldehydes
camphorous, resinous: polycyclic aryls
cool and minty: cyclic alcohols
earthy, moldy, potatoey: cyclic ethers
fishy: alkyl and aryl monoamines
flowery, fresh: monoterpenoid alcohols
foul, rotten, sulfurous: small thiol sulfur compounds
fragrant, floral, flowery, ethereal: linear ethers
fruity, sweet: medium-chain alcohols and non-polar chain esters
goaty, hircine: medium-length polar chain carboxylic acids
grassy, herby: alkene aldehydes
malty: short-chain aldehydes
minty, spearminty, pepperminty: cyclic alkene ketones
musky, ambrosial: macrocyclic ketones
nutty, earthy, green peppery: heterocyclic aromatic amines
pharmaceutical, medicinal, pungent, unpleasant: halogens
pungent, acrid, vinegary: small carboxylic acids
putrid, sweaty, rancid: medium-short polar chain carboxylic acids
roasted, fermented, popcorny: heterocyclic aromatic amines
rubber: monoterpenes (isoprenes)
cabbage-like, rotten: methyl sulfides
savory, spicy: heterocyclic ketones
spermous: alkyl multi-amines and heterocyclic amines
tarry, oily: polyphenols
tobacco: alkenes-with-one-ring ketones
urinous: steroid ketones
woody: sesquiterpenes
Odorants mix to make odor, and people can distinguish 10,000 different odors.
categories
Smells can range through sweet/flowery/fruity, mild/vegetably, mild/animaly, mild/mineraly, strong/vegetably, strong/animaly, putrid/animaly, and sharp/mineraly.
The smell-category sequence correlates with molecule reactivity:
Ether: -C-O-C-
Alcohol: -CH2OH
Ester: -COO-
Aryl: =CHC=
Terpene: =CC2
Ketone: -COC-
Aldehyde: -CHO
Acid: -COOH
Amine: -CH2NH2
Sulfhydryl: -CH2SH
Halogens: Br2
similarities based on chemical group
Similar chemical types make similar smells. Similar chemical origins make similar smells.
Alcohols are similar: biting, fruity, sweet.
Aldehydes are similar: malty, grassy (herby).
Amines are similar: spermous, fishy, nutty, roasted.
Aryls are similar: aromatic, burnt (smoky), camphorous (resinous), tarry (oily).
Carboxylic acids are similar: pungent (acrid, vinegary), putrid (sweaty, rancid), goaty (hircine).
Ethers are similar: fragrant, floral, fruity and sweet.
Ketones are similar: minty, spicy, savory, tobacco, musky (ambrosial), urinous.
Sulfur compounds are similar: asparagusy, cabbage-like, alliaceous (garlicy), foul, rotten.
Terpenes are similar: minty, flowery (fresh), rubbery, woody.
similarities based on similar chemical groups
Alcohols and aryl ketones are similar: biting, fruity, minty, musky.
Alcohols and esters are similar: fruity, sweet.
Aldehydes and alkene ketones are similar: malty, grassy, tobacco.
Aldehydes and ethers are similar: malty, grassy, earthy.
Aldehydes and terpenes are similar: malty, grassy, rubbery, woody.
Amines and steroid ketones are similar: spermous, fishy, nutty, roasted, urinous.
Amines and carboxylic acids are similar: spermous, fishy, nutty, roasted, pungent, putrid, goaty.
Polycyclic aryls and halogens are similar: camphorous, pharmaceutical.
Carboxylic acids and steroid ketones are similar: pungent, putrid, goaty, urinous.
Alkene ketones and terpenes are similar: tobacco, rubbery, woody.
Polycyclic aryl ketones and ethers are similar: minty, camphorous, musky, fragrant, flowery, fruity.
similarities based on organism type
Vegetable smells are similar: alcohols, aldehydes, ethers, aryl and alkene ketones, sulfur compounds, terpenes.
Animal smells are similar: carboxylic acids, amines, polycyclic aryl ketones, steroid ketones.
opposites
Carboxylic acids (sour, putrid, animal) and esters (sweet, fruity, vegetable) are opposites.
Carboxylic acids (sour, putrid, animal) and alcohols (sweet, fruity, vegetable) are opposites.
Amines (animal) and aldehydes (vegetable) are opposites.
Amines (animal) and terpenes (vegetable) are opposites.
Odors have pleasantness, familiarity, and intensity {odor hedonics}, which define how much people like them.
In mammals, chemicals {pheromone}| establish territories and find mates [Pantages and Dulac, 2000]. Sex-hormone-derived pheromones are in skin secretions [Savic et al., 2001] [Savic, 2002] [Sobel et al., 1999]. Baboons secrete female pheromones during sexual receptive period. Perhaps, pheromones synchronize ovulation [Gangestad et al., 2002] [McClintock, 1998] [Schank, 2001] [Stern and McClintock, 1998] [Weller et al., 1999].
Women living in close proximity menstruate at same time {McClintock effect}, perhaps from sweat pheromone.
Animals mark locations with scent {scent marking}. Cats and antelope use urine and face or cheek scent glands. Skunk and badger use anal glands.
Linnaeus said smells can be alliaceous like garlic, ambrosial like musk, aromatic, foul, fragrant, hircine like goat, and nauseating {primary odor}. Primary odors can be putrid, flowery, fruity, burnt, spicy, resinous or camphor, musk, floral, peppermint, ether, and pungent. Primary odors can be floral, minty, ethereal like pear, musky, resinous like camphor, foul or sulfurous, and acrid like vinegar. Primary odors can be acidic, burnt, caprylic like goat, and fragrant. Primary odors can be camphorous, fishy, malty, minty, musky, spermous, sweaty, or urinous.
Almond oil, honey, cinnamon, orange blossom, and henna {aegyptium} can mix.
Sperm-whale-stomach oil {ambergris, smell} can protect stomach lining.
Steroid molecules {androstenone} smell musky to 25% of people and urinous to 25% of people, and have no smell for 50% of people.
Orange-rind oils {bergamot, smell} can mix.
Violets can make drops {cacous}. Casca preciosa is sassafras.
d-carvone {carvone} is caraway, and l-carvone is spearmint.
Far-northern-beaver abdomen-gland oil {castoreum} marks territory.
Ethiopian-cat near-genitalia-gland honey-like compound {civet, smell} is a sex pheromone.
Violets make compounds {ionone} that can inhibit odors.
Rose, crocus, and violet oils {kyphi} can mix.
A genetic disease causes urine to smell like maple syrup {maple syrup urine}.
East-Asian deer-intestine red jelly {musk, smell} has steroids.
Oranges can make attar {neroli}.
Smell processes use molecule shape and electric-field differences to distinguish odorants {smell, physiology}. After seven or eight molecules bind to cilia odorant receptors, olfactory receptors signal once. People need 40 signals to perceive odor. Odorants affect several olfactory-receptor types, which send to smell neurons that excite and inhibit each other to form intensity ratios. Smell neurons work together to distinguish odors.
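The thresholding and ratio coding just described can be sketched in code. This is a minimal illustration, assuming hypothetical receptor-type names and binding counts, not a model of real olfactory receptors.

```python
# Minimal sketch of the thresholding described above: an olfactory receptor
# fires once after roughly 7-8 odorant molecules bind, and an odor is
# perceived only after about 40 such signals arrive. Receptor-type names
# and binding counts below are hypothetical illustrations.

MOLECULES_PER_SIGNAL = 7   # roughly 7-8 bound molecules per receptor signal
SIGNALS_TO_PERCEIVE = 40   # signals needed before odor is perceived

def receptor_signals(bound_molecules: int) -> int:
    """Number of signals one receptor type sends for a given binding count."""
    return bound_molecules // MOLECULES_PER_SIGNAL

def perceives_odor(bound_by_type: dict[str, int]) -> bool:
    """True if total signals across receptor types reach the perception threshold."""
    return sum(receptor_signals(n) for n in bound_by_type.values()) >= SIGNALS_TO_PERCEIVE

def intensity_ratios(bound_by_type: dict[str, int]) -> dict[str, float]:
    """Relative activation per receptor type; the ratio pattern codes odor quality."""
    signals = {t: receptor_signals(n) for t, n in bound_by_type.items()}
    total = sum(signals.values()) or 1
    return {t: s / total for t, s in signals.items()}

# Example with made-up binding counts for three hypothetical receptor types.
binding = {"type_A": 120, "type_B": 60, "type_C": 210}
print(perceives_odor(binding), intensity_ratios(binding))
```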
Odors are painful at high concentrations. Smell can detect very low concentrations. Odor intensity and sense qualities mix.
Smell can detect source location. Smell can detect many sources from one location.
Lower air pressure increases volatility and so smell intensity. Higher humidity increases volatility and so smell intensity. Light typically decreases smell, by breaking down chemicals.
After smelling an odor, smell is less sensitive to later odors {cross-adaptation}, probably because both odors share one or more odorant-receptor types. Different odor sequences result in different sensitivities.
People can be unable to name familiar odors {tip-of-the-nose phenomenon}. Unlike tip-of-the-tongue phenomena, there are no lexical cues.
Sinus problems or head blows can cause inability to smell anything {anosmia}. People can be unable to smell specific odors {specific anosmia}.
People can have heightened smell sensitivity {hyperosmia}.
People can have reduced smell sense {hyposmia}.
Air chemicals and odorant receptors have shapes. Perhaps, chemical shapes must be complementary to receptor shapes to detect odorants {shape-pattern theory}. Odorant-receptor firing pattern determines odor.
Perhaps, molecule geometry correlates with odor type {stereochemical theory}. Smell receptor sites are small concave for camphorous smell, large concave for musky smell, V-shaped for minty smell, trough-shaped for ethereal smell, and concave-and-trough-shaped for floral smell. Receptor sites can have electric charges that attract oppositely charged molecules, with negative charge for pungent smell and positive charge for putrid smell [Amoore, 1964] [Moncrieff, 1949].
Perhaps, odorant molecules have vibration frequencies {vibration theory} (Luca Turin). Molecules with similar vibration frequency have similar smell.
Taste {taste, sense} {gustation} detects chemicals dissolved in water, using molecule electrochemical reactions and shape, acidity, and polarity. Taste molecules are below 200 molecular weight and include ions, hydrogen ions, hydroxide ions, and sugars. Taste is a synthetic sense, with some analysis.
physical properties
Tastable molecules include hydrogen ions, hydroxide ions, salt ions, and sugars, which are water-soluble and have molecular weights less than 200. Water-soluble molecules vary in size, shape, chemical sites, acidity, and ionicity. Water-soluble chemicals vary in concentration. Tastable molecules attach to tongue chemical receptors.
types
Taste types are sweet, salt, sour, and bitter.
Sweet is not acid, salt, or base. Salt is neutral. Sour is acid. Bitter is base.
Sweet is non-polar. Salt, sour, and bitter are polar.
Sour acid and salt are similar. Bitter base and salt are similar. Sweet and salt are similar.
Sour acid and bitter base are opposites. Sour acid and sweet are opposites. Salt and sweet are opposites.
Taste has same physical properties, and taste processes are similar, so taste perceptions are similar, for all undamaged people. Tastes relate in only one consistent and complete way. Tastes are not symmetric, so tastes have unique relations. Tastes cannot substitute. Tastes have specific sense qualities and so can never switch to other tastes. Newborns can detect sweet as pleasant and bitter as aversive.
Perhaps, the first taste was a food or poison sign.
mixing
Bitter and sweet can mix. Bitter and salt can mix. Salt and sour can mix. Tastes do not mix to make new tastes.
properties
Taste habituates quickly. Taste is in real time, with a half-second delay. Temperature affects taste, so sweets taste less sweet when warm than when cold. Taste has early, middle, and late sensations.
Sour acid and salt are similar. Bitter and salt are similar. Sweet and salt are similar.
Sour (acid) and bitter (base) are opposites. Sweet (neutral) and sour (acid) are opposites. Salt and sweet are opposites.
source location
Taste can detect source location. Taste can detect several sources from one location.
Taste has few spatial effects. However, taste can have interference from more than one source.
evolution
Perhaps, salt receptors evolved because animals need sodium and need associated chloride.
Perhaps, sour receptors evolved to detect food or dangerous acidic conditions.
Perhaps, sweet receptors evolved to detect sugar nutrients.
Perhaps, bitter receptors evolved to detect poisons.
development
Newborns do not taste salt, but babies soon can taste it, and they like it.
Newborns can taste sour. Children like sour taste.
Newborns can taste sweet and think it pleasant.
Babies can taste bitter and think it aversive.
relations to other senses
Taste and retronasal-area smell can combine to make flavor. Odors affect taste receptors. Taste requires higher stimulus concentrations than smell. Taste has water as solvent, not air. Taste has few spatial effects. Taste molecules can have polarization. Taste and smell can have interference from more than one source. Both taste and smell are often silent. Taste and smell have early, middle, and late sensations. Taste does not use vibrations, but smell can use vibrations.
Taste is at tongue surface and so has touch. Texture affects taste. Touch can feel solutions on tongue and react to noxious tastes. Touch locates tongue taste receptors.
Taste seems unrelated to hearing and vision.
effects
Sour makes people's lips pucker, sometimes downward.
Bitter makes people's eyes and nose change.
Salt is alerting.
Savory is less alerting.
Sweet is calming.
Taste and retronasal-area smell can combine {flavor, taste}.
Taste anatomy includes tongue, taste buds, chemical receptors, and neurons. Tongue chemical receptors send to thalamus, which sends to cortical regions.
Tongue skin has chemical receptors for water-soluble molecules {taste, anatomy}. Each receptor cell has one receptor type. Taste uses four or five main receptor types, each with variations, giving dozens of receptor combinations. Taste buds have all receptor types. Tongue has no special salt, sweet, or sour regions.
Taste neurons typically receive from more than one taste-receptor type. Taste neurons detect one main taste category: salt-best, sugar-best, acid-best, and bitter-best. Similar taste sensations vary only in intensity, not in quality, because similar receptors go to same taste neuron.
Medulla solitary tract nucleus receives from tongue cranial nerves 7, 9, and 10, determines taste preferences, and sends to thalamus and to parabrachial nucleus, which also receives from GI tract. Taste cortex is in insula, which sends to orbitofrontal cortex.
Tongue chemical receptors {taste receptor} are for sweet, sour, salty, bitter, and L-glutamate. Receptor cells have 50 chemoreceptors, all of the same receptor type, which detect positive ions or polarity.
Tongue chemoreceptors detect L-glutamate and other amino acids. Some receptors {glutamate receptor} {umami receptor} are metabotropic receptors similar to brain glutamate receptors and underlie savory taste (Kikunae Ikeda) [1908]. People with glutamate receptors can detect monosodium glutamate. Other receptors {amino-acid receptor} are altered sweet receptors that bind amino acids. Glutamate and amino-acid receptors couple to G-proteins, which have unknown second messengers.
Tongue chemoreceptors {salt receptor} detect positively charged salt ions, including sodium and potassium ions. Sodium-chloride sodium ions make pure salt taste. Potassium-chloride potassium ions make salt and bitter taste. Positive ions enter ion channels and directly cause depolarization.
Newborns do not taste salt, but babies soon can taste it, and they like it. Perhaps, salt receptors evolved because animals need sodium and need associated chloride.
Glycyrrhizic acid increases sodium-ion retention.
Tongue chemoreceptors {sour receptor} detect acids. Acid hydrogen ions enter ion channels, block potassium channels, or bind to and open other positive-ion channels. Newborns can taste sour. Children like sour taste. Perhaps, sour receptors evolved to detect food or dangerous acidic conditions.
Tongue chemoreceptors {sweet receptor} detect non-ionic organic compounds, mostly sugars. Sweet-receptors couple to G-proteins, and second messengers close potassium channels. Newborns can taste sweet and like it. Perhaps, sweet receptors evolved to detect sugar nutrients.
Asclepiad, similar to milkweed, inhibits tasting sweet. African miraculous berry makes everything taste sweet. Artificial sweeteners mimic sugar molecules.
Proteins {T1R proteins} can make cell-membrane taste chemoreceptors. Sweet receptor has one T1R2 and one T1R3 protein. Umami savory receptor has one T1R1 and one T1R3 protein. Bitter receptor has 25 possible proteins.
Thirty different chemoreceptors {bitter receptor} detect non-ionic organic compounds, such as alkaloids, including quinine and unripe-potato alkaloid {solanine}. Bitter receptors couple to G-proteins. Second messengers release calcium ions from endoplasmic reticulum. All bitter-receptor types synapse on same taste-neuron type, so people cannot discriminate among bitters. Babies can taste bitter and dislike it. Perhaps, bitter receptors evolved to detect poisons.
6-n-propylthiouracil (PROP) tastes bitter. Supertasters have its chemoreceptors {6-n-propylthiouracil taste receptor}, have many fungiform papillae, and have high-intensity tastes. One-third of people cannot taste PROP, lack those receptors, have fewer fungiform papillae, and have low-intensity tastes.
PTC taste
Phenylthiocarbamide tastes bitter and is similar to propylthiouracil. One-third of people cannot taste it.
Tongue and soft-palate hemispherical cell clusters {taste bud}| hold cells {taste receptor cell} that have tip microvilli. Adult tongue has 10,000 taste buds, but babies have more. Taste buds last one week, fade, and then new ones grow.
Taste-bud cells have tips with projections {microvillus} {microvilli} that extend into taste pore.
Tongue has four bump types {papilla}| {papillae}.
Papillae {circumvallate papilla} can be largest, be before tonsils, be large circular mounds with depressed circumference, and have three to five taste buds (on tongue rear sides).
Papillae {filiform papilla} can be smallest, be most, be down top middle, and have no taste buds.
Papillae {foliate papilla} can be medium-size, be at back sides, be tissue folds at tongue rear and outsides, and have taste buds.
Papillae {fungiform papilla} can be next smallest, be on tongue broad part, be one-millimeter-size mushroom shapes at tongue tip and edges, and have six taste buds each.
Taste distinguishes water-soluble salt, sugar, acid, and base chemicals {taste, physiology}. Taste receptors are for only salt, sugar, acid, or base. For example, salt taste receptors measure salt concentration as salt-to-receptor binding per second. Different taste receptors converge on taste neurons. Similar taste sensations vary only in intensity, not in quality, because similar receptors go to same taste neuron.
Salty chemicals are small and ionic and have neutral acidity. Sodium-chloride sodium ions make pure salt taste. Potassium-chloride potassium ions make salt and bitter taste.
Sour chemicals are small, ionic, and acidic. Hydrogen chloride makes pure sour taste.
Sweet chemicals are large and polar and have neutral acidity. Glucose makes pure sweet taste. Fructose and galactose are sweet.
Bitter chemicals are small or large, ionic, and basic. Hydroxide ions make pure bitter taste.
Savory chemicals are large, ionic-polar, and slightly acidic. L-glutamic acid sodium salt (monosodium glutamate) tastes distinctively salty and sweet.
Taste neurons inhibit and excite each other to compare sugar, acid, base, salt, and L-glutamate receptor inputs to find differences and indicate taste types [Kadohisa et al., 2005] [Pritchard and Norgren, 2004] [Rolls and Scott, 2003].
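The labeled-line comparison described above can be sketched as a weighted sum followed by a winner-take-all readout. Neuron weights and receptor activations below are hypothetical illustrations, not measured values.

```python
# A minimal sketch of the comparison described above: taste neurons pool
# receptor inputs with different weights, and the most strongly driven
# ("salt-best", "sugar-best", ...) neuron indicates the taste category.

# Each "X-best" neuron weights its preferred receptor most heavily (made-up weights).
NEURON_WEIGHTS = {
    "salt-best":   {"salt": 1.0, "sugar": 0.1, "acid": 0.2, "bitter": 0.1, "glutamate": 0.3},
    "sugar-best":  {"salt": 0.1, "sugar": 1.0, "acid": 0.1, "bitter": 0.1, "glutamate": 0.2},
    "acid-best":   {"salt": 0.2, "sugar": 0.1, "acid": 1.0, "bitter": 0.2, "glutamate": 0.1},
    "bitter-best": {"salt": 0.1, "sugar": 0.1, "acid": 0.2, "bitter": 1.0, "glutamate": 0.1},
}

def taste_category(receptor_activity: dict[str, float]) -> str:
    """Return the label of the most strongly driven taste neuron."""
    responses = {
        neuron: sum(w * receptor_activity.get(r, 0.0) for r, w in weights.items())
        for neuron, weights in NEURON_WEIGHTS.items()
    }
    return max(responses, key=responses.get)

# Example: a mostly salty stimulus with a little bitterness.
print(taste_category({"salt": 0.8, "bitter": 0.2}))  # -> "salt-best"
```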
Tastes are relative. For example, salt only tastes salty relative to other tastes [Brillat-Savarin, 1825]. Saliva salt level is highest in morning, drops until afternoon, and then rises again to high morning value, so salt amount needed for salt taste varies during day. Saliva substance concentrations can vary tenfold. Tongue taste-receptor pattern affects taste.
Taste is painful at high concentrations. Taste can detect low concentrations.
Taste can detect source location. Taste can detect several sources from one location.
acidity
Molecule atoms, bonds, and electric charge determine acidity, which can be acidic, neutral, or basic.
Sour is acidic. Salty is neutral acidity. Savory is neutral. Sweet is neutral. Bitter is basic.
Salty, savory, and sweet have similar neutrality.
Sour and bitter have opposite acidity.
ionicity
Molecule atoms and bonds and molecule-electron properties determine ionicity, which can be ionic or polar.
Sweet and some bitters are polar. Salty, savory, sour, and some bitters are ionic.
Sour and sweet, salty and sweet, and savory and sweet have opposite ionicity.
size
Sour and some bitters have similar small size.
Salts have medium size.
Sweet, savory, and some bitters have similar large size.
polarity or ionicity; acidity, neutrality, or basicity; and size
Taste molecules have a combination of polarity or ionicity; acidity, neutrality, or basicity; and size.
Taste molecules can be:
acidic: hydrogen ion (sour)
neutral: monosodium glutamate (savory)
neutral: sodium chloride and potassium chloride (salt)
neutral: glucose and fructose (sweet)
slightly basic: phenylthiourea, phenylthiocarbamide, and 6-n-propylthiouracil (bitter)
basic: hydroxide ion (bitter)
Taste molecules can be:
polar: glucose and fructose (sweet)
polar: phenylthiourea, phenylthiocarbamide, and 6-n-propylthiouracil (bitter)
ionic: hydroxide ion (bitter)
ionic: hydrogen ion (sour)
ionic: sodium chloride and potassium chloride (salt)
ionic: monosodium glutamate (savory)
(They cannot be non-polar, because non-polar does not dissolve in water.)
Taste molecules can have molecular weight 1 to 200:
1: hydrogen ion (sour)
17: hydroxide ion (bitter)
58: sodium chloride (salt)
75: potassium chloride (salt)
152: phenylthiourea and phenylthiocarbamide (bitter)
169: monosodium glutamate (savory)
170: 6-n-propylthiouracil (bitter)
180: glucose and fructose (sweet)
Taste molecules are:
Sour: acidic, ionic, and small.
Salt: neutral, ionic, and medium.
Savory: neutral, ionic, and large.
Sweet: neutral, polar, and large.
Bitter: slightly basic, polar, and large.
Bitter: basic, ionic, and small.
Acidic and polar do not exist, because acids are ionic. Basic and polar do not exist, because bases are ionic.
Small and polar do not exist, because small molecules are ionic. Medium and polar do not exist, because medium molecules are ionic.
Small and neutral do not exist, because small molecules have hydrogen ions or hydroxide ions. Large and acidic do not exist, because acidic molecules have small hydrogen ions. Large and basic do not exist, because basic molecules have small hydroxide ions.
Taste molecules fall into six categories (see the sketch after this list):
Large polar: neutral (sweet) or slightly basic (bitter)
Large ionic: neutral (savory)
Medium ionic: neutral (salt)
Small ionic: acidic (sour) or basic (bitter).
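A minimal sketch of the six-way grouping above, assuming illustrative molecular-weight cutoffs for small, medium, and large; the cutoffs and category table are illustrations, not measured boundaries.

```python
# Classify a taste molecule by (size, ionicity, acidity), following the
# six categories listed above. Cutoffs are illustrative assumptions.

def size_class(molecular_weight: float) -> str:
    if molecular_weight < 40:
        return "small"       # e.g., hydrogen ion, hydroxide ion
    if molecular_weight < 100:
        return "medium"      # e.g., sodium chloride, potassium chloride
    return "large"           # e.g., glucose, monosodium glutamate

def taste_group(molecular_weight: float, ionicity: str, acidity: str) -> str:
    """Map (size, ionicity, acidity) onto the six taste categories listed above."""
    size = size_class(molecular_weight)
    table = {
        ("small", "ionic", "acidic"):         "sour",
        ("small", "ionic", "basic"):          "bitter",
        ("medium", "ionic", "neutral"):       "salt",
        ("large", "ionic", "neutral"):        "savory",
        ("large", "polar", "neutral"):        "sweet",
        ("large", "polar", "slightly basic"): "bitter",
    }
    return table.get((size, ionicity, acidity), "unclassified")

print(taste_group(1, "ionic", "acidic"))     # hydrogen ion -> sour
print(taste_group(58, "ionic", "neutral"))   # sodium chloride -> salt
print(taste_group(180, "polar", "neutral"))  # glucose -> sweet
```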
If new flavor associates with gastrointestinal illness, people are averse to the flavor {learned taste aversion}.
Taste receptors adjust for current saliva substance concentrations. Taste stimulus at same concentration as saliva concentration is tasteless {taste zero}.
Henning said tastes are bitter, salty, sour, and sweet {primary taste} {basic taste}. Some people can distinguish monosodium glutamate savory taste from salt taste.
Peppers have molecules {capsaicin} that cause pain and sweating.
Ethiopian spice mixtures {chow, spice} have chili and other spices and inhibit bacteria.
Roots {ginger, taste} prevent seasickness.
Brazilian daisy {spilanthes} {jambu} numbs and tingles mouth.
Some people can distinguish umami savory taste from salt taste. Glutamic-amino-acid sodium salt {monosodium glutamate}| (MSG) tastes distinctively salty and sweet. Autolyzed yeast extract, glutavene, calcium caseinate, sodium caseinate, Marmite, soy sauce, anchovy, and fish sauce have high MSG.
Brain makes amphetamine-like compounds {phenylethylamine} (PEA).
For one-half to two-thirds of people, with dominant allele, urea compounds {phenylthiourea} (PTU) can taste bitter. PTU has no taste to other one-half to one-third of people, who cannot recognize NC=S chemical functional group [Kalmus and Hubbard, 1960].
Puffer-fish tissues can have poison {tetrodotoxin}, to which predators are averse.
Skin has cold and warm receptors {temperature sense} {temperature receptor}. Coolness and warmth are relative and depend on body-tissue relative average random molecule speed. Very cold objects can feel hot at first.
Nociceptive and thermal receptor systems interact. Tactile and thermal receptor systems interact. Warmth and coolness have no pressure.
Nose and tongue thermoreceptors adjust food-digestion enzymes.
Skin mechanoreceptors {thermoreceptor} can detect temperature. Muscles, tendons, joints, alimentary canal, and bladder have thermoreceptors.
Skin has mechanoreceptors {cold fiber} that detect decreased skin temperature. Skin is normally 30 C to 36 C. If objects are colder than 30 C, cold fibers provide information about material as heat flows from skin to object. Cold receptors are mostly on face and genitals. Cold fibers outnumber warmth fibers 30 to 1.
Skin has receptors {warmth fiber} that detect increased skin temperature. Skin is normally 30 C to 36 C. If skin is above normal temperature, warmth fibers provide information about material as heat flows from object to skin. Warmth fibers also provide information about body state, such as fever or warm-weather overheating. Heat receptors are deep in skin, especially in tongue. Warmth fibers are 30 times fewer than cold fibers.
Throat chemoreceptors {thirst, receptor} {dryness, receptor} measure dryness.
Mechanoreceptors can detect pressure at inside or outside body surfaces {touch, sense}. Compression, tension, and torsion stresses cause body-surface strains. Touch analyzes material properties, such as temperature, texture, surface curvature, density, hardness, and elasticity. Touch is a synthetic sense, with some analysis. Protozoa have touch and stretch receptors.
physical properties
Touch events include tissue stresses, motions, and vibrations, which displace surfaces and regions. Stresses vary in area, pressure, and vibration states. Pressures include compression, tension, and torsion. Stresses and stress changes stimulate skin mechanical receptors.
types
People can feel "butterflies", tickle, tingle, gentle touch, regular pressure, and sharp pressure. People can feel motion and vibrations up to 20 Hz. People can feel object temperature, texture, surface curvature, density, hardness, and elasticity.
Touches relate in only one consistent and complete way. Touches are not symmetric, so touches have unique relations. Touches cannot substitute. Touches have specific sense qualities and so can never switch to other touches. Touches do not have opposites. Touch has same physical properties, and touch processes are similar, so touch perceptions are similar, for all undamaged people.
Touch is pleasurable for babies and parents and for sexual relations. Perhaps, the first touch was for food or mating.
properties
Touch habituates quickly. Touch is in real time, with a half-second delay. Touch can detect low pressure or speed. Touch is painful at high pressure or speed. Touches do not mix to make new touches. Age reduces vibration sensitivity.
source location
Touch can locate body and objects {where system}.
From one location, touch detects only one source.
Touch can detect multiple sensations simultaneously.
Touch has no fixed coordinate origin (egocenter), so coordinates change with task.
evolution
Humans have higher touch sensitivity than other mammals. Lower animals have even less touch sensitivity. Perhaps, the first touch was for food or mating.
Protozoa have touch and stretch receptors.
development
Newborns can turn in touched-cheek direction.
effects
Pressure and touch receptor activity increases muscle flexor activity and decreases muscle extensor activity.
Emotions generate brain-gut hormones that cause abdominal feelings.
relations to other senses
Hearing, temperature, and touch involve mechanical energy.
Touch can feel vibrations below 20 Hz. Sound vibrates eardrum and other body surfaces but is not felt as touch. Touch uses higher energy level than hearing. Hearing uses waves that travel far, but touch uses vibrations that travel only short distances. Hearing and touch have no input from most spatial locations. Hearing has sound attack and decay, and touch has temporal properties.
Touch can feel air near smell receptors and react to noxious smells. Touch locates smell receptors in upper nose.
Touch can feel solutions on tongue and react to noxious tastes. Touch locates tongue taste receptors.
Touch coordinates with vision.
Nociceptive and thermal receptor systems interact. Tactile and thermal receptor systems interact.
People can have extreme touch sensitivity and low pain threshold {hyperaesthesia, touch}|.
Sense-nerve myelinated-fiber pathways {epicritic pathway} {lemniscal system} can begin at Meissner's corpuscles, Pacinian corpuscles, hair root structures, muscle spindles, and Golgi tendon organs, go through lateral cervical nucleus, continue to gracile and cuneate nuclei, and end at cerebellum and thalamus.
Skin mechanical receptors send to spinal cord, brainstem nuclei, thalamus, and parietal lobe.
Skin mechanoreceptor fibers {A-beta fiber} can be large.
Meissner corpuscles are fast-adapting mechanoreceptors and have small receptive fields {fast-adapting fiber I} (FA I).
Pacinian corpuscles are fast-adapting mechanoreceptors and have large receptive fields {fast-adapting fiber II} (FA II).
Merkel receptors are slow-adapting mechanoreceptors and have small receptive fields {slow-adapting fiber I} (SA I).
Ruffini receptors are slow-adapting mechanoreceptors and have large receptive fields {slow-adapting fiber II} (SA II).
Skin, muscles, tendons, joints, alimentary canal, and bladder have mechanical receptors that detect tissue strains, pressures/stresses (compression, tension, and torsion), motions, and vibrations {touch receptor}. Eight basic mechanoreceptor types each have many variations, making thousands of combinations. Skin has encapsulated tactile receptors, free-nerve-ending receptors, hair-follicle receptors, Meissner's corpuscles, Merkel cells, Pacinian corpuscles, palisade cells, and Ruffini endorgans.
Skin mechanoreceptors (thermoreceptor) can detect surface temperature. Muscles, tendons, joints, alimentary canal, and bladder have thermoreceptors. Skin mechanoreceptors (cold fiber) can detect decreased skin temperature. Cold receptors are mostly on face and genitals. Skin has receptors (warmth fiber) that detect increased skin temperature. Heat receptors are deep in skin, especially in tongue. Warm fibers are 30 times fewer than cool fibers.
Skin mechanoreceptors {free nerve ending} respond to all skin-stimulation types, because they are not specialized receptors.
Skin mechanoreceptors {hair cell, skin}, with tip cilia {stereocilia} {stereocilium}, detect movement. Stereocilia movement begins neurotransmitter release. Hair cells send to brainstem and receive from brain.
Woodpeckers have tongue vibration detectors {Herbst corpuscle}, which are like Pacinian corpuscles.
Skin encapsulated mechanoreceptors {Krause end bulb} {Krause's end bulb} are in mammals other than primates and correspond to primate Meissner's corpuscles. Krause end bulbs are mostly in genitals, tongue, and lips.
Teleosts have side canals and openings {lateral line system}|, running from head to tail, which perceive water pressure and flow changes. Visual signals influence lateral-line perceptions.
Primate glabrous-skin encapsulated mechanoreceptors {Meissner's corpuscle} {Meissner corpuscle} are fast-adapting, have small receptive fields of 100 to 300 micrometers diameter, and lie in rows just below fingertip surface-ridge dermal papillae. Meissner's corpuscles are only in primates and correspond to Krause end bulbs in other mammals.
Meissner's corpuscles respond to vibration, to detect changing stimuli. Maximum sensitivity is at 20 to 40 Hz. Range is from 1 Hz to 400 Hz. Meissner's corpuscles send to myelinated dorsal-root neuron fibers.
Numerous encapsulated mechanoreceptors {Merkel cell} {Merkel-cell neurite complex} form domes {Iggo-Pinkus dome} visible at skin surfaces. Merkel cells are slow-adapting, have small receptive fields of 100 to 300 micrometers diameter, and are in hairy-skin epidermis-bottom small scattered clusters and in glabrous-skin epidermis rete pegs.
Merkel cells detect continuous pressures and deformations as small as one micrometer. Merkel cells detect 0.4-Hz to 3-Hz low-frequency vibrations. Merkel cells send to myelinated dorsal-root neuron fibers.
Enzymes {ODC enzyme} begin touch chemical changes.
Encapsulated mechanoreceptors {pacinian corpuscle}, 1 to 2 mm diameter, detect deep pressure. Pacinian corpuscles are fast-adapting, have large receptive fields, and are in body, joint, genital, and mammary-gland hairy-skin and glabrous-skin deep layers.
Pacinian corpuscles respond to vibration with maximum sensitivity at 200 to 300 Hz. Range is 20 to 1500 Hz. Pacinian corpuscles can detect movements smaller than one micrometer. Pacinian corpuscles have lamellae, which act as high-pass filters to prevent steadily maintained pressure from making signals. Pacinian corpuscles send to myelinated dorsal-root neuron fibers.
Hair follicles have pressure mechanoreceptors {palisade cell, touch} {hair follicle nerve}, around hair-shaft base, that have three myelinated-fiber types. Palisade cells respond to different deformations. Palisade cells respond to vibration frequencies from 1 to 1500 Hz.
Encapsulated skin mechanoreceptors {Ruffini's endorgan} {Ruffini endorgan} {Ruffini ending} are spindle shaped and 1 mm to 2 mm long, similar to Golgi tendon organs. Ruffini's endorgans are slow-adapting, are in joints and glabrous-skin dermis, and have large receptive fields (SA II), several centimeters diameter in arms and trunk. Ruffini endorgans have densely-branched center nerve endings.
Ruffini endorgans respond to skin slip, stretch, and deformation, with sensitivity less than that of SA I receptors. Ruffini endorgans respond to 100 Hz to 500 Hz. Ruffini endorgans send to myelinated dorsal-root neuron fibers.
Skin has hair-follicle receptors, Meissner's corpuscles, Merkel cells, Pacinian corpuscles, and Ruffini endorgans {skin receptor}.
Skin encapsulated mechanoreceptors {tactile receptor} are for vibration, steady pressure, and light touch. Receptors measure amplitude, constancies, changes, and frequencies.
Mechanoreceptors detect pressures, strains, and movements {touch, physiology}. Touch stimuli affect many touch-receptor types, which excite and inhibit each other to form intensity ratios. Receptors do not make equal contributions but have weights. Receptor sensitivity varies over touch spectrum and touch region [Katz, 1925] [McComas and Cupido, 1999] [Teuber et al., 1960] [Teuber, 1960].
Touch is more about weight, heat transfer, texture, and hardness {material property, touch} than about shape {geometric property, touch}. Weight discrimination is best if lifted-weight density is one gram per cubic centimeter. Touch receptors can detect mechanical vibrations up to 20 to 30 Hz.
Touch can detect body location. From one location, touch detects only one source. Touch can detect multiple sensations simultaneously. Touch has no fixed coordinate origin (egocenter), so coordinates change with task.
Pressure, pain, and touch receptor activity increases muscle flexor activity and decreases muscle extensor activity.
Mechanoreceptors detect pressures/stresses (compression, tension, torsion), strains, motions, and vibrations [Bolanowski et al., 1998] [Hollins, 2002] [Johnson, 2002]:
Free nerve ending: smooth or rough surface texture
Hair cell: motion
Meissner corpuscle: vibration
Merkel cell: light compression and vibration
Pacinian corpuscle: deep compression and vibration
Palisade cell: light compression
Ruffini endorgan: slip, stretch, and vibration
pressure
Skin encapsulated tactile receptors are for steady pressure and light touch.
Skin free-nerve-ending mechanoreceptors respond to all skin-stimulation types.
Merkel cells detect continuous pressures and deformations as small as one micrometer. Merkel cells detect 0.4-Hz to 3-Hz low-frequency vibrations. Merkel cells are slow-adapting.
Pacinian corpuscles detect deep pressure. Pacinian corpuscles are fast-adapting.
Palisade cells respond to different deformations.
Ruffini endorgans respond to skin slip, stretch, and deformation, with sensitivity less than that of SA I receptors. Ruffini's endorgans are slow-adapting.
Nerve signals differ for pain, itch, heat, and pressure [Bialek et al., 1991]. Pain is irregular and high intensity and has rapid increase. Itch is regular and fast. Heat rises higher. Pressure has high intensity that fades away.
People can distinguish 10 stress levels. Maximum touch occurs when high pressure causes inelastic strain, stretching surface tissues past the point from which they can completely recover, which typically causes pain.
vibration
Skin encapsulated tactile receptors are for vibration.
Skin free-nerve-ending mechanoreceptors respond to all skin-stimulation types.
Meissner's corpuscles respond to vibration, to detect changing stimuli. Maximum sensitivity is at 20 to 40 Hz. Range is from 1 Hz to 400 Hz. Meissner corpuscles are fast-adapting.
Pacinian corpuscles respond to vibration with maximum sensitivity at 200 to 300 Hz. Range is 20 to 1500 Hz. Pacinian corpuscles can detect movements smaller than one micrometer. Pacinian-corpuscle lamellae act as high-pass filters to prevent steadily maintained pressure from making signals. Pacinian corpuscles are fast-adapting.
Palisade cells respond to vibration frequencies from 1 to 1500 Hz.
Ruffini endorgans respond to 100 Hz to 500 Hz. Ruffini's endorgans are slow-adapting.
People can distinguish 10 vibration levels. Age reduces vibration sensitivity.
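The vibration ranges quoted in this section can be collected into a small lookup that reports which receptor types could respond at a given frequency. The ranges come from the text above; the table layout is only an illustration.

```python
# Which mechanoreceptor types respond at a given vibration frequency,
# using the (low, high) ranges stated in this section.

VIBRATION_RANGE_HZ = {
    "Merkel cell":        (0.4, 3),
    "Meissner corpuscle": (1, 400),
    "Palisade cell":      (1, 1500),
    "Ruffini endorgan":   (100, 500),
    "Pacinian corpuscle": (20, 1500),
}

def responding_receptors(frequency_hz: float) -> list[str]:
    """Receptor types whose stated vibration range includes the given frequency."""
    return [name for name, (low, high) in VIBRATION_RANGE_HZ.items()
            if low <= frequency_hz <= high]

print(responding_receptors(30))    # Meissner, Palisade, Pacinian
print(responding_receptors(250))   # Meissner, Palisade, Ruffini, Pacinian
```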
movement
Skin hair-cell mechanoreceptors detect movement.
Skin free-nerve-ending mechanoreceptors respond to all skin-stimulation types.
The touch system can detect whether objects are stationary. Touch can tell whether surface is sliding under stationary skin, or skin is sliding over stationary surface.
Most objects connect to the ground and are stationary. Their connection to the ground makes them have high inertia and no acceleration when pushed or pulled.
Objects that slide past stationary skin have inertia similar to or less than the body. (If large object slides by skin, the collision affects the whole body, not just the skin.) They have measurable deceleration when pushed or pulled.
The touch system measures accelerations and decelerations in the skin. Large skin decelerations result from skin sliding over stationary objects. Small skin decelerations result from objects sliding over stationary skin.
People can distinguish 10 motion levels.
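The deceleration-based inference above can be sketched as a simple threshold test; the threshold value is a hypothetical illustration.

```python
# Decide, from skin deceleration magnitude, whether the skin is sliding over
# a grounded (stationary) surface or a loose object is sliding over stationary
# skin, per the reasoning above. The threshold is an illustrative assumption.

DECELERATION_THRESHOLD = 1.0   # m/s^2, illustrative boundary

def sliding_interpretation(skin_deceleration: float) -> str:
    """Interpret a skin deceleration magnitude."""
    if skin_deceleration >= DECELERATION_THRESHOLD:
        # Grounded objects have high inertia, so moving skin decelerates strongly.
        return "skin sliding over a stationary surface"
    # Loose objects have inertia similar to or less than the body,
    # so stationary skin decelerates little as they slide past.
    return "object sliding over stationary skin"

print(sliding_interpretation(3.0))   # skin sliding over a stationary surface
print(sliding_interpretation(0.2))   # object sliding over stationary skin
```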
space
Skin touches objects, and touch receptors receive information about objects adjacent to body. As body moves around in space, mental space expands by adding adjacency information. Sensations impinge on body surface in repeated patterns at touch receptors. From receptor activity patterns, nervous system builds a three-dimensional sensory surface.
Foot motions stop at ground. Touch and kinesthetic receptors define a horizontal plane in space.
People can distinguish inside-body stimuli, as self. Tightening muscles actively compresses, to affect proprioception receptors that define body points. When people move, other objects do not move, so correlated body movements belong to self.
People can distinguish outside-body stimuli, as non-self. During movements or under pressure, body surfaces passively extend, to affect touch receptors that define external-space points. When people move, correlated non-movements belong to non-self.
Because distance equals rate times time, motion provides information about distances. Nervous system correlates body motions with touch and kinesthetic receptors to extract reference points and three-dimensional space. Repeated body movements define perception metrics, and ratios among movements build standard length, angle, time, and mass units that model physical-space lengths, angles, times, and masses. As body, head, and eyes move, they trace geometric structures and motions.
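A small worked sketch of the rate-times-time idea: integrating body velocity over time tracks position, and positions at which touch contacts occur become spatial reference points. Velocities, time steps, and contact events below are made up for illustration.

```python
# Build spatial reference points from movement (distance = rate x time)
# plus touch-contact events. All values are illustrative.

def build_reference_points(velocities, contacts, dt=0.1):
    """velocities: per-step (vx, vy, vz) in m/s; contacts: per-step bool (touch fired).
    Returns list of 3-D positions where touches occurred."""
    position = [0.0, 0.0, 0.0]
    reference_points = []
    for (vx, vy, vz), touched in zip(velocities, contacts):
        position = [position[0] + vx * dt,   # distance = rate x time, per axis
                    position[1] + vy * dt,
                    position[2] + vz * dt]
        if touched:
            reference_points.append(tuple(position))
    return reference_points

# Move forward at 0.5 m/s for 8 steps (0.8 s) and touch something at the end:
velocities = [(0.5, 0.0, 0.0)] * 8
contacts = [False] * 7 + [True]
print(build_reference_points(velocities, contacts))  # approximately [(0.4, 0.0, 0.0)]
```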
material properties
Touch can identify {what system}.
Holding in hand determines weight.
Touching with no moving determines temperature. Material properties determine heat flow, which determines temperature, which ranges from cold to warm to pain. Temperature perceptual processes compare thermoreceptor inputs. People can distinguish 10 temperature levels.
Applying pressure determines hardness.
Sliding touch back and forth determines texture.
Wrapping around determines shape and volume. Following contours determines shape.
Touch is more about weight, heat transfer, texture, and hardness than about shape. Weight discrimination is best if lifted-weight density is one gram per cubic centimeter.
qualities
Emotions generate brain-gut hormones that cause abdominal feelings. Maximum touch occurs when high pressure causes inelastic strain, stretching surface tissues past the point from which they can completely recover, which typically causes pain.
neuron
Nerve signals differ for pain, itch, heat, and pressure [Bialek et al., 1991]. Pain is irregular and high intensity and has rapid increase. Itch is regular and fast. Heat rises higher. Pressure has high intensity that fades away.
EEG
In NREM sleep, anesthesia, and waking, short touch causes P1 cortical response 25 milliseconds later. In waking, short touch causes N1 cortical response 100 milliseconds later, lasting hundreds of milliseconds.
temperature
Coolness and warmth are relative and depend on body-tissue relative average random molecule speed. Very cold objects can feel hot at first. Skin is normally 30 C to 36 C. If objects are colder than 30 C, cold fibers provide information about material as heat flows from skin to object. If skin is above normal temperature, warmth fibers provide information about material as heat flows from object to skin. Warmth fibers also provide information about body state, such as fever or warm-weather overheating.
When touching objects, people use hand-movement patterns {exploratory procedure} to learn about features. Applying pressure determines hardness. Wrapping around determines shape and volume. Following contours determines shape. Touching with no moving determines temperature. Sliding touch back and forth determines texture. Holding in hand determines weight.
Skin, muscles, tendons, and joints have mechanoreceptors that work with muscle movements to explore environment. Touching by active exploration with fingers {haptic touch} {haptic perception} uses one information channel. Passive touch uses parallel channels. Touch can tell whether surface is sliding under stationary skin, or skin is sliding over stationary surface. See Figure 1.
For people to perceive two touches as separate, the touches must be separated by a minimum distance {two-point threshold}, which differs at different skin areas.
Touch can identify {what system, touch}. Touch is more about weight, heat transfer, texture, and hardness than about shape.
Touch can locate {where system, touch}.
Bladder mechanoreceptors {urination, receptor} {bladder, receptor} measure distension {distension receptor, bladder}.
Semicircular canals, utricle, and saccule {vestibular system}| work together. Vestibular system detects rotary and linear accelerations and body positions. Vestibular system maintains balance. All vertebrates have semicircular canals to detect accelerations.
Gravity makes constant force, and vestibular systems are similar, so balance feelings are similar, for all undamaged people.
Body-equilibrium neurons continuously stimulate motor nerves. If body-equilibrium nerves have damage, body becomes weak [Cole, 1995] [Lee and Lishman, 1975].
In inner ear, three mutually perpendicular semicircular tubes {semicircular canal}| {labyrinth, inner ear} detect head rotation.
In inner ear, small calcium-carbonate beads {otolith}| press on utricle and saccule hair-cell hairs.
Inner-ear parts {saccule}| can have small calcium-carbonate stones pressing on hair-cell hairs to detect body positions and rotary and linear accelerations.
Inner-ear parts {utricle}| can have small calcium-carbonate stones pressing on hair-cell hairs to detect head positions, centrifugal forces, and linear accelerations.
Hair-cell damage can produce dizziness {Ménière's disease} {Ménière disease}.
Rapid involuntary eyeball oscillation {nystagmus}| can accompany dizziness.
Perception, imagination, dreaming, and memory-recall process visual information to represent color, distance, and location {vision, sense}. Eyes detect visible light by absorbing light energy to depolarize receptor-cell membrane. Vision analyzes light intensities and frequencies [Wallach, 1963]. Vision can detect color, brightness, contrast, texture, alignment, grouping, overlap, transparency, shadow, reflection, refraction, diffraction, focus, noise, blurriness, smoothness, and haze. Lateral inhibition and spreading excitation help find color categories and space surfaces.
properties: habituation
Vision habituates slowly.
properties: location
Vision can detect location. Vision detects only one source from one location. Vision receives from many locations simultaneously. Vision perceives locations that correspond to physical locations, with same lengths and angles.
properties: synthetic sense
Vision is a synthetic sense. From each space direction/location, vision mixes colors and reduces frequency-intensity spectrum to one color and brightness.
properties: phase
Vision does not use electromagnetic-wave phase differences.
properties: time
Vision is in real time, with a half-second delay.
factors: age
Age gradually yellows eye lenses, and vision becomes more yellow.
factors: material
Air is transparent to visible light and other electromagnetic waves. Water is opaque, except to visible light and electric waves. Skin is translucent to visible light.
nature: language
People see same basic colors, whether language has rudimentary or sophisticated color vocabulary. However, people can learn color information from environment and experiences. Fundamental sense qualities are innate and learned.
nature: perspective
Vision always has viewpoint, which always changes.
relations to other senses
Vision seems unrelated to hearing. Hearing has higher energy level than vision. Hearing has longitudinal mechanical waves, and vision has transverse electric waves. Hearing has ten-octave frequency range, and vision has one-octave frequency range. Hearing uses wave phase differences, but vision does not. Hearing is silent from most spatial locations, but vision displays information from all scene locations. Hearing has sound attack and decay, but vision is so fast that it has no temporal properties. Integrating vision and hearing makes three-dimensional space. Hearing can have interference from more than one source, but vision can have interference from only one source. Hearing hears multiple frequencies, but vision reduces to one quality. Vision mixes sources and frequencies into one sensation, but hearing can detect more than one source and frequency from one location.
Touch provides information about eyes. Vision coordinates with touch. Vision is at eye body surface, but brain feels no touch there.
Vision coordinates with kinesthesia.
Vision seems unrelated to smell and taste.
graphics
Images use vector graphics, such as splines with generalized ellipses or ellipsoids. Splines represent lines and can represent region boundary lines. Spline sets can represent surfaces using parallel lines or line grids, because they divide surfaces into polygons. Closed surfaces can be polygon sets. For simplicity, polygons can be triangles. Perhaps, brain uses ray tracing, but not two-dimensional projection.
Vector graphics represents images using mathematical formulas for volumes, surfaces, and curves (including boundaries) that have parameters, coordinates, orientations, colors, opacities, shading, and surface textures. For example, circle information includes radius, center point, line style, line color, fill style, and fill color. Vector graphics includes translation, rotation, reflection, inversion, scaling, stretching, and skewing. Vector graphics uses logical and set operations and so can extrapolate and interpolate, including filling in.
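The circle example above can be sketched as a parameterized primitive plus transforms. The field and function names are illustrative, not a real graphics API.

```python
# A circle primitive stored as parameters (radius, center, line and fill
# styles), plus simple translation and scaling transforms.

from dataclasses import dataclass, replace

@dataclass
class Circle:
    radius: float
    center: tuple[float, float]
    line_style: str = "solid"
    line_color: str = "black"
    fill_style: str = "none"
    fill_color: str = "none"

def translate(c: Circle, dx: float, dy: float) -> Circle:
    """Move the circle without touching its other parameters."""
    return replace(c, center=(c.center[0] + dx, c.center[1] + dy))

def scale(c: Circle, factor: float) -> Circle:
    """Uniformly scale the circle about the origin."""
    return replace(c, radius=c.radius * factor,
                   center=(c.center[0] * factor, c.center[1] * factor))

c = Circle(radius=2.0, center=(1.0, 1.0), fill_style="solid", fill_color="red")
print(scale(translate(c, 3.0, 0.0), 2.0))
```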
movement
Vision improves motor control by locating and recognizing objects.
evolution
More than 500 million years ago, animal skin touch-receptor cells evolved photoreceptor protein for dim light, making light-sensitive rod cells. More than 500 million years ago, gene duplication evolved photoreceptor proteins for bright light, and cone cells evolved.
Multiplying light-sensitive cells built a rod-cell region. Rod-cell region sank into skin to make a dimple, so light can enter only from straight-ahead. Dimple became a narrow hole and, like pinholes, allowed image focusing on light-sensitive rod-cell region. Transparent skin covered narrow hole. Transparent-skin thickening created a lens, allowing better light gathering. Muscles controlled lens shape, allowing focusing at different distances.
evolution: beginning
Perhaps, the first vision was for direct sunlight, fire, lightning, or lightning bugs.
evolution: animals
Animal eyes are right and left, not above and below, to help align vertical direction.
development
Pax-6 gene has homeobox and regulates head and eye formation.
People often do not see scene changes or anomalies {change blindness}, especially if overall meaning does not change.
blinking
When scene changes during eye blinks, people do not see differences.
saccades
When scene changes during saccades, people do not see differences.
gradient
People do not see gradual changes.
masking
People do not see changes when masking hides scene changes.
featureless intermediate view
When a featureless gray picture flashes between views of first scene and slightly-different second scene, people do not see differences.
attentional load
If attentional load increases, change blindness increases.
Vision behavior and use determine vision phenomena {enactive perception} [Noë, 2002] [Noë, 2004] [O'Regan, 1992] [O'Regan and Noë, 2001].
To fixate moving visual target, or stationary target when head is moving {fixation, vision}|, vertebrates combine vestibular system, vision system, neck somatosensory, and extraocular proprioceptor movement-sensor inputs. For vision, eyes jump from fixation to fixation, as body, head, and/or eyes move. At each eye fixation, body parts have distances and angles to objects (landmarks). (Fixations last long enough to gather new information with satisfactory marginal returns. Fixations eventually gather new information too slowly, so eyes jump again.)
As observer looks in a plane mirror, mirror reflects observer top, bottom, right, and left at observed top, bottom, right, and left. Observer faces in opposite direction from reflection, reflection right arm is observer left arm, and reflection left arm is observer right arm, as if observer went through mirror and turned front side back (inside out) {mirror reversal}.
no inversion
Reflection through one point causes reflection and rotation (inversion). Inversion makes right become left, left become right, top become bottom, and bottom become top. Plane mirrors do not reflect through one point.
rotation
If an object is between observer and mirror, observer sees object front, and mirror reflects object back. Front top is at observer top, front bottom is at observer bottom, front right is at observer left, and front left is at observer right. Back top is at observer top, back bottom is at observer bottom, back right is at observer right, and back left is at observer left. It is like object has rotated horizontally 180 degrees. Mirrors cause rotation {mirror rotation}. 180-degree horizontal rotation around vertical axis exchanges right and left. 180-degree vertical rotation around right-left horizontal axis exchanges top and bottom. 180-degree vertical rotation around front-back horizontal axis exchanges right and left and top and bottom.
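The rotations and the mirror reflection above can be checked with coordinates, assuming x = right(+)/left(-), y = top(+)/bottom(-), z = front(+)/back(-); the coordinate convention is an illustrative assumption.

```python
# Each 180-degree rotation negates the two axes perpendicular to its rotation
# axis; a plane mirror (in the x-y plane) reverses front/back only.

def rotate_180_about_vertical(p):        # about y: exchanges right/left (and front/back)
    x, y, z = p
    return (-x, y, -z)

def rotate_180_about_left_right(p):      # about x: exchanges top/bottom (and front/back)
    x, y, z = p
    return (x, -y, -z)

def rotate_180_about_front_back(p):      # about z: exchanges right/left and top/bottom
    x, y, z = p
    return (-x, -y, z)

def mirror_reflect(p):                   # plane mirror in the x-y plane
    x, y, z = p
    return (x, y, -z)

right_top_front = (1, 1, 1)
print(rotate_180_about_vertical(right_top_front))    # (-1, 1, -1): left and back, still top
print(rotate_180_about_front_back(right_top_front))  # (-1, -1, 1): left and bottom, still front
print(mirror_reflect(right_top_front))               # (1, 1, -1): still right and top, now back
```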
mirror writing
If a transparent glass sheet has writing on the back side facing a plane mirror, observers looking at the glass front and mirror see the same "mirror" writing. People can easily read what someone writes on their foreheads, and it is not "mirror" writing. People can choose to observe from another viewpoint.
eyes
Because mirror reversal still occurs using only one eye, having two horizontally separated eyes does not affect mirror reversal. Observing mirror reversal while prone, with eyes vertically separated, does not affect mirror reversal.
reporting
Mirror reversals are not just verbal reports, because "mirror" writing is difficult to read and looks different from normal writing.
cognition
Because mirror reversal occurs even when people cannot perceive the mirror, mirror reversal does not have cognitive rotation around vertical axis. People do not see mirror reversal if they think a mirror is present, but it is not.
Light rays reflect from visual-field objects, forming a two-dimensional array {optic array} [Gibson, 1966] [Gibson, 1979].
Repeated stimuli can lead to not seeing {repetition blindness}, especially if overall meaning does not change [Kanwisher, 1987].
Surfaces can be transparent, translucent (semi-reflective), or opaque (reflective) {opacity}.
For each wavelength, a percentage {absorbance} of impinging light remains in the surface. Surface transmits or reflects the rest.
For each wavelength, a percentage {reflectance} of impinging light reflects from surface. Surface transmits or absorbs the rest. Reflectance changes at object boundaries are abrupt [Land, 1977]. Color depends on both illumination and surface reflectance [Land, 1977]. Comparing surfaces' reflective properties results in color.
For each wavelength, a percentage {transmittance} of impinging light transmits through surface. Surface reflects or absorbs the rest.
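Because impinging light at each wavelength is either absorbed, reflected, or transmitted, the three fractions sum to one. A tiny sketch with made-up values:

```python
# For a given wavelength, the transmitted fraction is whatever impinging light
# the surface neither absorbs nor reflects. Values below are illustrative.

def transmittance(absorbance: float, reflectance: float) -> float:
    """Remaining fraction of impinging light that passes through the surface."""
    return 1.0 - absorbance - reflectance

# A surface that absorbs 25% and reflects 50% of 550-nm light transmits 25%.
print(transmittance(0.25, 0.50))  # 0.25
```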
Inner eyeball has a visible-light receptor-cell layer {vision, anatomy}.
occipital lobe
Areas V2 and V4 detect contour orientation, regardless of luminance. Area V4 detects curved boundaries.
temporal lobe
Middle temporal-lobe area V5 detects pattern directions and motion gradients. Dorsal medial superior temporal lobe detects heading.
temporal lobe: inferotemporal lobe
Inferotemporal lobe (IT) detects shape parts. IT and CIP detect curvature and orientation.
retina and brain
Brain sends little feedback to retina [Brooke et al., 1965] [Spinelli et al., 1965].
pathways
Brain processes object recognition and color from area V1, to area V2, to area V4, to inferotemporal cortex. Cortical area V1, V2, and V3 damage impairs shape perception and pattern recognition, leaving only flux perception. Brain processes locations and actions in a separate faster pathway.
At first-ventricle top, chordates have cells {lamellar body} with cilia and photoreceptors. In vertebrates, lamellar body evolved to make parietal eye and pineal gland.
Cortical-neuron sets {spatial frequency channel} can detect different spatial-frequency ranges and so detect different object sizes.
Vision cells {vision, cells} are in retina, thalamus, and cortex.
One thousand cortical cells collectively {cardinal cell} code for one perception type.
Area-V4 neurons {color difference neuron} can detect adjacent and surrounding color differences, by relative intensities at different wavelengths.
Neurons {color-opponent cell} can detect output differences from different cone cells for same space direction.
Visual-cortex neurons {comparator neuron} can receive same output that eye-muscle motor neurons send to eye muscles, so perception can account for eye movements that change scenes.
Cells {double-opponent neuron} can have both ON-center and OFF-center circular fields and compare colors.
Some cortical cells {face cell} respond only to frontal faces, profile faces, familiar faces, facial expressions, or face's gaze direction. Face cells are in inferior-temporal cortex, amygdala, and other cortex. Face-cell visual field is whole fovea. Color, contrast, and size do not affect face cells [Perrett et al., 1992].
Some brain neurons {grandmother cell} {grandmother neuron} {Gnostic neuron} {place cell, vision} can recognize a perception or store a concept [Barlow, 1972] [Barlow, 1995] [Gross, 1998] [Gross, 2002] [Gross et al., 1969] [Gross et al., 1972] [Konorski, 1967]. Place cells recognize textures, objects, and contexts. For example, they fire only when animal sees face (face cell), hairbrush, or hand.
Small retinal cells {amacrine cell, vision} inhibit inner-plexiform-layer ganglion cells, using antitransmitter to block pathways. There are 27 amacrine cell types.
Photoreceptor cells excite retinal neurons {bipolar cell, vision}. There are ten bipolar-cell types. Parasol ganglion cells can receive from large-dendrite-tree bipolar cells {diffuse bipolar cell}.
input
Central-retina small bipolar cells {midget bipolar cell} receive from one cone. Peripheral-retina bipolar cells receive from more than one cone. Horizontal cells inhibit bipolar cells.
output
Bipolar cells send to inner plexiform layer to excite or inhibit ganglion cells, which can be up to five neurons away.
ON-center and OFF-center cells
ON-center midget bipolar cells increase output when light intensity increases in receptive-field center and/or decreases in receptive-field periphery. OFF-center midget bipolar cells increase output when light intensity decreases in receptive-field center and/or increases in receptive-field periphery.
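A minimal sketch of these center-surround responses, with an illustrative baseline output level; the gain and baseline values are assumptions, not measured cell parameters.

```python
# An ON-center unit's output rises when center intensity rises and/or surround
# intensity falls; an OFF-center unit does the opposite.

BASELINE = 0.5   # resting output level (illustrative)

def on_center_response(center: float, surround: float) -> float:
    """Output grows with center intensity and shrinks with surround intensity."""
    return max(0.0, BASELINE + (center - surround))

def off_center_response(center: float, surround: float) -> float:
    """Output shrinks with center intensity and grows with surround intensity."""
    return max(0.0, BASELINE + (surround - center))

# Bright spot on a dark background: the ON-center unit responds strongly.
print(on_center_response(center=1.0, surround=0.2))   # 1.3
print(off_center_response(center=1.0, surround=0.2))  # 0.0
```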
Retinal neurons {ganglion cell, retina} can receive from bipolar cells and send to thalamus lateral geniculate nucleus (LGN), which sends to visual-cortex hypercolumns.
midget ganglion cell
Small central-retina ganglion cells {midget ganglion cell} receive from one midget bipolar cell. Midget cells respond mostly to contrast. Most ganglion cells are midget ganglion cells.
parasol cell
Ganglion cells {parasol cell} {parasol ganglion cell} can receive from diffuse bipolar cells. Parasol cells respond mostly to change. Parasol cells are 10% of ganglion cells.
X cell
Ganglion X cells can make tonic and sustained signals, with slow conduction, to detect details and spatial orientation. X cells send to thalamus simple cells. X cells have small dendritic fields. X cells are more numerous in fovea.
Y cell
Ganglion Y cells can make phasic and transient signals, with fast conduction, to detect stimulus size and temporal motion. Y cells send to thalamus complex cells. Y cells have large dendritic fields. Y cells are more numerous in retinal periphery.
W cell
Ganglion W cells are small, are direction sensitive, and have slow conduction speed.
ON-center neuron
ON-center ganglion cells respond when light intensity above background level falls on their receptive field. Light falling on field surround inhibits cell. Bipolar cells excite ON-center neurons.
Four types of ON-center neuron depend on balance between cell excitation and inhibition. One has high firing rate at onset and zero rate at offset. One has high rate at onset, then zero, then high, and then zero. One has high rate at onset, goes to zero, and then rises to constant level. One has high rate at onset and then goes to zero.
OFF-center neuron
OFF-center ganglion cells increase output when light intensity decreases in receptive-field center. Light falling on field surround excites cell. Bipolar cells excite OFF-center neurons.
ON-OFF-center neuron
ON-OFF-center ganglion cells for motion use ON-center-neuron time derivatives to find movement position and direction. Amacrine cells excite transient ON-OFF-center neurons.
similar neurons
Ganglion cells are like auditory nerve cells, Purkinje cells, olfactory bulb cells, olfactory cortex cells, and hippocampal cells.
spontaneous activity
Ganglion-cell spontaneous activity can be high or low [Dowling, 1987] [Enroth-Cugell and Robson, 1984] [Wandell, 1995].
Retinal cells {horizontal cell} can receive from receptor cells and inhibit bipolar cells.
Retina has pigment cells {photoreceptor cell}, with three layers: cell nucleus, then inner segment, and then outer segment with photopigment. Visual-receptor cells find illumination logarithm.
types
Human vision uses four receptor types: rods, long-wavelength cones, middle-wavelength cones, and short-wavelength cones.
hyperpolarization
Visual receptor cells hyperpolarize up to 30 mV from resting level [Dowling, 1987] [Enroth-Cugell and Robson, 1984] [Wandell, 1995]. Photoreceptors have maximum response at one frequency and lesser responses farther from that frequency.
Rod-shaped retinal cells {rod cell} are night-vision photoreceptors, detect large features, and do not signal color.
frequency
Rods have maximum sensitivity at 498 nm, blue-green.
Just above cone threshold intensity {mesopic vision, rod}, rods are more sensitive to short wavelengths, so blue colors are brighter but colorless.
number
Retinas have 90 million rod cells.
layers
Rods have cell nucleus layer, inner layer that makes pigment, and outer layer that stores pigment. Outer layer is next to pigment epithelium at eyeball back.
size
Rods are larger than cones.
pigment
Rod light-absorbing pigment is rhodopsin. Cones have iodopsin.
rod cell and long-wavelength cone
Brain can distinguish colors using light that only affects rod cells and long-wavelength cone cells.
fovea
Fovea has no rod cells. Rod cells are denser around fovea.
Cone-shaped retinal cells {cone, cell} have daylight-vision photoreceptors and detect color and visual details.
types
Humans have three cone types. Cone maximum wavelength sensitivities are at indigo 437 nm {short-wavelength cone}, green 534 nm {middle-wavelength cone}, and yellow-green 564 nm {long-wavelength cone}. Shrimp can have eleven cone types.
evolution
Long-wavelength cones evolved first, then short-wavelength cones, and then middle-wavelength cones. Long-wavelength and middle-wavelength cones differentiated 30,000,000 years ago. Three cone types and trichromatic vision began in Old World monkeys.
fovea
Fovea has patches of only medium-wavelength or only long-wavelength cones. To improve acuity, fovea has few short-wavelength cones, because different colors focus at different distances. Fovea center has no short-wavelength cones [Curcio et al., 1991] [Roorda and Williams, 1999] [Williams et al., 1981] [Williams et al., 1991].
number
There are five million cones, mostly in fovea. Short-wavelength cones are mostly outside fovea.
size
Cones are smaller than rods.
pigment
Cone light-absorbing pigment is iodopsin. Rods have rhodopsin.
frequency
When rods saturate, cones have approximately same sensitivity to blue and red.
Just above cone threshold {mesopic vision, cone}, rods are more sensitive to short wavelengths, so blue colors are brighter but colorless. Retinal receptors do not detect pure or unmixed colors. Red light does not optimally excite one cone type but makes maximum excitation ratio between two cone types. Blue light excites short-wavelength cones and does not excite other cone types. Green light excites all cone types.
output
Cones send to one ON-center and one OFF-center midget ganglion cell.
Most mammals, including cats and dogs, have two photopigments and two cone types {dichromat}. For dogs, one photopigment has maximum sensitivity at 429 nm, and the other has maximum sensitivity at 555 nm. Early mammals and most other mammals have maxima at 424 nm and 560 nm.
Animals can have only one photopigment and one cone type {monochromat} {cone monochromat}. They have limited color range. Animals can have only rods and no cones {rod monochromat} and cannot see color.
Reptiles and birds have four different photopigments {quadchromat}, with maximum sensitivities at near-ultraviolet 370 nm, 445 nm, 500 nm, and 565 nm. Reptiles and birds have yellow, red, and colorless oil droplets, which make wavelength range less, except for ultraviolet sensor.
Women can have two different long-wavelength cones {L-cone} {L photopigment}, one short-wavelength cone {S-cone} {S photopigment}, and one middle-wavelength cone {M-cone} {M photopigment}, and so have four different pigments {tetrachromacy}. Half of men have one or the other long-wavelength cone [Asenjo et al., 1994] [Jameson et al., 2001] [Jordan and Mollon, 1993] [Nathans, 1999].
People with normal color vision have three different photopigments and cones {trichromat}.
Land-vertebrate eyes {eye} are spherical and focus images on retina.
eye muscles
Eye muscles exert constant tension against movement, so effort required to move eyes or hold them in position is directly proportional to eye position. Midbrain oculomotor nucleus sends, in oculomotor nerve, to inferior oblique muscle below eyeball, superior rectus muscle above eyeball, inferior rectus muscle below eyeball, and medial rectus muscle on inside. Pons abducens nucleus sends, in abducens nerve, to lateral rectus muscle on outside. Caudal midbrain trochlear nucleus sends, in trochlear nerve, to superior oblique muscle around light path from above eyeball.
eye muscles: convergence
Eyes converge toward each other as object gets nearer than 10 meters.
eye muscles: zero-gravity
In zero-gravity environment, eye resting position shifts upward, but people are not aware of shift.
fiber projection
Removing embryonic eye and re-implanting it in rotated positions does not change nerve fiber projections from retina onto visual cortex.
Horseshoe crab (Limulus) eye {simple eye} can only detect light intensity, not direction. Input/output equation uses relation between Green function and covariance, because synaptic transmission is probabilistic.
Most mammals and birds have tissue fold {inner eyelid} {palpebra tertia} that, when eye retracts, comes down from above eye to cover cornea. Inner eyelid has outside mucous membrane {conjunctiva}, inner-side lymphoid follicles, and lacrimal gland.
Reptiles and other vertebrates have transparent membrane {nictitating membrane}| that can cover and uncover eye.
Eye has transparent cells {cornea}| protruding in front. Cornea provides two-thirds of light refraction. Cornea has no blood vessels and absorbs nutrients from aqueous humor. Cornea has many nerves. Non-spherical-cornea astigmatism distorts vision. Corneas can be transplanted without rejection.
Elastic and transparent cell layers {lens, eye} {crystalline lens} attach to ciliary muscles that change lens shape. To become transparent, lens cells destroy all cell organelles, leaving only protein {crystallin} and outer membrane. Lens cells are all the same. They align and interlock [Weale, 1978]. Lens shape accommodates when objects are less than four feet away. Lens maximum magnification is 15.
Sphincter muscles in a colored ring {iris, eye}| close pupils. When iris is translucent, light scattering causes blue color. In mammals, autonomic nervous system controls pupil smooth muscles. In birds, striate muscles control pupil opening.
Eye has opening {pupil}| into eye. In bright light, pupil is 2 mm diameter. At twilight, pupil is 10 mm diameter. Iris sphincter muscles open and close pupils. Pupil reflex goes from one eye to the other.
Eyeball has insides {fundus, eye}.
Liquid {aqueous humor}| can be in anterior chamber behind cornea and nourish cornea and lens.
Liquid {vitreous humor}| fills main eyeball chamber between lens and retina.
Eyeball has outer white opaque connective-tissue layer {sclera}|.
Eye has a cartilage loop {trochlea}| that acts as a pulley for the superior oblique muscle.
Eyeball has inner blood-vessel layer {choroid}.
Between retina and choroid is a cell layer {retinal pigment epithelium} (RPE) and Bruch's membrane. RPE cells maintain rods and cones by absorbing used molecules.
Retinal-pigment epithelium and membrane {Bruch's membrane} {Bruch membrane} are between retina and choroid.
At back inner eyeball, visual receptor-cell layers {retina}| have 90 million rod cells, five million cones, and one million optic nerve axons.
cell types
Retina has 50 cell types.
cell types: clustering
Retina has clusters of same cone type. Retina areas can lack cone types. Fovea has few short-wavelength cones.
development
Retina grows by adding cell rings to periphery. Oldest eye part is at center, near where optic nerve fibers leave retina. In early development, contralateral optic nerve fibers cross over to connect to optic tectum. In early development, optic nerve fibers and brain regions have topographic maps. After maturation, axons can no longer alter connections.
processing
Retina cells separate information about shape, reflectance, illumination, and viewpoint.
Ganglion-cell axons leave retina at region {blindspot}| medial to fovea [DeWeerd et al., 1995] [Finger, 1994] [Fiorani, 1992] [Komatsu and Murakami, 1994] [Komatsu et al., 2000] [Murakami et al., 1997].
Cone cells are long-wavelength (L), middle-wavelength (M), or short-wavelength (S). Outside fovea, cones can form two-dimensional arrays {color-receptor array} with L, M, and S cones in equilateral triangles. Receptor rows run ...S-M-L-S-M-L-S..., and receptor rows above and below are offset a half step: ...-L-S-M-L-S-M-... / ...S-M-L-S-M-L-S... / ...-L-S-M-L-S-M-...
hexagons
Cones have six different cones around them in hexagons: three of one cone and three of other cone. No matter what order the three cones have, ...S-M-L-S-M..., ...S-L-M-S-L..., or ...M-L-S-M-L..., M and L are beside each other and S always faces L-M pair, allowing red+green brightness, red-green opponency, and yellow-blue opponency. L receptors work with three surrounding M receptors and three surrounding S receptors. M receptors work with three surrounding L receptors and three surrounding S receptors. S receptors work with six surrounding L+M receptor pairs, which are from three equilateral triangles, so each S has three surrounding L and three surrounding M receptors.
In all directions, fovea has alternating long-wavelength and middle-wavelength cones: ...-L-M-L-M-.
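A minimal sketch, assuming simple sums and differences of L, M, and S responses (weights and function names are illustrative assumptions), of the red+green brightness, red-green opponency, and yellow-blue opponency signals that the hexagonal arrangement above supports:

```python
# Illustrative opponent signals built from cone responses; the weights are
# assumptions for demonstration, not measured physiology.

def opponent_signals(L, M, S):
    luminance   = L + M          # "red+green" brightness signal
    red_green   = L - M          # red-green opponency
    yellow_blue = (L + M) - S    # yellow-blue opponency
    return luminance, red_green, yellow_blue

print(opponent_signals(L=0.7, M=0.6, S=0.1))  # (1.3, 0.1, 1.2)
```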
Primates have central retinal region {fovea}| that tracks motions and detects self-motion. Retinal periphery detects spatial orientation. Fovea contains 10,000 neurons in a two-degree circle. Fovea has no rods. Fovea center has no short-wavelength cones. Fovea has patches of only medium-wavelength cones or only long-wavelength cones. Fovea has no blood vessels, which pass around fovea.
Retinal layers {inner plexiform layer} can have bipolar-cell and amacrine-cell axons and ganglion-cell dendrites. Inner plexiform layer has ten sublayers.
Near retina center is a yellow-pigmented region {macula lutea}| {yellow spot}. Yellow pigment increases with age. If incident light changes spectra, people can briefly see macula image {Maxwell spot}.
Lateral-geniculate-nucleus magnocellular neurons measure luminance {luminance channel, vision} {achromatic channel} {spectrally non-opponent channel}.
Lateral-geniculate-nucleus parvocellular neurons measure colors {chromatic channel} {spectrally opponent channel}.
Regions {horizontal gaze center}, near pons abducens nucleus, can detect right-to-left and left-to-right motions.
Regions {vertical gaze center}, near midbrain oculomotor nucleus, can detect up and down motions.
Visual processing finds colors, features, parts, wholes, spatial relations, and motions {vision, physiology}. Brain first extracts elementary perceptual units, contiguous lines, and non-accidental properties.
properties: sizes
Observers do not know actual object sizes but only judge relative sizes.
properties: reaction speed
Reaction to visual perception takes 450 milliseconds [Bachmann, 2000] [Broca and Sulzer, 1902] [Efron, 1967] [Efron, 1970] [Efron, 1973] [Taylor and McCloskey, 1990] [Thorpe et al., 1996] [VanRullen and Thorpe, 2001].
properties: timing
Location perception is before color perception. Color perception is before orientation perception. Color perception is 80 ms before motion perception. If people must choose, they associate current color with motion 100 ms before. Brain associates two colors or motions before associating color and motion.
processes: change perception
Brain does not maintain scene between separate images. Perceptual cortex changes only if brain detects change. Perceiving changes requires high-level processing.
processes: contrast
Retina neurons code for contrast, not brightness. Retina compares point brightness with average brightness. Retinal-nerve signal strength automatically adjusts to same value, whatever scene average brightness.
processes: orientation response
High-contrast feature or object movements cause eye to turn toward object direction {orientation response, vision}.
processes: voluntary eye movements
Posterior parietal and pre-motor cortex plan and command voluntary eye movements [Bridgeman et al., 1979] [Bridgeman et al., 1981] [Goodale et al., 1986]. Stimulating superior-colliculus neurons can cause angle-specific eye rotation. Stimulating frontal-eye-field or other superior-colliculus neurons makes eyes move to specific locations, no matter from where eye started.
information
Most visual information comes from receptors near boundaries, which have large brightness or color contrasts. For dark-adapted eye, each absorbed photon supplies one information bit. At higher luminance, 10,000 photons make one bit.
People lower and raise eyelids {blinking}| every few seconds.
purpose
Eyelids close and open to lubricate eye [Gawne and Martin, 2000] [Skoyles, 1997] [Volkmann et al., 1980]. Blinking can be a reflex to protect eye.
rate
Blinking rate increases with anxiety, embarrassment, stress, or distraction, and decreases with concentration. Mind inhibits blinking just before anticipated events.
perception
Automatic blinks do not noticeably change scene [Akins, 1996] [Blackmore et al., 1995] [Dmytryk, 1984] [Grimes, 1996] [O'Regan et al., 1999] [Rensink et al., 1997] [Simons and Chabris, 1999] [Simons and Levin, 1997] [Simons and Levin, 1998] [Wilken, 2001].
Vision maintains constancies: size constancy, shape constancy, color constancy, and brightness constancy {constancy, vision}. Size constancy is accurate and learned.
Scene features land on retina at distances {eccentricity, retina} {visual eccentricity} from fovea.
Visual features can blend {feature inheritance} [Herzog and Koch, 2001].
If limited or noisy stimuli come from space region, perception completes region boundaries and surface textures {filling-in}| {closure, vision}, using neighboring boundaries and surface textures.
perception
Filling-in always happens, so people never see regions with missing information. If region has no information, people do not notice region, only scene.
perception: conceptual filling-in
Brain perceives occluded object as whole-object figure partially hidden behind intervening-object ground {conceptual filling-in}, not as separate, unidentified shape beside intervening object.
perception: memory
Filling-in uses whole brain, especially innate and learned memories, as various neuron assemblies form and dissolve and excite and inhibit.
perception: information
Because local neural processing makes incomplete and approximate representations, typically with ambiguities and contradictions, global information uses marked and indexed features to build complete and consistent perception. Brain uses global information when local region has low receptor density, such as retina blindspot or damaged cells. Global information aids perception during blinking and eye movements.
processes: expansion
Surfaces recruit neighboring similar surfaces to expand homogeneous regions by wave entrainment. Contours align by wave entrainment.
processes: lateral inhibition
Lateral inhibition distinguishes and sharpens boundaries. Surfaces use constraint satisfaction to optimize edges and regions.
processes: spreading
Brain fills in using line completion, motion continuation, and color spreading. Brain fills areas and completes half-hidden object shapes. Blindspot filling-in maintains lines and edges {completion, filling-in}, preserves motion using area MT, and keeps color using area V4.
processes: surface texture
Surfaces have periodic structure and spatial frequency. Surface texture can expand to help filling in. Blindspot filling-in continues background texture using area V3.
processes: interpolation
Brain fills in using plausible guesses from surroundings and interpolation from periphery. For large damaged visual-cortex region, filling-in starts at edges and goes inward toward center, taking several seconds to finish [Churchland and Ramachandran, 1993] [Dahlbom, 1993] [Kamitani and Shimojo, 1999] [Pessoa and DeWeerd, 2003] [Pessoa et al., 1998] [Poggio et al., 1985] [Ramachandran, 1992] [Ramachandran and Gregory, 1991].
Stimuli blend if less than 200 milliseconds apart {flicker fusion frequency} [Efron, 1973] [Fahle, 1993] [Gowdy et al., 1999] [Gur and Snodderly, 1997] [Herzog et al., 2003] [Nagarajan et al., 1999] [Tallal et al., 1998] [Yund et al., 1983] [Westheimer and McKee, 1977].
People have different abilities to detect color radiance. Typical people {Standard Observer} have maximum sensitivity at 555 nm and see brightness {luminance, Standard Observer} according to standard radiance weightings at different wavelengths. Brightness varies with luminance logarithm.
In dim light, without focus on anything, black, gray, and white blobs, smaller in brighter light and larger in dimmer light, flicker on surfaces. In darkness, people see large-size regions slowly alternate between black and white. Brightest blobs are up to ten times brighter than background. In low-light conditions, people see three-degrees-of-arc circular regions, alternating randomly between black and white several times each second {variable resolution}. If eyes move, pattern moves. In slightly lighter conditions, people see one-degree-of-arc circular regions, alternating randomly between dark gray and light gray, several times each second. In light conditions, people see colors, with no flashing circles.
Flicker rate varies with activity. During relaxation, flicker rate is 4 Hz to 20 Hz. At flicker rates above 25 Hz, people cannot see flicker.
Flicker shows that sense qualities have elements.
causes
Variable-resolution size reflects sense-field dynamic building. Perhaps, fewer receptor numbers can respond to lower light levels. Perhaps, intensity modulates natural oscillation. Perhaps, rods have competitive inhibition and excitation [Hardin, 1988] [Hurvich, 1981].
Observers can look {visual search} for objects, features, locations, or times {target, search} in scenes or lists.
distractors
Other objects {distractor, search} are not targets. Search time is directly proportional to number of targets and distractors {set size, search}.
types
Searches {conjunction search} can be for feature conjunctions, such as both color and orientation. Conjunction searches {serial self-terminating search} can look at items in sequence until finding target. Speed decreases with number of targets and distractors.
Searches {feature search} can be for color, size, orientation, shadow, or motion. Feature searches are fastest, because mind searches objects in parallel.
Searches {spatial search} can be for feature conjunctions that have shapes or patterns, such as two features that cross. Mind performs spatial searches in parallel but can only search feature subsets {limited capacity parallel process}.
guided search theory
A parallel process {preattentive stage} suggests serial-search candidates {attentive stage} {guided search theory, search}.
Vision combines output from both eyes {binocular vision}|. Cats, primates, and predatory birds have binocular vision. Binocular vision allows stereoscopic depth perception, increases light reception, and detects differences between camouflage and surface. During cortex-development sensitive period, what people see determines input pathways to binocular cells and orientation cells [Blakemore and Greenfield, 1987] [Cumming and Parker, 1997] [Cumming and Parker, 1999] [Cumming and Parker, 2000].
One stimulus can affect both eyes, and effects can add {binocular summation}.
Visual-cortex cells {disparity detector} can combine right and left eye outputs to detect relative position disparities. Disparity detectors receive input from same-orientation orientation cells at different retinal locations. Higher binocular-vision cells detect distance directly from relative disparities, without form or shape perception.
People using both eyes do not know which eye {eye-of-origin} saw something [Blake and Cormack, 1979] [Kolb and Braun, 1995] [Ono and Barbieto, 1985] [Pickersgill, 1961] [Porac and Coren, 1986] [Smith, 1945] [Helmholtz, 1856] [Helmholtz, 1860] [Helmholtz, 1867] [Helmholtz, 1962].
Adaptation can transfer from one eye to the other {interocular transfer}.
Boundaries {contour, vision} have brightness differences and are the most-important visual perception. Contours belong to objects, not background.
curved axes
Curved surfaces have perpendicular curved long and short axes. In solid objects, short axis is object depth axis and indicates surface orientation. Curved surfaces have dark edge in middle, where light and dark sides meet.
completion
Mind extrapolates or interpolates contour segments to make object contours {completion, contour}.
When looking only at object-boundary part, even young children see complete figures. Children see completed outline, though they know it is not actually there.
crowding
If background contours surround figure, figure discrimination and recognition fail.
Two line segments can belong to same contour {relatability}.
Perception extends actual lines to make imaginary figure edges {subjective contour}|. Subjective contours affect depth perception.
Rods and cones {duplex vision} operate in different light conditions.
Vision has systems {photopic system} for daylight conditions.
Vision has systems {scotopic system} for dark or nighttime conditions.
Seeing at dusk {mesopic vision, dark} {twilight vision} is more difficult and dangerous.
Brain can find depth and distance {depth perception} {distance perception} in scenes, paintings, and photographs.
depth: closeness
Closer objects have higher edge contrast, more edge sharpness, position nearer scene bottom, larger size, overlap on top, and transparency. Higher edge contrast is most important. More edge sharpness is next most important. Position nearer scene bottom is more important for known eye-level. Transparency is least important. Nearer objects are redder.
depth: farness
Farther objects have smaller retinal size; are closer to horizon (if below horizon, they are higher than nearer objects); have lower contrast; are hazier, blurrier, and fuzzier with less texture details; and are bluer or greener. Nearer objects overlap farther objects and cast shadows on farther objects.
binocular depth cue: convergence
Focusing on near objects causes extraocular muscles to turn eyeballs toward each other, and kinesthesia sends this feedback to vision system. More tightening and stretching means nearer. Objects farther than ten meters cause no muscle tightening or stretching, so convergence information is useful only for distances less than ten meters.
binocular depth cue: shadow stereopsis
For far objects, with very small retinal disparity, shadows can still have perceptibly different angles {shadow stereopsis} [Puerta, 1989], so larger angle differences are nearer, and smaller differences are farther.
binocular depth cue: stereopsis
If eye visual fields overlap, the two scenes differ by a linear displacement, due to different sight-line angles. For a visual feature, displacement is the triangle base, which has angles at each end between the displacement line and sight-line, allowing triangulation to find distance. At farther distances, displacement is smaller and angle differences from 90 degrees are smaller, so distance information is imprecise.
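A minimal sketch of the triangulation just described, assuming the interocular separation is the triangle base and each eye contributes one sight-line angle measured from that base (all names and values are illustrative):

```python
import math

def distance_by_triangulation(baseline_m, angle_left_rad, angle_right_rad):
    """Perpendicular distance from the baseline to the feature.

    The two sight lines meet at the feature, so
    distance = baseline / (cot(angle_left) + cot(angle_right)).
    """
    return baseline_m / (1.0 / math.tan(angle_left_rad) + 1.0 / math.tan(angle_right_rad))

# Eyes about 6.5 cm apart, both sight lines near 89 degrees from the baseline
# (a nearly straight-ahead feature about two meters away).
d = distance_by_triangulation(0.065, math.radians(89.0), math.radians(89.0))
print(round(d, 2), "m")  # ~1.86 m
```

With both angles near 90 degrees, small angle errors produce large distance errors, which is why distance information from stereopsis is imprecise at long range.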
binocular depth cue: inference
Inference includes objects at edges of retinal overlap in stereo views.
monocular depth cue: aerial perspective
Higher scene contrast means nearer, and lower contrast means farther. Bluer means farther, and redder means nearer.
monocular depth cue: accommodation
Focusing on near objects causes ciliary muscles to tighten to increase lens curvature, and kinesthesia sends this feedback to vision system. More tightening and stretching means nearer. Objects farther than two meters cause no muscle tightening or stretching, so accommodation information is useful only for distances less than two meters.
monocular depth cue: blur
More blur means farther, and less blur means nearer.
monocular depth cue: color saturation
Bluer objects are farther, and redder objects are nearer.
monocular depth cue: color temperature
Bluer objects are farther, and redder objects are nearer.
monocular depth cue: contrast
Higher scene contrast means nearer, and lower contrast means farther. Edge contrast, edge sharpness, overlap, and transparency depend on contrast.
monocular depth cue: familiarity
People can have previous experience with objects and their size, so larger retinal size is closer, and smaller retinal size is farther.
monocular depth cue: fuzziness
Fuzzier objects are farther, and clearer objects are nearer.
monocular depth cue: haziness
Hazier objects are farther, and clearer objects are nearer.
monocular depth cue: height above and below horizon
Objects closer to horizon are farther, and objects farther from horizon are nearer. If object is below horizon, higher objects are farther, and lower objects are nearer. If object is above horizon, lower objects are farther, and higher objects are nearer.
monocular depth cue: kinetic depth perception
Objects becoming larger are moving closer, and objects becoming smaller are moving away {kinetic depth perception}. Kinetic depth perception is the basis for judging time to collision.
monocular depth cue: lighting
Light and shade have contours. Light is typically above objects. Light typically falls on nearer objects.
monocular depth cue: motion parallax
While looking at an object, if observer moves, other objects that appear to move backwards are nearer than object, and other objects that appear to move forwards are farther than object. Among the farther objects, objects moving faster are nearer, and objects moving slower are farther. Among the nearer objects, objects moving faster are nearer, and objects moving slower are farther. Some birds use head bobbing to induce motion parallax. Squirrels move orthogonally to objects. While observer moves and looks straight ahead, objects that appear to move backwards faster are closer, and objects that appear to move backwards slower are farther.
monocular depth cue: occlusion
Objects that overlap other objects {interposition} are nearer, and objects behind other objects are farther {pictorial depth cue}. Objects with occluding contours are farther.
monocular depth cue: peripheral vision
At the visual periphery, parallel lines curve, like the effect of a fisheye lens, framing the visual field.
monocular depth cue: perspective
By linear perspective, parallel lines converge, so, for same object, smaller size means farther distance.
monocular depth cue: relative movement
If objects physically move at same speed, objects moving slower are farther, and objects moving faster are nearer, to a stationary observer.
monocular depth cue: relative size
If two objects have the same shape and are judged to be the same, object with larger retinal size is closer.
monocular depth cue: retinal size
If observer has previous experience with object size, object retinal size allows calculating distance.
monocular depth cue: shading
Light and shade have contours. Shadows are typically below objects. Shade typically falls on farther objects.
monocular depth cue: texture gradient
Senses can detect gradients by difference ratios. Less fuzzy and larger surface-texture sizes and shapes are nearer, and more fuzzy and smaller are farther. Bluer and hazier surface texture is farther, and redder and less hazy surface texture is closer.
properties: precision
Depth-calculation accuracy and precision are low.
properties: rotation
Fixed object appears to revolve around eye if observer moves.
factors: darkness
In the dark, objects appear closer.
processes: learning
People learn depth perception and can lose depth-perception abilities.
processes: coordinates
Binocular depth perception requires only ground plane and eye point to establish coordinate system. Perhaps, sensations aid depth perception by building geometric images [Poggio and Poggio, 1984].
processes: two-and-one-half dimensions
ON-center-neuron, OFF-center-neuron, and orientation-column intensities build two-dimensional line arrays, then two-and-one-half-dimensional contour arrays, and then three-dimensional surfaces and texture arrays [Marr, 1982].
processes: three dimensions
Brain derives three-dimensional images from two-dimensional ones by assigning convexity and concavity to lines and vertices and making convexities and concavities consistent.
processes: triangulation model
Animals continually track distances and directions to distinctive landmarks.
Adjacent points not at edges are on same surface and so at same distance {continuity constraint, depth}.
Scenes land on right and left eye with same geometric shape, so feature distances and orientations are the same {corresponding retinal points}.
Brain stimuli {cyclopean stimulus} can result only from binocular disparity.
One eye can find object-size to distance ratio {distance ratio} {geometric depth}, using three object points. See Figure 1.
Eye fixates on object center point, edge point, and opposite-edge point. Assume object is perpendicular to sightline. Assume retina is planar. Assume that eye is spherical, rotates around center, and has calculable radius.
Light rays go from center point, edge point, and opposite edge point to retina. Using kinesthetic and touch systems and motor cortex, brain knows visual angles and retinal distances. Solving equations can find object-size to distance ratio.
When eye rotates, scenes do not change, except for focus. See Figures 2 and 3.
Calculating distances to space points
Vision cone receptors receive from a circular area of space that subtends one minute of arc (Figure 3). Vision neurons receive from a circular area of space that subtends one minute to one degree of arc.
To detect distance, neuron arrays receive from a circular area of space that subtends one degree of arc (Figure 4). For the same angle, circular surfaces at farther distances have longer diameters, bigger areas, and smaller circumference curvature.
Adjacent neuron arrays subtend the same visual angle and have retinal (and cortical) overlap (Figure 5). Retinal and cortical neuron-array overlap defines a constant length. Constant-length retinal-image size defines the subtended visual angle, which varies inversely with distance, allowing calculating distance (r = s / A) in one step.
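A minimal sketch of the one-step formula r = s / A, assuming the constant length s is known and the visual angle A is in radians (values are illustrative):

```python
import math

def distance_from_visual_angle(constant_length_m, visual_angle_rad):
    """Small-angle arc-length relation: distance r = length s / angle A."""
    return constant_length_m / visual_angle_rad

# A 1 cm feature subtending one degree of arc lies roughly 0.57 m away.
r = distance_from_visual_angle(0.01, math.radians(1.0))
print(round(r, 3), "m")  # ~0.573 m
```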
Each neuron array sends to a register for a unique spatial direction. The register calculates distance and finds color. Rather than use multiple registers at multiple locations, as in neural networks or holography, a single register can place a color at the calculated distance in the known direction. There is one register for each direction and distance. Registers are not physical neuron conglomerations but functional entities.
Both eyes can turn outward {divergence, eye}, away from each other, as objects get farther. If divergence is successful, there is no retinal disparity.
Brain expands more distant objects in proportion to the more contracted retinal-image size, making apparent size increase with increasing distance {size-constancy scaling} {Emmert's law} {Emmert law}. Brain determines size-constancy scaling by eye convergence, geometric perspective, texture gradients, and image sharpness. Texture gradients decrease in size with distance. Image sharpness decreases with distance.
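A minimal sketch of size-constancy scaling, assuming apparent size is proportional to retinal-image size times perceived distance (the proportionality constant and names are illustrative assumptions):

```python
def apparent_size(retinal_image_size, perceived_distance, k=1.0):
    """Emmert-law-style scaling: apparent size grows with perceived distance."""
    return k * retinal_image_size * perceived_distance

# The same retinal image looks twice as large when attributed to a surface
# twice as far away.
print(apparent_size(0.01, 1.0))  # 0.01
print(apparent_size(0.01, 2.0))  # 0.02
```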
Two eyes can measure relative distance to scene point, using geometric triangulation {triangulation, eye}. See Figure 1.
comparison
Comparing triangulations from two different distances does not give more information. See Figure 2.
movement
Moving eye sideways while tracking scene point can calculate distance from eye to point, using triangulation. See Figure 3.
Moving eye sideways while tracking scene points calibrates distances, because other scene points travel across retina. See Figure 4.
Moving eye from looking at object edge to looking at object middle can determine scene-point distance. See Figure 5.
Moving eye from looking at object edge to looking at object other edge at same distance can determine scene-point distance. See Figure 6.
Scene features land on one retina point {uniqueness constraint, depth}, so brain stereopsis can match right-retina and left-retina scene points.
Various features {depth cue}| {cue, depth} signal distance. Depth cues are accommodation, colors, color saturation, contrast, fuzziness, gradients, haziness, distance below horizon, linear perspective, movement directions, occlusions, retinal disparities, shadows, size familiarity, and surface textures.
types
Non-metrical depth cues can show relative depth, such as object blocking other-object view. Metrical depth cues can show quantitative information about depth. Absolute metrical depth cues can show absolute distance by comparison, such as comparing to nose size. Relative metrical depth cues can show relative distance by comparison, such as twice as far away.
Vision has less resolution at far distances. Air has haze, smoke, and dust, which absorb redder light, so farther objects are bluer, have less light intensity, and have blurrier edges {aerial perspective}| than if air were transparent. (Air scatters blue more than red, but this effect is small except for kilometer distances.)
Brain perceives depth using scene points that stimulate right and left eyes differently {binocular depth cue} {binocular depth perception}. Eye convergences, retinal disparities, and surface-area sizes have differences.
surface area size
Brain can judge distance by overlap, total scene area, and area-change rate. Looking at surfaces, eyes see semicircles. See Figure 1. Front edge is semicircle diameter, and vision field above that line is semicircle half-circumference. For two eyes, semicircles overlap in middle. Closer surfaces make overlap less, and farther surfaces make overlap more. Total scene surface area is more for farther surfaces and less for closer surfaces. Movement changes perceived area at rate that depends on distance. Closer objects have faster rates, and farther objects have slower rates.
For fixation, both eyes turn toward each other {convergence, eye} {eye convergence} when objects are nearer than 10 meters. If convergence is successful, there is no retinal disparity. Greater eye convergence means object is closer, and lesser eye convergence means object is farther. See Figure 1.
Brain can judge surface relative distance by intensity change during movement toward and away from surface {intensity difference during movement}. See Figure 1.
moving closer
Moving from point to half that distance increases intensity four times, because eye gathers four times more light at closer radius.
moving away
Moving from point to double that distance decreases intensity four times, because eye gathers four times less light at farther radius.
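A minimal sketch of the inverse-square relation used above, assuming gathered light falls off with the square of distance (values are illustrative):

```python
def relative_intensity(reference_distance, new_distance):
    """Intensity at the new distance relative to the reference distance."""
    return (reference_distance / new_distance) ** 2

print(relative_intensity(2.0, 1.0))  # 4.0  (half the distance, four times the light)
print(relative_intensity(2.0, 4.0))  # 0.25 (double the distance, a quarter of the light)
```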
moving sideways
Movement side to side and up and down changes intensity slightly by changing distance slightly. Perhaps, saccades and/or eyeball oscillations help determine distances.
memory
Experience with constant-intensity objects establishes distances.
accommodation
Looking at an object while moving it or the eye closer or farther causes lens-muscle tightening or loosening and makes the visual angle larger or smaller. If brain knows depth, movement toward and away can measure source intensity.
light ray
Scene points along same light ray project to same retina point. See Figure 2.
haze
Atmospheric haze affects light intensity. Light from an object twice as far away encounters twice as many haze particles, so haze attenuates intensity approximately exponentially with distance, and farther objects appear dimmer.
sound
Sound-intensity changes can find distances. Bats use sonar because it is too dark to see at night. Dolphins use sonar because water distorts light.
One eye can perceive depth {monocular depth cue}. Monocular depth cues are accommodation, aerial perspective, color, color saturation, edge, monocular movement parallax, occlusion, overlap, shadows, and surface texture.
Closer object can hide farther object {occlusion, cue}|. Perception knows many rules about occlusion.
Using both eyes can make depth and three dimensions appear {stereoscopic depth} {stereoscopy} {stereopsis}. Stereopsis aids random shape perception. Stereoscopic data analysis is independent of other visual analyses. Monocular depth cues can cancel stereoscopic depth. Stereoscopy does not allow highly unlikely depth reversals or unlikely depths.
Features farther away are smaller than when closer, so surfaces have larger texture nearby and smaller texture farther away {texture gradient}.
During fixations, eye is not still but drifts irregularly {drift, eye} {eye drift} through several minutes of arc, over several fovea cones.
During fixations, eye is not still but moves in straight lines {microsaccade} over 10 to 100 fovea cones.
Eyes scan scenes {scanning, vision} in regular patterns along outlines or contours, looking for angles and sharp curves, which give the most shape information.
During fixations, eye is not still but has tremor {eye tremor} {tremor, eye} over one or two fovea cones, as it also drifts.
After fixations lasting 120 ms to 130 ms, eye moves {saccade}|, in 100 ms, to a new fixation position.
brain
Superior colliculus controls involuntary saccades. Brain controls saccades using fixed vectors in retinotopic coordinates and using endpoint trajectories in head or body coordinates [Bridgeman et al., 1979] [Bridgeman et al., 1981] [Goodale et al., 1986].
movement
People do not have saccades while following moving objects or turning head while fixating objects.
transformation
When eye moves from one fixation to another, brain translates whole image up to 100 degrees of arc. World appears to stand still while eyes move, probably because motor signals to move eyes cancel perceptual retinal movement signals.
perception
Automatic saccades do not noticeably change scene [Akins, 1996] [Blackmore et al., 1995] [Dmytryk, 1984] [Grimes, 1996] [O'Regan et al., 1999] [Rensink et al., 1997] [Simons and Chabris, 1999] [Simons and Levin, 1997] [Simons and Levin, 1998] [Wilken, 2001].
Brain does not block input from eye to brain during saccades, but cortex suppresses vision during saccades {saccadic suppression}, so image blurs less. For example, people cannot see their eye movements in mirrors.
In land-vertebrate eyes, flexible lens focuses {accommodation, vision} image by changing surface curvature using eye ciliary muscles. In fish, an inflexible lens moves backwards and forwards, as in cameras. Vision can focus image on fovea, by making thinnest contour line and highest image-edge gradient [Macphail, 1999].
process
To accommodate, lens muscles start relaxed, with no accommodation. Brain tightens lens muscles and stops at highest spatial-frequency response.
distance
Far objects require no eye focusing. Objects within four feet require eye focusing to reduce blur. Brain can judge distance by muscle tension, so one eye can measure distance. See Figure 1.
Pinhole camera can focus scene, but eye is not pinhole camera. See Figure 2.
far focus
If accommodation is for point beyond object, magnification is too low, edges are blurry, and spatial-frequency response is lower, because scene-point light rays land on different retina locations before they meet at focal point. Focal point is behind the retina.
near focus
If accommodation is for point nearer than object, magnification is too high, edges are blurry, and spatial-frequency response is lower, because scene-point light rays meet at focal point and then land on different retina locations. Focal point is inside the eyeball, in front of the retina.
Right and left retinas see different images {retinal disparity} {binocular disparity}| [Dacey et al., 2003] [DeVries and Baylor, 1997] [Kaplan, 1991] [Leventhal, 1991] [MacNeil and Masland, 1998] [Masland, 2001] [Polyak, 1941] [Ramón y Cajal, 1991] [Rodieck et al., 1985] [Rodieck, 1998] [Zrenner, 1983].
correlation
Brain can correlate retinal images to pair scene retinal points and then find distances and angles.
fixation
Assume eye fixates on a point straight-ahead. Light ray from scene point forms horizontal azimuthal angle and vertical elevation angle with straight-ahead direction. With no eye convergence, eye azimuthal and elevation angles from scene point differ {absolute disparity}. Different scene points have different absolute disparities {relative disparity}.
When both eyes fixate on same scene point, eye convergence places scene point on both eye foveas at corresponding retinal points, azimuthal and elevation angles are the same, and absolute disparity is zero. See Figure 1. After scene-point fixation, azimuth and elevation angles differ for all other scene points. Brain uses scene-point absolute-disparity differences to find relative disparities to estimate relative depth.
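A minimal sketch of the disparity bookkeeping just described, assuming absolute disparity is the difference between the two eyes' angles to a point and relative disparity is the difference between two points' absolute disparities (angles are illustrative, in degrees):

```python
def absolute_disparity(angle_left_deg, angle_right_deg):
    """Difference between the two eyes' angles to the same scene point."""
    return angle_left_deg - angle_right_deg

def relative_disparity(abs_disparity_a, abs_disparity_b):
    """Difference between two points' absolute disparities."""
    return abs_disparity_a - abs_disparity_b

fixated = absolute_disparity(0.0, 0.0)      # fixated point: zero absolute disparity
nearer  = absolute_disparity(1.2, -1.2)     # nearer point: nonzero absolute disparity
print(relative_disparity(nearer, fixated))  # 2.4 degrees of relative disparity
```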
horopter
Points from horopter land on both retinas with same azimuthal and elevation angles and same absolute disparities. These scene points have no relative disparity and so have single vision. Points not close to horopter have different absolute disparities, have relative disparity, and so have double vision. See Figure 2.
location
With eye fixation on far point between eyes and with eye convergence, if scene point is straight-ahead, between eyes, and nearer than fixation distance, point lands outside fovea, for both eyes. See Figure 3. For object closer than fixation plane, disparity is crossed {crossed disparity}, and optical focal point falls behind retina.
With eye fixation on close point between eyes and eye convergence, if scene point is straight-ahead, between eyes, and farther than fixation distance, point lands inside fovea, for both eyes. For object farther than fixation plane, disparity is uncrossed {uncrossed disparity}, and optical focal point falls in front of retina.
Two eyes can measure relative distance to point by retinal disparity. See Figure 4.
motion
Retinal disparity and motion change are equivalent perceptual problems, so finding distance from retinal disparity and finding lengths and shape from motion changes use similar techniques.
Eye focuses at a distance, through which passes a vertical plane {fixation plane} {plane of fixation}, perpendicular to sightline. From that plane's points, eye convergence can make right and left eye images almost correspond, with almost no disparity. From points in a circle {Vieth-Müller circle} in that plane, eye convergence can make right and left eye images have zero disparity.
After eye fixation on scene point and eye convergence, an imaginary sphere {horopter} passes through both eye lenses and fixation point. Points from horopter land on both retinas with same azimuthal and elevation angles and same absolute disparities. These scene points have no relative disparity and so have single vision.
Brain fuses scene features that are inside distance from horopter {Panum's fusion area} {Panum fusion area} {Panum's fusional area}, into one feature. Brain does not fuse scene features outside Panum's fusional area, but features still register in both eyes, so feature appears double.
Color varies in energy flow per unit area {intensity, vision}. Vision can detect very low intensity. People can see over ten-thousand-fold light intensity range. Vision is painful at high intensity.
sensitivity
People can perceive one-percent intensity differences. Sensitivity improves in dim light when using both eyes.
receptors
Not stimulating long-wavelength or middle-wavelength receptor reduces brightness. For example, extreme violets are less bright than other colors.
temporal integration
If light has constant intensity for less than 100 ms, brain perceives it as becoming less bright. If light has constant intensity for 100 ms to 300 ms, brain perceives it as becoming brighter. If light has constant intensity for longer than 300 ms, brain perceives it as maintaining same brightness.
unchanging image
After people view unchanging images for two or three seconds, image fades and becomes dark gray or black. If object contains sharp boundaries between highly contrasting areas, object reappears intermittently.
bleaching
Eyes blinded by bright light recover in 30 minutes, as eye chemicals become unbleached.
If stimulus lasts less than 0.1 second, brightness is product of intensity and duration {Bloch's law} {Bloch law}.
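A minimal sketch of Bloch's law, assuming brightness equals intensity times duration below a critical duration of about 0.1 second (names and the cutoff handling are illustrative assumptions):

```python
def bloch_brightness(intensity, duration_s, critical_duration_s=0.1):
    """Temporal integration below the critical duration; no further gain above it."""
    if duration_s < critical_duration_s:
        return intensity * duration_s
    return intensity * critical_duration_s

# A flash half as long but twice as intense looks equally bright.
print(bloch_brightness(2.0, 0.05) == bloch_brightness(1.0, 0.10))  # True
```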
Phenomenal brightness {brightness} {luminosity} relates to logarithm of total stimulus-intensity energy flux from all wavelengths. Surfaces that emit more lumens are brighter. On Munsell scale, brightness increases by 1.5 units if lumens double.
properties: reflectance
Surfaces that reflect different spectra but emit same number of lumens are equally bright.
properties: reflectivity
For spectral colors, brightness is logarithmic, not linear, with reflectivity.
factors: adaptation
Brightness depends on eye adaptation state. Parallel pathways calculate brightness. One pathway adapts to constant-intensity stimuli, and the other does not adapt. If two same-intensity flashes start at same time, briefer flash looks dimmer than longer flash. If two same-intensity flashes end at same time, briefer flash looks brighter than longer flash {temporal context effect} (Sejnowski). Visual system uses visual-stimulus timing and spatial context to calculate brightness.
factors: ambient light
Brightness is relative and depends on ambient light.
factors: color
Light colors change less, and dark colors change more, as source brightness increases. Light colors change less, and dark colors change more, as color saturation decreases.
factors: mental state
Brightness depends on mental state.
brightness control
Good brightness control increases all intensities by same amount. Consciousness cannot control brightness directly. Television Brightness control sets "picture" level by multiplying the input signal by a gain factor {gain, brightness}. If gain is too low, high-input signals have low intensity, and many low-input signals map to the same black. If gain is too high, low-input signals have high intensity, and many high-input signals map to the same white. Television Brightness control increases the ratio between black and white and so really changes contrast.
Detected light has difference between lowest and highest intensity {contrast, vision}.
contrast control
Good contrast control sets black to zero intensity while decreasing or increasing maximum intensity. Consciousness cannot control contrast directly. Television Contrast control sets "black level" by shifting the lowest intensity, which shifts the whole intensity scale. It offsets the input signal so the lowest input maps to zero intensity. If the offset is too low, the lower input signals all map to zero intensity. If the offset is too high, even the lowest input signal maps to greater than zero intensity. Television Contrast control changes all intensities by the same amount and so really changes brightness.
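A minimal sketch of the gain-versus-offset distinction drawn above, assuming a display that clips its output range (names and values are illustrative, not an actual television signal path):

```python
def display_intensity(signal, gain=1.0, black_level=0.0, max_out=1.0):
    """Apply gain (multiplicative) and black level (additive), then clip."""
    out = gain * signal + black_level
    return max(0.0, min(max_out, out))  # too-low values crush to black, too-high clip to white

signals = [0.0, 0.25, 0.5, 0.75, 1.0]
print([display_intensity(s, gain=1.5) for s in signals])          # raising gain stretches contrast
print([display_intensity(s, black_level=0.2) for s in signals])   # raising black level shifts brightness
```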
Mind can detect small intensity difference {contrast threshold} between light and dark surface area.
Larger objects have smaller contrast thresholds. Stimulus-size spatial frequency determines contrast-threshold reciprocal {contrast sensitivity function} (CSF). Contrast-threshold reciprocal is large when contrast threshold is small.
Visual system increases brightness contrast across edge {edge enhancement}, making lighter side lighter and darker side darker.
If eyes are still with no blinking, scene fades {fading} [Coppola and Purves, 1996] [Pritchard et al., 1960] [Tulunay-Keesey, 1982].
Human visual systems increase brightness contrast across edges, making lighter side lighter and darker side darker {Mach band}.
Leaving, arriving, or transmitted luminous flux in a direction divided by surface area {luminance}. Constant times sum over frequencies of spectral radiant energy times long-wavelength-cone and middle-wavelength-cone spectral-sensitivity functions [Autrum, 1979] [Segall et al., 1966]. Luminance relates to brightness. Lateral-geniculate-nucleus magnocellular-cell layers {luminance channel, LGN} measure luminance. Light power (radiance) and energy differ at different frequencies {spectral power distribution}, typically measured in 31 ranges 10 nm wide between 400 nm and 700 nm.
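A minimal sketch of the weighted-sum computation just described, assuming a flat test spectrum in 31 bands of 10 nm between 400 nm and 700 nm and a Gaussian stand-in for the spectral-sensitivity curve peaking near 555 nm (illustrative assumptions, not CIE tables):

```python
import math

wavelengths_nm = [400 + 10 * i for i in range(31)]   # 400, 410, ..., 700 nm
radiant_energy = [1.0] * len(wavelengths_nm)         # flat test spectrum per band
sensitivity    = [math.exp(-((w - 555) / 80.0) ** 2) for w in wavelengths_nm]

K = 1.0  # scaling constant, left at 1 to give a relative value
relative_luminance = K * sum(e * s for e, s in zip(radiant_energy, sensitivity))
print(round(relative_luminance, 3))
```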
Light {luminous flux} can shine with a spectrum of wavelengths.
Light sources {illuminant} shine light on observed surfaces.
Light {radiant flux} can emit or reflect with a spectrum of wavelengths.
Radiant flux in a direction divided by surface area {radiance}.
Radiant flux divided by surface area {irradiance}.
Brain can perceive motion {motion perception} {motion detector}. Motion analysis is independent of other visual analyses.
properties: adaptation
Motion detector neurons adapt quickly.
properties: direction
Most cortical motion-detector neurons detect motion direction.
properties: distance
Most cortical motion-detector neurons are for specific distance.
properties: fatigue
Motion-detector neurons can fatigue.
properties: location
Most cortical motion-detector neurons are for specific space direction.
properties: object size
Most cortical motion-detector neurons are for specific object spot or line size. To detect larger or smaller objects, motion-detector neurons have larger or smaller receptive fields.
properties: rotation
To have right and left requires asymmetry, such as dot or shape. In rotation of a symmetric object, one side appears to go backward while the other goes forward, so the whole object can appear to stand still.
properties: speed
Most cortical motion-detector neurons detect motion speed.
processes: brain
Area-V5 neurons detect different speed motions in different directions at different distances and locations for different object spot or line sizes. Motion detectors are for one direction, object size, distance, and speed relative to background. Other neurons detect expansion, contraction, and right or left rotation [Thier et al., 1999].
processes: frame
Spot motion from one place to another is like appearance at location and then appearance at another location. Spot must excite motion-detector neuron for that direction and distance.
processes: opposite motions
Motion detectors interact, so motion inhibits opposed motion, making motion contrasts. For example, motion in one direction excites motion detectors for that direction and inhibits motion detectors for opposite direction.
processes: retina image speed
Retinal radial-image speed relates to object distance.
processes: timing
Motion-detector-neuron comparison is not simultaneous addition but has delay or hold from first neuron to wait for second excitation. Delay can be long, with many intermediate neurons, far-apart neurons, or slow motion, or short, with one intermediate neuron, close neurons, or fast motion.
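A minimal sketch of the delay-and-compare scheme just described, in the style of a Reichardt correlator, assuming the first location's signal is delayed and multiplied with the second location's signal (inputs and delay are illustrative):

```python
def correlate_motion(first_signal, second_signal, delay_steps=1):
    """Sum over time of delayed first input times current second input."""
    response = 0.0
    for t in range(delay_steps, len(second_signal)):
        response += first_signal[t - delay_steps] * second_signal[t]
    return response

# A bright spot passing from the first location to the second, one time step
# apart, matches the delay and gives a strong response; the reverse does not.
first  = [0, 1, 0, 0, 0]
second = [0, 0, 1, 0, 0]
print(correlate_motion(first, second))  # preferred direction: 1.0
print(correlate_motion(second, first))  # opposite direction: 0.0
```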
processes: trajectory
Motion detectors work together to detect trajectory or measure distances, velocities, and accelerations. Higher-level neurons connect motion detection units to detect straight and curved motions (Werner Reichardt). As motion follows trajectory, memory shifts to predict future motions.
Animal species have movement patterns {biological motion}. Distinctive motion patterns, such as falling leaf, pouncing cat, and swooping bat, allow object recognition and future position prediction.
Vision can detect that surface is approaching eye {looming response}. Looming response helps control flying and mating.
For moving objects, eyes keep object on fovea, then fall behind, then jump to put object back on fovea {smooth pursuit}. Smooth pursuit is automatic. People cannot voluntarily use smooth pursuit. Smooth pursuit happens even if people have no sensations of moving objects [Thiele et al., 2002].
Three-month-old infants understand {Theory of Body} that when moving objects hit other objects, other objects move. Later, infants understand {Theory of Mind Mechanism} self-propelled motion and goals. Later, infants understand {Theory of Mind Mechanism-2} how mental states relate to behaviors. Primates can understand that acting on objects moves contacted objects.
Head or body movement causes scene retinal displacement. Nearer objects displace more, and farther objects displace less {motion parallax}| {movement parallax}. If eye moves to right while looking straight-ahead, objects appear to move to left. See Figure 1.
Nearer objects move greater visual angle. Farther objects move smaller visual angle and appear almost stationary. See Figure 2.
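A minimal sketch of why nearer objects sweep larger visual angles, assuming an observer translating sideways past stationary objects and the small-angle approximation that angular speed is observer speed divided by object distance (values are illustrative):

```python
def angular_speed_rad_per_s(observer_speed_m_s, object_distance_m):
    """Approximate retinal angular speed for a stationary object during sideways motion."""
    return observer_speed_m_s / object_distance_m

print(angular_speed_rad_per_s(1.5, 2.0))   # nearby object: 0.75 rad/s
print(angular_speed_rad_per_s(1.5, 50.0))  # distant object: 0.03 rad/s, nearly stationary
```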
movement sequence
Object sequence can change with movement. See Figure 3.
depth
Brain can use geometric information about two different positions at different times to calculate relative object depth. Brain can also use geometric information about two different positions at same time, using both eyes.
While observer is moving, nearer objects seem to move backwards while farther ones move in same direction as observer {monocular movement parallax}.
When viewing moving object through small opening, motion direction can be ambiguous {aperture problem}, because moving spot or two on-off spots can trigger motion detectors. Are both spots in window aperture same object? Motion detectors solve the problem by finding shortest-distance motion.
When people see objects, first at one location, then very short time later at another location, and do not see object anywhere between locations, first object seems to move smoothly to where second object appears {apparent motion}|.
Moving spot triggers motion detectors for two locations.
two locations and spot
How does brain associate two locations with one spot {correspondence problem, motion}? Brain follows spot from one location to next unambiguously. Tracking moving objects requires remembering earlier features and matching with current features. Vision can try all possible matches and, through successive iterations, find matches that yield minimum total distance between presentations.
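A minimal sketch of the minimum-total-distance matching just described, assuming brute force over all pairings of first-frame and second-frame spots (coordinates are illustrative; real vision would not enumerate permutations):

```python
from itertools import permutations

def best_correspondence(frame1, frame2):
    """Return the pairing of frame1 spots to frame2 spots with the smallest summed displacement."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(frame2))):
        cost = sum(dist(frame1[i], frame2[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost

spots_t1 = [(0, 0), (5, 0)]
spots_t2 = [(1, 0), (6, 0)]  # both spots shifted one unit to the right
print(best_correspondence(spots_t1, spots_t2))  # ((0, 1), 2.0): each spot matched to its shifted self
```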
location and spot
Turning one spot on and off can trigger same motion detector. How does brain associate detector activation at different times with one spot? Brain assumes same location is same object.
processes: three-dimensional space
Motion detectors are for specific locations, distances, object sizes, speeds, and directions. Motion-detector array represents three-dimensional space. Space points have spot-size motion detectors.
processes: speed
Brain action pathway is faster than object-recognition pathway. Brain calculates eye movements faster than voluntary movements.
constraints: continuity constraint
Adjacent points not at edges are at same distance from eye {continuity constraint, vision}.
constraints: uniqueness constraint
Scene features land on one retinal location {uniqueness constraint, vision}.
constraints: spatial frequency
Scene features have different left-retina and right-retina positions. Retina can use low resolution, with low spatial frequency, to analyze big regions and then use higher and higher resolutions.
If an image or light spot appears on a screen and then a second image appears 0.06 seconds later at a randomly different location, people perceive motion from first location to second location {phi phenomenon}. If an image or light spot blinks on and off slowly and then a second image appears at a different location, people see motion. If a green spot blinks on and off slowly and then a red spot appears at a different location, people see motion, and dot appears to change color halfway between locations.
Objects {luminance-defined object}, for example bright spots, can contrast in brightness with background. People see luminance-defined objects move by mechanism that differs from texture-defined object-movement mechanism. Luminance-defined objects have defined edges.
Objects {texture-defined object} {contrast-defined object} can contrast in texture with background. People see texture-defined objects move by a mechanism that differs from the luminance-defined object-movement mechanism. Contrast changes in patterned ways, with no defined edges.
Luminance changes indicate motion {first-order motion}.
Contrast and texture changes indicate motion {second-order motion}.
Incoming visual information is continuous flow {visual flow}| {optical flow, vision} {optic flow} that brain can analyze for constancies, gradients, motion, and static properties. As head or body moves, the eyes move through a stationary environment, producing flow. Optical flow reveals whether one is in motion or not. Optical flow reveals planar surfaces. Optical flow is texture movement across the eye as animals move.
Optic flow has a point {focus of expansion} (FOE) {expansion focus} where horizon meets motion-direction line. All visual features seem to come out of this straight-ahead point as observer moves closer, making radial movement pattern {radial expansion} [Gibson, 1966] [Gibson, 1979].
Optic flow has information {tau, optic flow} that signals how long until something hits people {time to collision} (TTC) {collision time}. Tau is the ratio between retinal-image size and retinal-image-size expansion rate. Under constant approach speed, tau approximately equals time to collision.
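A minimal sketch of the tau relation above (hypothetical image sizes in degrees; it assumes constant approach speed and small angles):

```python
# Minimal sketch: estimate time-to-collision from two retinal-image size
# samples, using tau = image size / image expansion rate.
def tau_estimate(size_t1, size_t2, dt):
    """Time-to-collision estimate from image size (visual angle in degrees)
    sampled dt seconds apart."""
    expansion_rate = (size_t2 - size_t1) / dt      # degrees per second
    return size_t2 / expansion_rate                # seconds until contact

# An approaching object's image grows from 2.0 to 2.2 degrees in 0.1 s (hypothetical).
print(tau_estimate(2.0, 2.2, 0.1))  # ~1.1 s to collision at constant speed
```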
Mammals can throw and catch {Throwing and Catching}.
Animal Motions
Animals can move in direction, change direction, turn around, and wiggle. Animals can move faster or slower. Animals move over horizontal ground, climb up and down, jump up and down, swim, dive, and fly.
Predators and Prey
Predators typically intercept moving prey, trying to minimize separation. In reptiles, optic tectum controls visual-orientation movements used in prey-catching behaviors. Prey typically runs away from predators, trying to maximize separation. Animals must account for accelerations and decelerations.
Gravity and Motions
Animals must account for gravity as they move and catch. Some hawks free-fall straight down to surprise prey. Seals can catch thrown balls and can throw balls to targets. Dogs can catch thrown balls and floating frisbees. Cats raise themselves on hind legs to trap or bat thrown-or-bouncing balls with front paws.
Mammal Brain
Neocortex occurs only in mammals; reticular formation and hippocampus are most elaborated in mammals but have homologs in other vertebrates. Mammal superior colliculus can integrate multisensory information at same spatial location [O'Regan and Noë, 2001]. In mammals, dorsal vision pathway indicates object locations, tracks unconscious motor activity, and guides conscious actions [Bridgeman et al., 1979] [Rossetti and Pisella, 2002] [Ungerleider and Mishkin, 1982] [Yabuta et al., 2001] [Yamagishi et al., 2001].
Allocentric Space
Mammal dorsal visual system converts spatial properties from retinotopic coordinates to spatiotopic coordinates. Using stationary three-dimensional space as a fixed reference frame simplifies the perceptual variables needed to represent trajectories. Most motions are two-dimensional rather than three-dimensional. A fixed reference frame separates gravity effects from internally generated motions. Internally generated motions are straight-line motions, rather than curved motions.
Human Throwing and Shooting
Among mammals, primates throw best, because they can stand upright and have suitable arms and hands. From 45,000 to 35,000 years ago, Homo sapiens and Neanderthal Middle-Paleolithic hunter-gatherers cut and used wooden spears. From 15,000 years ago, Homo sapiens Upper Paleolithic hunter-gatherers cut and used wooden arrows, bows, and spear-throwers. Human hunter-gatherers threw and shot over long trajectories.
Human Catching
Geometric Invariants: Humans can catch objects traveling over long trajectories. Dogs and humans use invariant geometric properties to intercept moving objects.
Trajectory Prediction: To catch baseballs, eyes follow ball while people move toward position where hand can reach ball. In the trajectory prediction strategy [Saxberg, 1987], fielder perceives ball initial direction, velocity, and perhaps acceleration, then computes trajectory and moves straight to where hand can reach ball.
Acceleration Cancellation: When catching ball coming towards him or her, fielder must run under ball so ball appears to move upward at constant speed. In the optical-acceleration-cancellation hypothesis [Chapman, 1968], fielder motion toward or away from ball cancels the ball's perceived vertical acceleration, making constant upward speed. If ball appears to vertically accelerate, it will land behind the fielder. If it appears to vertically decelerate, it will land in front of the fielder. Ball rises until caught, because baseball stays above horizon, far objects appear near horizon, and near objects appear high above horizon.
Transverse Motion: Fielder controls transverse motion independently of radial motion. When catching ball toward right or left, fielder moves transversely to ball path, holding ball-direction and fielder-direction angle constant.
Linear Trajectory: In linear optical trajectory [McBeath et al., 1995], when catching ball to left or right, fielder runs in a curve toward ball, so ball rises in optical height, not to right or left. Catchable balls appear to go straight. Short balls appear to curve downward. Long balls appear to curve upward. Ratio between ball elevation and azimuth angles stays constant. Fielder coordinates transverse and radial motions. Linear optical trajectory is similar to simple predator-tracking perceptions. Dogs use the linear optical trajectory method to catch frisbees [Shaffer et al., 2004].
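A minimal sketch of the constant-ratio idea in the linear-optical-trajectory account above (hypothetical angle samples and tolerance, not fielder data): if the elevation-to-azimuth ratio stays roughly constant, the optical trajectory is approximately straight and the ball is on a catchable path.

```python
# Minimal sketch: check whether the ball's optical elevation-to-azimuth ratio
# stays roughly constant over samples, i.e. whether the optical trajectory
# is approximately a straight line.
def lot_ratio_is_constant(elevation_deg, azimuth_deg, tolerance=0.05):
    ratios = [e / a for e, a in zip(elevation_deg, azimuth_deg) if a != 0]
    return max(ratios) - min(ratios) <= tolerance * ratios[0]

# Hypothetical samples: elevation and azimuth both grow in proportion.
elevation = [2.0, 4.1, 6.0, 8.1]
azimuth   = [1.0, 2.0, 3.0, 4.0]
print(lot_ratio_is_constant(elevation, azimuth))  # True: ball looks catchable
```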
Optical Acceleration: Plotting optical-angle tangent changes over time, fielders appear to use optical-acceleration information to catch balls [McLeod et al., 2001]. However, optical trajectories mix fielder motions and ball motions.
Perceptual Invariants: Optical-trajectory features can be invariant with respect to fielder motions. Fielders catch fly balls by controlling ball-trajectory perceptions, such as lateral displacement, rather than by choosing how to move [Marken, 2005].
Brain can count {number perception}. Number perception can relate to time-interval measurement, because both measure number of units [Dehaene, 1997].
Number perception can add energy units to make sum {accumulator model} [Dehaene, 1997].
Number perception can associate objects with ordered-symbol list {numeron list model} [Dehaene, 1997].
Number perception can use mental images in arrays, so objects are separate {object file model} [Dehaene, 1997].
Vision detects smallest visual angle {visual acuity} {acuity, vision}.
If receptors sample too few grating lines {undersampling}, people perceive grating spatial frequency incorrectly {aliasing}.
Visual angles land on retinal areas, which send to larger visual-cortex surface areas {cortical magnification}.
Twenty-twenty vision {twenty-twenty} means that people can see at 20 feet what normal-vision people can see at 20 feet. In contrast, 20/40 vision means that people see at 20 feet only what normal-vision people can see at 40 feet.
Scene features have diameter, whose ends define rays that go to eye-lens center to form angle {visual angle}.
Visual perceptual processes can detect local surface properties {surface texture} {texture perception} [Rogers and Collett, 1989] [Yin et al., 1997].
surface texture
Surface textures are point and line patterns, with densities, locations, orientations, and gradients. Surface textures have point and line spatial frequencies [Bergen and Adelson, 1988] [Bülthoff et al., 2002] [Julesz, 1981] [Julesz, 1987] [Julesz and Schumer, 1981] [Lederman et al., 1986] [Malik and Perona, 1990].
occipital lobe
Occipital-lobe complex and hypercomplex cells detect points, lines, surfaces, line orientations, densities, and gradients and send to neuron assemblies that detect point and line spatial frequencies [DeValois and DeValois, 1988] [Hubel and Wiesel, 1959] [Hubel and Wiesel, 1962] [Hubel, 1988] [Livingstone, 1998] [Spillman and Werner, 1990] [Wandell, 1995] [Wilson et al., 1990].
similar statistics
Similar surface textures have similar point and line spatial frequencies and first-order and second-order statistics [Julesz and Miller, 1962].
gradients
Texture gradients are proportional to surface slant, surface tilt, object size, object motion, shape constancy, surface smoothness, and reflectance.
gradients: object
Constant texture gradient indicates one object. Similar texture patterns indicate same surface region.
gradients: texture segmentation
Brain can use texture differences to separate surface regions.
speed
Brain detects many targets rapidly and simultaneously to select and warn about approaching objects. Brain can detect textural changes in less than 150 milliseconds, before attention begins.
machine
Surface-texture detection can use point and line features, such as corner detection, scale-invariant features (SIFT), and speeded-up robust features (SURF) [Wolfe and Bennett, 1997]. For example, in computer vision, the Gradient Location-Orientation Histogram (GLOH) SIFT descriptor uses radial grid locations and gradient angles, then finds principal components, to distinguish surface textures [Mikolajczyk and Schmid, 2005].
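As a much-simplified, hypothetical stand-in for such descriptors (not the published SIFT, SURF, or GLOH algorithms), the sketch below characterizes an image patch by a magnitude-weighted histogram of gradient orientations, so patches with similar texture yield similar descriptors.

```python
# Minimal sketch: orientation-histogram texture descriptor.
import numpy as np

def orientation_histogram(patch, bins=8):
    gy, gx = np.gradient(patch.astype(float))        # gradients along rows, columns
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)                  # -pi to pi
    hist, _ = np.histogram(orientation, bins=bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist        # normalized descriptor

# Hypothetical patches: vertical stripes versus horizontal stripes.
vertical = np.tile([0.0, 0.0, 1.0, 1.0], (8, 2))      # 8x8, stripes along columns
horizontal = vertical.T
print(orientation_histogram(vertical))                # energy near 0 and pi
print(orientation_histogram(horizontal))              # energy near +/- pi/2
```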
Surfaces have small regular repeating units {texel}.
Texture perception uses three local-feature types {texton}: elongated blobs {line segment, texton}, blob ends {end-point}, and blob crossings {texture, texton}. Visual-cortex simple and complex cells detect elongated blobs, terminators, and crossings.
search
Texture perception searches in parallel for texton type and density changes.
attention
Texture discrimination precedes attention.
For texton changes, brain calls attention processes.
similarity
If elongated blobs are the same and blob terminators total the same number, textures appear the same.
statistics
Brain uses first-order texton statistics, such as texton type changes and density gradients, in texture perception.
Retina reference frame and object reference frame must match {viewpoint consistency constraint}.
Visual features can stay the same when observation point changes {viewpoint-invariance, vision}. Brain stores such features for visual recognition.
People have a reference point {visual egocenter} {egocenter, vision} on line passing through nosebridge and head center, for specifying locations and directions.
Brain first processes basic features {early vision}, then prepares to recognize objects and understand scenes, then recognizes objects and understands scenes.
Brain first processes basic features, then prepares to recognize objects and understand scenes {middle vision} {midlevel vision}, then recognizes objects and understands scenes.
Brain first processes basic features, then prepares to recognize objects and understand scenes, then recognizes objects and understands scenes {high-level vision}.
People can distinguish 150 to 200 main colors and seven million different colors {vision, color} {color vision}, by representing the light intensity-frequency spectrum and separating it into categories.
color: spectrum
Colors range continuously from red to scarlet, vermilion, orange, yellow, chartreuse, green, spring green, cyan, turquoise, blue, indigo (ultramarine), violet, magenta, crimson, and back to red. Scarlet is red with some orange. Vermilion is half red and half orange. Chartreuse is half yellow and half green. Cyan is half green and half blue. Turquoise is blue with some green. Indigo is blue with some red. Violet is blue with more red. Magenta is half blue and half red. Crimson is red with some blue.
color: definition
Blue, green, and yellow have definite wavelengths at which they are pure, with no other colors. Red has no definite wavelength at which it is pure. Red excites mainly long-wavelength receptor. Yellow is at long-wavelength-receptor maximum-sensitivity wavelength. Green is at middle-wavelength-receptor maximum-sensitivity wavelength. Blue is at short-wavelength-receptor maximum-sensitivity wavelength.
color: similarities
Similar colors have similar average light-wave frequencies. Colors with more dissimilar average light-wave frequencies are more different.
color: opposites
Complementary colors are opposite colors, and white and black are opposites.
color: animals
Primates have three cone types. Non-mammal vertebrates have one cone type, have no color opponent process, and detect colors from violets to reds, with poorer discrimination than mammals.
Mammals have two cone types. Mammals have short-wavelength receptor and long-wavelength receptor. For example, dogs have receptor with maximum sensitivity at 429 nm, which is blue for people, and receptor with maximum sensitivity at 555 nm, which is yellow-green for people. Mammals can detect colors from violets to reds, with poorer discrimination than people.
With two cone types, mammals have only one color opponency, yellow-blue. Perhaps, mammals cannot see phenomenal colors because color sensations require two opponent processes.
nature: individuality
People's vision processes are similar, so everyone's vision perceptions are similar. All people see the same color spectrum, with the same colors and color sequence. Colorblind people have consistent but incomplete spectra.
nature: objects
Colors are surface properties and are not essential to object identity.
nature: perception
Colors are not symmetric, so colors have unique relations. Colors cannot substitute for one another. Colors relate in only one consistent and complete way, and can mix in only one consistent and complete way.
nature: subjective
No surface or object physical property corresponds to color. Color depends on source illumination and surface reflectance and so is subjective, not objective.
nature: irreducibility
Matter and energy cannot cause color, though experience highly correlates with physical quantities. Light is only electromagnetic waves.
processes: coloring
Three coloring methods are coloring points, coloring areas, or using separate color overlays. Mind colors areas, not points or overlays, because area coloring is discrete and efficient.
processes: edge enhancement
Adjacent colors enhance their contrast by adding each color's complementary color to the other color. Adjacent black and white also have enhanced contrast.
processes: timing
Different color-receptor-system time constants cause color.
processes: precision
People can detect smaller wavelength differences between 500 nm and 600 nm than above 600 nm or below 500 nm, because two cones have maximum sensitivities within that range.
physical: energy and color
Long-wavelength photons have less energy, and short-wavelength photons have more energy, because photon energy relates directly to frequency.
physical: photons
Photons have emissions, absorptions, vibrations, reflections, and transmissions.
physical: reflectance
Color depends on both illumination and surface reflectance [Land, 1977]. Comparing surface reflective properties to other or remembered surface reflective properties results in color.
physical: scattering
Blue light has shorter wavelength and has more refraction and scattering by atoms.
Long-wavelength and medium-wavelength cones have similar wavelength-sensitivity maxima, so scattering and refraction are similar for both. Fovea has no short-wavelength cones, giving better spatial precision.
mixing
Colors from light sources cannot add to make red or to make blue. Colors from pigment reflections cannot add to make red or to make blue.
properties: alerting and calming colors
Psychologically, red is alerting color. Green is neutral color. Blue is calming color.
properties: contraction and expansion by color
Blue objects appear to go farther away and expand, and red objects appear to come closer and contract, because reds appear lighter and blues darker.
properties: color depth
Color can have shallow or deep depth. Yellow is shallow. Green is medium deep. Blue and red are deep.
Perhaps, depth relates to color opponent processes. Red and blue mainly excite one receptor. Yellow and green mainly excite two receptors. Yellow mixes red and green. Green mixes blue and yellow.
properties: light and dark colors
Yellow is the brightest color, comparable to white. In both directions from yellow, darkness grows. Colors darken from yellow toward red. Colors darken from yellow toward green and blue. Green is lighter than blue, which is comparable to black.
properties: sad and glad
Dark colors are sad and light colors are glad, because dark colors are less bright and light colors are more bright.
properties: warm and cool colors
Colors can be relatively warm or cool. Black-body-radiator spectra center on red near 3000 K, white near 5000 K to 6500 K, and blue above 7000 K. Light sources have radiation surface temperature {color temperature} comparable to black-body-radiator surface temperature. However, people call blue cool and red warm, perhaps because water and ice are blue and fires are red, and reds seem to have higher energy output. Warm pigments have more saturation and are lighter than cool pigments. White, gray, and black, as color mixtures, have no net temperature.
properties: hue change
Colors respond differently as hue changes. Reds and blues change more slowly than greens and yellows.
factors
Colors change with illumination intensity, illumination spectrum, background surface, adjacent surface, distance, and viewing angle. Different people vary in what they perceive as unique yellow, unique green, and unique blue. The same person varies in what they perceive as unique yellow, unique green, and unique blue.
realism and subjectivism
Perhaps, color relates to physical objects, events, or properties {color realism} {color objectivism}. Perhaps, color is identical to a physical property {color physicalism}, such as surface spectral reflectance distribution {reflectance physicalism}. Perhaps, colors are independent of subject and condition. Mental processes allow access to physical colors.
Perhaps, colors depend on subject and physical conditions {color relationism} {color relativism}.
Perhaps, things have no color {color eliminativism}, and color is only in mind. Perhaps, colors are mental properties, events, or processes {color subjectivism}. Perhaps, colors are mental properties of mental objects {sense-datum, color}. Perhaps, colors are perceiver mental processes or events {adverbialism, color}. Perhaps, humans perceive real properties that cause phenomenal color. Perhaps, colors are only things that dispose mind to see color {color dispositionalism}. Perhaps, colors depend on action {color enactivism}. Perhaps, colors depend on natural selection requirements {color selectionism}. Perhaps, colors depend on required functions {color functionalism}. Perhaps, colors represent physical properties {color representationalism}. Perhaps, experience has color content {color intentionalism}, which provides information about surface color.
Perhaps, humans know colors, essentially, by experiencing them {doctrine of acquaintance}, though they can also learn information about colors.
Perhaps, colors are identical to mental properties that correspond to color categories {corresponding category constraint}.
Properties {determinable property} can be about categories, such as blue. Properties {determinate property} can be about specific things, such as unique blue, which has no red or green.
Perhaps, there are color illusions due to illumination intensity, illumination spectrum, background surface, adjacent surface, distance, and viewing angle. Human color processing does not always proceed the same way or give the same result. Color names and categories have some correspondence across other animals, infants, and cultures, but vary among scientific observers and by introspection.
How can colors be in mind but appear in space? Subjectivism cannot account for the visual field. Objectivism cannot account for the color facts.
Differences among objective object and physical properties, subjective color processing, and relations among surfaces, illumination, background, viewing angle and distance do not explain perceived color differences {explanatory gap, color}.
White, gray, and black have no hue {achromatic} and have color purity zero.
Color can have no definite depth {aperture color}, such as at a hole in a screen.
If eyes completely adapt to dark, people see gray {brain gray} {eigengrau}.
Each opponent system has a relative response for each wavelength {chromatic-response curve}. The brightness-darkness system has maximum response at 560 nm and is symmetric between 500 nm and 650 nm. The red-green system has maximum response at 610 nm and minimum response at 530 nm and is symmetric between 590 nm and 630 nm and between 490 nm and 560 nm. The blue-yellow system has maximum response at 540 nm and minimum response at 430 nm and is symmetric between 520 nm and 560 nm and between 410 nm and 450 nm.
Sight tries to keep surface colors constant {color constancy}. Lower luminance makes more red or green, because that affects red-green opponency more. Higher luminance makes more yellow or blue, because that affects blue-yellow opponency more.
Light polarization can affect sight slightly {Haidinger brush}.
Color relates directly to electromagnetic wave frequency {color, frequency} and intensity.
frequency
Light waves that humans can see have frequencies between 420 and 790 million million cycles per second, that is, 420 to 790 teraHertz (THz). Frequency is light speed, 3.00 x 10^8 m/s, divided by wavelength. Vision can detect about one octave of light frequencies.
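A minimal sketch of the conversion (frequency equals light speed divided by wavelength), with wavelengths in nanometers and output in terahertz:

```python
# Minimal sketch: wavelength-to-frequency conversion for visible light.
C = 2.998e8   # light speed in meters per second

def wavelength_nm_to_thz(wavelength_nm):
    return C / (wavelength_nm * 1e-9) / 1e12    # hertz converted to terahertz

print(wavelength_nm_to_thz(700))   # ~428 THz, a red
print(wavelength_nm_to_thz(400))   # ~750 THz, a violet
```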
frequency ranges
Red light has frequency range 420 THz to 480 THz. Orange light has frequency range 480 THz to 510 THz. Yellow light has frequency range 510 THz to 540 THz. Green light has frequency range 540 THz to 600 THz. Blue light has frequency range 600 THz to 690 THz. Indigo or ultramarine light has frequency range 690 THz to 715 THz. Violet light has frequency range 715 THz to 790 THz. Colors differ in frequency range and in range compared to average wavelength. Range is greater and higher percentage for longer wavelengths.
Reds have widest range. Red goes from near-infrared 720 nm to red-orange 625 nm, a range of 95 nm. 95 nm/683 nm = 14%. Reds have more spread and less definition.
Greens have narrower range. Green goes from chartreuse 560 nm to cyan 500 nm, a range of 60 nm. 60 nm/543 nm = 11%.
Blues have narrowest range. Blue goes from cyan 480 nm to indigo or ultramarine 440 nm, a range of 40 nm. 40 nm/463 nm = 9%. Blues have less spread and more definition.
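A minimal sketch of the range arithmetic above, using the band limits and maximum-purity wavelengths given in the text:

```python
# Minimal sketch: each band's wavelength spread as a fraction of its
# maximum-purity wavelength.
bands = {             # (long end nm, short end nm, maximum-purity nm)
    "red":   (720, 625, 683),
    "green": (560, 500, 543),
    "blue":  (480, 440, 463),
}
for name, (long_nm, short_nm, pure_nm) in bands.items():
    spread = long_nm - short_nm
    print(f"{name}: {spread} nm, {100 * spread / pure_nm:.1f}%")
# red: 95 nm, 13.9%   green: 60 nm, 11.0%   blue: 40 nm, 8.6%
```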
wavelength ranges
Spectral colors have wavelength ranges: red = 720 nm to 625 nm, orange = 625 nm to 590 nm, yellow = 590 nm to 575 nm, chartreuse = 575 nm to 555 nm, green = 555 nm to 520 nm, cyan = 520 nm to 480 nm, blue = 480 nm to 440 nm, indigo or ultramarine = 440 nm to 420 nm, and violet = 420 nm to 380 nm.
maximum purity frequency
Spectral colors have maximum purity at specific frequencies: red = 436 THz, orange = 497 THz, yellow = 518 THz, chartreuse = 539 THz, green = 556 THz, cyan = 604 THz, blue = 652 THz, indigo or ultramarine = 694 THz, and violet = 740 THz.
maximum purity wavelengths
Spectral colors have maximum purity at specific wavelengths: red = 683 nm, orange = 608 nm, yellow = 583 nm, chartreuse = 560 nm, green = 543 nm, cyan = 500 nm, blue = 463 nm, indigo or ultramarine = 435 nm, and violet = 408 nm. See Figure 1. Magenta is not spectral color but is red-violet, so assume wavelength is 730 nm or 375 nm.
maximum sensitivity wavelengths
Blue is most sensitive at 482 nm, where it just turned blue from greenish-blue. Green is most sensitive at 506 nm, at middle. Yellow is most sensitive at 568 nm, just after greenish-yellow. Red is most sensitive at 680 nm, at middle red.
color-wavelength symmetry
Colors are symmetric around middle of long-wavelength and middle-wavelength receptor maximum-sensitivity wavelengths 550 nm and 530 nm. Wavelength 543 nm has green color. Chartreuse, yellow, orange, and red are on one side. Cyan, blue, indigo or ultramarine, and violet are on other side. Yellow is 583 - 543 = 40 nm from middle. Orange is 608 - 543 = 65 nm from middle. Red is 683 - 543 = 140 nm from middle. Blue is 543 - 463 = 80 nm from middle. Indigo or ultramarine is 543 - 435 = 108 nm from middle. Violet is 543 - 408 = 135 nm from middle.
Cone outputs can subtract and add {opponency} {color opponent process} {opponent color theory} {tetrachromatic theory}.
red-green opponency
Middle-wavelength cone output subtracts from long-wavelength cone output, L - M, to detect blue, green, yellow, orange, pink, and red. Maximum is at red, and minimum is at blue. See Figure 1. Hue calculation is in lateral geniculate nucleus, using neurons with center and surround. Center detects long-wavelengths, and surround detects medium-wavelengths.
blue-yellow opponency
Short-wavelength cone output subtracts from long-wavelength plus middle-wavelength cone output, (L + M) - S, to detect violet, indigo or ultramarine, blue, cyan, green, yellow, and red. Maximum is at chartreuse, minimum is at violet, and red is another minimum. See Figure 1. Saturation calculation is in lateral geniculate nucleus, using neurons with center and surround. Luminance output goes to center, and surround detects short-wavelengths [Hardin, 1988] [Hurvich, 1981] [Katz, 1911] [Lee and Valberg, 1991].
brightness
Long-wavelength and middle-wavelength cones add to detect luminance brightness: L + M. See Figure 1. Short-wavelength cones are few. Luminance calculation is in lateral geniculate nucleus, using neurons with center and surround. Center detects long-wavelengths, and surround detects negative of medium-wavelengths. Brain uses luminance to find edges and motions.
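A minimal sketch of the three channels above, using hypothetical normalized cone responses L, M, and S; the values are illustrative, not physiological measurements.

```python
# Minimal sketch: the three opponent channels from cone responses (0 to 1).
def opponent_channels(L, M, S):
    red_green = L - M            # positive toward red, negative toward green/blue
    blue_yellow = (L + M) - S    # positive toward yellow, negative toward blue/violet
    luminance = L + M            # brightness signal used for edges and motion
    return red_green, blue_yellow, luminance

# A long-wavelength (reddish) light excites L strongly, M weakly, S barely.
print(opponent_channels(0.9, 0.3, 0.05))   # approximately (0.6, 1.15, 1.2)
# A short-wavelength (bluish) light excites S strongly.
print(opponent_channels(0.1, 0.2, 0.9))    # approximately (-0.1, -0.6, 0.3)
```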
neutral point
When positive and negative contributions are equal, opponent-color processes can give no signal {neutral point}. For the L - M opponent process, red and cyan are complementary colors and mix to make white. For the L + M - S opponent process, blue and yellow are complementary colors and mix to make white. The L + M sense process has no neutral point.
color and cones
Red affects long-wavelength some. Orange affects long-wavelength well. Yellow affects long-wavelength most. Green affects middle-wavelength most. Blue affects short-wavelength most.
Indigo or ultramarine, because it has blue and some red, affects long-wavelength and short-wavelength. Violet, because it has blue and more red, affects long-wavelength more and short-wavelength less. Magenta, because it has half red and half blue, affects long-wavelength and short-wavelength equally. See Figure 1.
White, gray, and black affect long-wavelength receptor and middle-wavelength receptor equally, and long-wavelength receptor plus middle-wavelength receptor and short-wavelength receptor equally. See Figure 1. Complementary colors add to make white, gray, or black.
color and opponencies
For red, L - M is maximum, and L + M - S is maximum. For orange, L - M is positive, and L + M - S is maximum. For yellow, L - M is half, and L + M - S is maximum. For green, L - M is zero, and L + M - S is zero. For blue, L - M is minimum, and L + M - S is minimum. For magenta, L - M is half, and L + M - S is half.
saturation
Adding white, to make more unsaturation, decreases L - M values and increases L + M - S values. See Figure 1.
evolution
For people to see color, the three primate cone receptors must be maximally sensitive at blue, green, and yellow-green; these overlapping sensitivities let opponency determine colors and give color complementarity. The three cones do not have maximum sensitivity at red, green, and blue, because each sensor would then stand for one main color, and the system would have no complementary colors. Such a system would have no useful opponency, because its opponencies would have ambiguous ratios and ambiguous colors.
Photoreceptors can have the same output {univariance problem} {problem of univariance} {univariance principle} {principle of univariance} for an infinite number of stimulus frequency-intensity combinations. Different photon wavelengths have different absorption probabilities, from 0% to 10%. Higher-intensity low-probability wavelengths can make same total absorption as lower-intensity high-probability wavelengths. For example, if frequency A has probability 1% and intensity 2, and frequency B has probability 2% and intensity 1, total absorption is same.
Photon absorption causes one photoreceptor molecule to isomerize. Isomerization reactions are the same for all stimulus frequencies and intensities. Higher intensity increases number of reactions.
Color-vision systems have one or more receptor types, each able to absorb a percentage of quanta at each wavelength {wavelength mixture space}. For all receptor types, different wavelength and intensity combinations can result in same output.
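A minimal sketch of the univariance point, using the hypothetical frequency-A and frequency-B numbers from the text: receptor output depends only on total absorptions, the product of absorption probability and intensity.

```python
# Minimal sketch: one receptor's output is probability x intensity, so
# different wavelength-intensity pairs can be indistinguishable.
def receptor_output(absorption_probability, intensity):
    return absorption_probability * intensity

print(receptor_output(0.01, 2.0))   # frequency A -> 0.02
print(receptor_output(0.02, 1.0))   # frequency B -> 0.02, same receptor output
```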
Colors {colors} {color, categories} are distinguishable.
The eleven fundamental color categories are white, black, red, green, blue, orange, yellow, pink, brown, purple (violet), and gray [Byrne and Hilbert, 1997] [Wallach, 1963].
major and minor colors
Major colors are red, yellow, green, and blue. Yellow is red and green. Green is yellow and blue. Minor colors are orange, chartreuse, cyan, and magenta. Orange is red and yellow. Chartreuse is yellow and green {chartreuse, color mixture}. Cyan is green and blue {cyan, color mixture}. Magenta is red and blue. Halftones are between major and minor color categories: red-orange {vermilion, color mixture}, orange-yellow, yellow-chartreuse, chartreuse-green, green-cyan {spring green, color mixture}, cyan-blue {turquoise, color mixture}, blue-violet {indigo, color mixture} {ultramarine, color mixture}, indigo-magenta or blue-magenta {violet, color mixture}, and magenta-red {crimson, color mixture}.
white
White is relatively higher in brightness than adjacent surfaces. Adding white to color makes color lighter. However, increasing colored-light intensity does not make white.
white: intensity
When light is too dim for cones, people see whites, grays, and blacks. When light is intense enough for cones, people see whites, grays, and blacks if no color predominates.
white: complementary colors
Spectral colors have complementary colors. Color and complementary color mix to make white, gray, or black. Two spectral colors mix to make intermediate color, which has a complementary color. Mixing two spectral colors and intermediate-color complementary color makes white, gray, or black.
black
Black is relatively lower in brightness than adjacent surfaces. Black is not absence of visual sense qualities but is a color.
gray
Gray is relatively the same brightness as adjacent surfaces.
red
Red light is absence of blue and green, and so is absence of cyan, its additive complementary color. Red pigment is absence of green, its subtractive complementary color.
red: purity
Spectral red cannot be a mixture of other colors. Pigment red cannot be a mixture of other colors.
red: properties
Red is alerting color. Red is warm color, not cool color. Red is light color.
red: mixing
Red mixes with white to make pink.
Spectral red blends with spectral cyan to make white. Pigment red blends with pigment green to make black. Spectral red blends with spectral yellow to make orange. Pigment red blends with pigment yellow to make brown. Spectral red blends with spectral blue or violet to make purples. Pigment red blends with pigment blue or violet to make purples.
red: distance
People do not see red as well at farther distances.
red: retina
People do not see red as well at visual periphery.
red: range
Red has widest color range because reds have longest wavelengths and largest frequency range.
red: intensity
Red can fade in intensity to brown then black.
red: evolution
Perhaps, red evolved to discriminate food.
blue
Blue light is absence of red and green, so blue is absence of yellow, its additive complementary color. Blue pigment is absence of red and green, so blue is absence of orange, its subtractive complementary color.
blue: purity
Spectral blue cannot be a mixture of other colors. Pigment blue cannot be a mixture of other colors.
blue: properties
Blue is calming color. Blue is cool color, not warm color. Blue is dark color.
blue: mixing
Blue mixes with white to make pastel blue.
Spectral blue blends with spectral yellow to make white. Pigment blue blends with pigment yellow to make black. Spectral blue blends with spectral green to make cyan. Pigment blue blends with pigment green to make dark blue-green. Spectral blue blends with spectral red to make purples. Pigment blue blends with pigment red to make purples.
blue: distance
People see blue well at farther distances.
blue: retina
People see blue well at visual periphery.
blue: range
Blue has narrow wavelength range.
blue: evolution
Perhaps, blue evolved to tell when sky is changing or to see certain objects against sky.
blue: saturation
Teal is less saturated cyan.
green
Green light is absence of red and blue, and so is absence of magenta, its additive complementary color. Green pigment is absence of red, its subtractive complementary color.
green: purity
Spectral green can mix blue and yellow. Pigment green can mix blue and yellow.
green: properties
Green is neutral color in alertness. Green is cool color. Green is light color.
green: mixing
Green mixes with white to make pastel green.
Spectral green blends with spectral magenta to make white. Pigment green blends with pigment magenta to make black. Spectral green blends with spectral orange to make yellow. Pigment green blends with pigment orange to make brown. Spectral green blends with spectral blue to make cyan. Pigment green blends with pigment blue to make dark blue-green.
green: distance
People see green moderately well at farther distances.
green: retina
People do not see green well at visual periphery.
green: range
Green has wide wavelength range.
green: evolution
Perhaps, green evolved to discriminate fruit and vegetable ripening.
yellow
Yellow light is absence of blue, because blue is its additive complementary color. Yellow pigment is absence of indigo or violet, its subtractive complementary color.
yellow: purity
Spectral yellow can mix red and green. Pigment yellow cannot be a mixture of other colors.
yellow: properties
Yellow is neutral color in alertness. Yellow is warm color. Yellow is light color.
yellow: mixing
Yellow mixes with white to make pastel yellow.
Spectral yellow blends with spectral blue to make white. Pigment yellow blends with pigment blue to make green. Spectral yellow blends with spectral red to make orange. Pigment yellow blends with pigment red to make brown. Olive is dark low-saturation yellow (dark yellow-green).
yellow: distance
People see yellow moderately well at farther distances.
yellow: retina
People do not see yellow well at visual periphery.
yellow: range
Yellow has narrow wavelength range.
orange: purity
Spectral orange can mix red and yellow. Pigment orange can mix red and yellow.
orange: properties
Orange is slightly alerting color. Orange is warm color. Orange is light color.
orange: mixing
Orange mixes with white to make pastel orange.
Spectral orange blends with spectral blue-green to make white. Pigment orange blends with pigment blue-green to make black. Spectral orange blends with spectral cyan to make yellow. Pigment orange blends with pigment cyan to make brown. Spectral orange blends with spectral red to make light red-orange. Pigment orange blends with pigment red to make dark red-orange.
orange: distance
People do not see orange well at farther distances.
orange: retina
People do not see orange well at visual periphery.
orange: range
Orange has narrow wavelength range.
violet: purity
Spectral violet can mix blue and red. Pigment violet has red and so is purple.
violet: properties
Violet is calming color. Violet is cool color. Violet is light color.
violet: mixing
Violet mixes with white to make pastel violet.
Spectral violet blends with spectral yellow-green to make white. Pigment violet blends with pigment yellow-green to make black. Spectral violet blends with spectral red to make purples. Pigment violet blends with pigment red to make purples.
violet: distance
People see violet well at farther distances.
violet: retina
People see violet well at visual periphery.
violet: range
Violet has narrow wavelength range.
violet: intensity
Violet can fade in intensity to dark purple then black.
brown: purity
Pigment brown can mix red, yellow, and green. Brown is commonest color but is not spectral color. Brown is like dark orange pigment or dark yellow-orange. Brown color depends on contrast and surface texture.
brown: properties
Brown is not alerting or calming. Brown is warm color. Brown is dark color.
brown: mixing
Brown mixes with white to make pastel brown.
Pigment brown blends with other pigments to make dark brown or black.
brown: distance
People do not see brown well at farther distances.
brown: retina
People do not see brown well at visual periphery.
brown: range
Brown is not spectral color and has no wavelength range.
purple: purity
Purples come from mixing red and blue. They have no green, to which they are complementary. Purples are non-spectral colors, because reds have longer wavelengths and blues have shorter wavelengths.
purple: saturation
Purple is low-saturation magenta.
Hue, brightness, and saturation ranges make all perceivable colors {gamut, color}. Perceivable-color range is greater than any three-primary-color additive-combination range. However, allowing negative (subtracted) red in color matches covers the full color gamut.
For subtractive colors, combining three pure color pigments {primary color}, such as red, yellow, and blue, can make most other colors.
secondary color
Mixing primary-color pigments {secondary color} makes magenta from red and blue, green from blue and yellow, and orange from red and yellow.
tertiary color
Mixing primary-color and secondary-color pigment {tertiary color} {intermediate color} makes chartreuse from yellow and green, cyan from blue and green, violet from blue and magenta, red-magenta, red-orange, and yellow-orange.
non-unique
Primary colors are not unique. Besides red, yellow, and blue, other triples can make most colors.
Color can have light surround and appear to reflect light {related color}. Brown and gray can appear only when other colors are present. If background is white, gray appears darker. If background is black, gray appears lighter. Color can have dark surround and appear luminous {unrelated color}.
People can see colors {spectral color}| from illumination sources. Light from sources can have one wavelength.
seven categories
Violets are 380 to 435 nm, with middle 408 nm and range 55 nm. Blues are 435 to 500 nm, with middle 463 nm and range 65 nm. Cyans are 500 to 520 nm, with middle 510 nm and range 20 nm. Greens are 520 to 565 nm, with middle 543 nm and range 45 nm. Yellows are 565 to 590 nm, with middle 583 nm and range 25 nm. Oranges are 590 to 625 nm, with middle 608 nm and range 35 nm. Reds are 625 to 740 nm, with middle 683 nm and range 115 nm.
fifteen categories
Spectral colors start at short-wavelength purplish-blue. Purplish-blues are 400 to 450 nm, with middle 425 nm. Blues are 450 to 482 nm, with middle 465. Greenish-blues are 482 to 487 nm, with middle 485 nm. Blue-greens are 487 to 493 nm, with middle 490 nm. Bluish-greens are 493 to 498 nm, with middle 495 nm. Greens are 498 to 530 nm, with middle 510 nm. Yellowish-greens are 530 to 558 nm, with middle 550 nm. Yellow-greens are 558 to 568 nm, with middle 560 nm. Greenish-yellows are 568 to 572 nm, with middle 570 nm. Yellows are 572 to 578 nm, with middle 575 nm. Yellowish-oranges are 578 to 585, with middle 580 nm. Oranges are 585 to 595 nm, with middle 590 nm. Reddish-oranges and orange-pinks are 595 to 625 nm, with middle 610 nm. Reds and pinks are 625 to 740 nm, with middle 640 nm. Spectral colors end at long-wavelength purplish-red.
People can see colors {non-spectral hue} that have no single wavelength but require two wavelengths. For example, mixing red and blue makes magenta and other reddish purples. Such a mixture stimulates short-wavelength cones and long-wavelength cones but not middle-wavelength cones.
Blue, red, yellow, and green describe pure colors {unique hue}. Unique red occurs only at low brightness, because more brightness adds yellow. Other colors mix unique hues. For example, orange is reddish yellow or yellowish red, and purples are reddish blue or bluish red.
Three-dimensional mathematical spaces {color space} can use signals or signal combinations from the three different cone cells to give colors coordinates.
Circular color scales {color wheel} can show sequence from red to magenta.
simple additive color wheel
Colors on circle circumference can show correct color mixing. See Figure 1. Two-color mixtures have color halfway between the colors. Complementary colors are opposite. Three complementary colors are 120 degrees apart. Red is at left, blue is 120 degrees to left, and green is 120 degrees to right. Yellow is halfway between red and green. Cyan is halfway between blue and green. Magenta is halfway between red and blue. Orange is between yellow and red. Chartreuse is between yellow and green. Indigo or ultramarine is between blue and violet. Violet is between indigo or ultramarine and magenta. Non-spectral colors are in quarter-circle from violet to red. Cone color receptors, at indigo or ultramarine, green, and yellow-green positions, are in approximately half-circle.
simple subtractive color wheel
For subtractive colors, shift bluer colors one position: red opposite green, vermilion opposite cyan, orange opposite blue, yellow opposite indigo, and chartreuse opposite violet. Color subtraction makes darker colors, which are bluer, because short-wavelength receptor has higher weighting than other two receptors. It affects reds and oranges little, greens some, and blues most. Blues and greens shift toward red to add less blue, so complementary colors make black rather than blue-black. See Figure 2.
quantum chromodynamics color circle
Additive color wheel can describe quantum-chromodynamics quark color-charge complex-number vectors. On complex-plane unit circle, red coordinates are (+1, 0*i). Green coordinates are (-1/2, -(3^(0.5))*i/2). Blue coordinates are (-1/2, +(3^(0.5))*i/2). Yellow coordinates are (+1/2, -(3^(0.5))*i/2). Cyan coordinates are (-1, 0*i). Magenta coordinates are (+1/2, +(3^(0.5))*i/2).
To find color mixtures, add vectors. A quark and an antiquark add to make mesons, which have no color and whose resultant vector is zero. Three quarks add to make protons and neutrons, which have no color and whose resultant vector is zero. Color mixtures that result in non-zero vectors have colors and are not physical.
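A minimal sketch of the vector addition above, with the quark color charges as complex-plane unit vectors; colorless combinations sum to zero.

```python
# Minimal sketch: quark color charges as unit vectors on the complex plane.
SQ3 = 3 ** 0.5
color = {
    "red":     complex(+1.0, 0.0),
    "green":   complex(-0.5, -SQ3 / 2),
    "blue":    complex(-0.5, +SQ3 / 2),
    "yellow":  complex(+0.5, -SQ3 / 2),   # anti-blue
    "cyan":    complex(-1.0, 0.0),        # anti-red
    "magenta": complex(+0.5, +SQ3 / 2),   # anti-green
}

# A baryon (red + green + blue) and a meson (red + anti-red) are colorless.
print(color["red"] + color["green"] + color["blue"])   # 0j
print(color["red"] + color["cyan"])                    # 0j
```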
color wheel by five-percent intervals
Color wheel can separate all colors equally. Divide color circle into 20 parts of 18 degrees each. Red = 0, orange = 2, yellow = 4, chartreuse = 6, green = 8, cyan = 10, blue = 12, indigo or ultramarine = 14, violet = 16, and magenta = 18. Crimson = 19, turquoise (cyan-blue) = 11, spring green (cyan-green) = 9, yellow-orange = 3, and vermilion (red-orange) = 1. Primary colors are at 0, 8, and 12. Secondary colors are at 4, 10, and 18. Tertiary colors are at 2, 6, and 14/16. Complementary colors are opposite. See Figure 3.
color wheel with number line
Set magenta = 0 and green = 1. Red = 0.33, and blue = 0.33. Yellow = 0.67, and cyan = 0.67. Complementary colors add to 1.
color wheel with four points
Blue, green, yellow, and red make a square. Green is halfway between blue and yellow. Yellow is halfway between green and red. Blue is halfway between green and red in other direction. Red is halfway between yellow and blue in other direction. Complementary pigments are opposite. Adding magenta, cyan, chartreuse, and orange makes eight points, like tones of an octave but separated by equal intervals, which can be harmonic ratios: 2/1, 3/2, 4/3, and 5/4.
white and black
Color wheel has no black or white, because they mostly depend on brightness. Adding black, gray, and white makes color cylinder, on which unsaturated colors are pastels or dark colors.
Color-space systems {chromaticity diagram} {CIE Chromaticity Diagram} can use luminance Y and two coordinates, x and y, related to hue and saturation. CIE system uses spectral power distribution (SPD) of light emitted from surfaces.
tristimulus
Retina has three cone types, each with maximum-output stimulus frequency {tristimulus values}, established by eye sensitivity measurements. Using tristimulus values allows factoring out luminance brightness to establish luminance coordinate. Factoring out luminance leaves two chromaticity color coordinates.
color surface
Chromaticity coordinates define border of upside-down U-shaped color space, giving all maximum-saturation hues from 400 to 700 nm. Along the flat bottom border are purples. Plane middle regions represent decreasing saturation from edges to middle, with completely unsaturated white in the middle. For example, between middle white and border reds and purples are pinks. Central point is where x and y equal 1/3. From border to central white, regions have same hue with less saturation [Hardin, 1988]. CIE system can use any three primary colors, not just red, green, and blue.
Color-space systems {Munsell color space} can use color samples spaced by equal differences. Hue is on color-circle circumference, with 100 equal hue intervals. Saturation {chroma, saturation} {chrominance} is along color-circle radius, with 10 to 18 equal intervals, for different hues. Brightness {light value} is along perpendicular above color circle, with black at 0 units and white at 10 units. Magenta is between red and violet. In Munsell system, red and cyan are on same diameter, yellow and blue are on another diameter, and green and magenta are on a diameter [Hardin, 1988].
Color-space systems {Ostwald color space} can use standard samples and depend on reflectance. Colors have three coordinates: percentage of total lumens for main wavelength C, white W, and black B. Wavelength is hue. For given wavelength, higher C gives greater purity, and higher W with lower B gives higher luminance [Hardin, 1988].
Color-space systems {Swedish Natural Color Order System} (NCS) can depend on how primary colors and other colors mix [Hardin, 1988].
If two different colors are adjacent, each color adds its complementary color to the other {color contrast}. If bright color is beside dark color, contrast increases. If white and black areas are adjacent, they add opposite color to each other. If another color overlays background color, brighter color dominates. If brighter color is in background, it shines through overlay. If darker color is in background, overlay hides it.
Two adjacent different-colored objects have enhanced color differences {simultaneous contrast}. Viewing one color and then another also enhances their difference {successive contrast}.
All colors from surface point can mix {color mixture}.
intermediate color
Two colors mix to make the intermediate color. For example, red and orange make red-orange vermilion. See Figure 1.
colors mix uniquely
Colors blend with other colors differently.
additive color mixture
Colors from light sources add {additive color mixture}. No additive spectral-color mixture can make blue or red. Magenta and orange cannot make red, because magenta has blue, orange has yellow and green, and red has no blue or green. Indigo and cyan cannot make blue, because indigo has red and cyan has green, and blue has no green or red.
subtractive color mixture
Colors from pigmented surfaces have colors from source illumination minus colors absorbed by pigments {subtractive color mixture}. Colors from pigment reflections cannot add to make red or to make blue. Blue and yellow pigments reflect green, because both reflect some green, and sum of greens is more than reflected blue or yellow. Red and yellow pigments reflect orange, because each reflects some orange, and sum of oranges is more than reflected red or yellow.
For subtractive colors, mixing cannot make red, blue, or yellow. Magenta and orange cannot make red, because magenta has blue, orange has yellow and green, and red has no blue or green. Indigo and cyan cannot make blue, because indigo has red and cyan has green, and blue has no red or green. Chartreuse and orange cannot make yellow, because chartreuse has green and some indigo, orange has red and some indigo, and yellow has no indigo.
pastel colors
Colors mix with white to make pastel colors.
similarity
Similar colors mix to make the intermediate color.
primary additive colors
Red, green, and blue are the primary additive colors.
primary subtractive colors
Red, yellow, and blue, or magenta, yellow, and cyan, are the primary subtractive colors.
secondary additive colors
Primary additive-color mixtures make secondary additive colors: yellow from red and green, magenta from red and blue, and cyan from green and blue.
secondary subtractive colors
Primary subtractive-color mixtures make secondary subtractive colors: orange from red and yellow, magenta from red and blue, and green from yellow and blue.
tertiary additive colors
Mixing primary and secondary additive colors makes tertiary additive colors: orange from red and yellow, violet from blue and magenta, and chartreuse from yellow and green.
tertiary subtractive colors
Mixing primary and secondary subtractive colors makes tertiary subtractive colors: cyan from blue and green, violet from blue and magenta, and chartreuse from yellow and green.
Two colors {complementary color}| can add to make white. Complementary colors can be primary, secondary, or tertiary colors.
complementary additive colors
Colors with equal amounts of red, green, and blue make white. Red and cyan, yellow and blue, or green and magenta make white.
Equal red, blue, and green contributions make white light.
complementary subtractive colors
Colors that mix to make equal amounts of red, yellow, and blue make black. Orange and blue, yellow and indigo/violet, or green and red make black. Equal magenta, yellow, and cyan contributions make black.
Grassmann described color-mixing laws {Grassmann's laws} {Grassmann laws}. Grassmann's laws are vector additions and multiplications in wavelength mixture space.
If two pairs of wavelengths at specific intensities match the same color, adding the pairs matches that color at doubled intensity: if C1 + C2 = x and C3 + C4 = x, then (C1 + C2) + (C3 + C4) = 2x, the same hue but brighter. For example, if a blue-and-yellow pair makes a green, adding two such pairs makes the same green, only brighter.
If a pair of wavelengths at specific intensities matches a color, adding the same wavelength at the same intensity to both the pair and the matched color preserves the match: if C1 + C2 = x, then (C1 + C2) + C3 = x + C3. For example, if a blue-and-yellow pair matches a green, adding red to the pair matches the green-plus-red mixture.
If a pair of wavelengths at specific intensities makes a color, changing both intensities equally makes the same color as changing the pair intensity: if C1 + C2 = x, then n*C1 + n*C2 = n*(C1 + C2) = n*x. For example, if a blue-and-yellow pair makes a green, increasing both color intensities by the same factor makes the same green, only brighter.
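A minimal sketch of the three laws above as vector algebra in a hypothetical three-receptor mixture space, where two lights match when their excitation triples are equal; the triples are illustrative, not measured values.

```python
# Minimal sketch: Grassmann's laws as linear operations on receptor triples.
import numpy as np

blue   = np.array([0.10, 0.20, 0.90])
yellow = np.array([0.80, 0.70, 0.10])
green  = blue + yellow                  # the color the pair matches
red    = np.array([0.90, 0.30, 0.05])

# Adding two pairs that each match green gives green at double intensity.
assert np.allclose((blue + yellow) + (blue + yellow), 2 * green)

# Adding the same light to a mixture and to its match preserves the match.
assert np.allclose((blue + yellow) + red, green + red)

# Scaling both components scales the match (same hue, brighter).
assert np.allclose(3 * blue + 3 * yellow, 3 * green)
print("Grassmann additivity and scaling hold in this linear model")
```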
Wheel with black and white areas, rotated at five to ten Hz to give a flicker rate below fusion frequency, in strong light, can produce intense colors {Benham's top} {Benham top} {Benham disk}, because color results from different color-receptor-system time constants.
Color perception depends on hue, saturation, and brightness. Mostly hue and saturation {chromaticity} make colors. Brightness does not affect chromaticity much [Kandel et al., 1991] [Thompson, 1995].
Spectral colors depend on light wavelength and frequency {hue}. People can distinguish 160 hues, from light of wavelength 400 nm to 700 nm. Therefore, people can distinguish colors differing by approximately 2 nm of wavelength.
color mixtures
Hue can come from light of one wavelength or light mixtures with different wavelengths. Hue takes the weighted average of the wavelengths. Assume colors can have brightness 0 to 100. If red is 100, green is 0, and blue is 0, hue is red at maximum brightness. If red is 50, green is 0, and blue is 0, hue is red at half maximum brightness. If red is 25, green is 0, and blue is 0, hue is red at quarter maximum brightness.
If red is 100, green is 100, and blue is 0, hue is yellow at maximum brightness. If red is 50, green is 50, and blue is 0, hue is yellow at half maximum brightness. If red is 25, green is 25, and blue is 0, hue is yellow at quarter maximum brightness.
If red is 100, green is 50, and blue is 0, hue is orange at maximum brightness. If red is 50, green is 25, and blue is 0, hue is orange at half maximum brightness. If red is 24, green is 12, and blue is 0, hue is orange at quarter maximum brightness.
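A minimal sketch of the point above, on the document's 0-to-100 scale: scaling all primaries together changes brightness but leaves the ratio pattern, and so the hue, unchanged.

```python
# Minimal sketch: the primary-color ratio pattern, which sets hue.
def hue_ratios(red, green, blue):
    total = red + green + blue
    return tuple(round(c / total, 2) for c in (red, green, blue))

print(hue_ratios(100, 50, 0))   # (0.67, 0.33, 0.0): orange at full brightness
print(hue_ratios(50, 25, 0))    # (0.67, 0.33, 0.0): same orange at half brightness
```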
Lightness {luminance factor} is the fraction of incident light transmitted or reflected diffusely. Lightness sums the three primary-color (red, green, and blue) brightnesses. Assume each color can have brightness 0 to 100. For example, if red is 100, green is 100, and blue is 100, lightness is maximum brightness. If red is 100, green is 100, and blue is 50, lightness is 83% of maximum brightness. If red is 100, green is 50, and blue is 50, lightness is 67% of maximum brightness. If red is 67, green is 17, and blue is 17, lightness is 33% of maximum brightness. If red is 17, green is 17, and blue is 17, lightness is 17% of maximum brightness.
Pure saturated color {saturation, color}| {purity, color} has no white, gray, or black. White, gray, and black have zero purity. Spectral colors can have different white, gray, or black percentages (unsaturation). Saturated pigments mixed with black make dark colors, like ochre. Saturated pigments mixed with white make light pastel colors, like pink.
frequency range
The purest most-saturated color has light with one wavelength. Saturated color pigments reflect light with narrow wavelength range. Unsaturated pigments reflect light with wide wavelength range.
colors and saturation
All spectral colors can mix with white. White is lightest and looks least saturated. Yellow is the lightest color. Monochromatic yellows have largest saturation range (as in Munsell color system), change least as saturation changes, and look least saturated (most white) at all saturation levels. Green is second-lightest color. Monochromatic greens have second-largest saturation range, change second-least as saturation changes, and look second-least saturated (second-most white) at all saturation levels. Red is third-lightest color. Monochromatic reds have average saturation range, change third-least as saturation changes, and look third-least saturated (third-most white) at all saturation levels. Blue is darkest color. Monochromatic blues have smallest saturation range, change most as saturation changes, and look fourth-least saturated (least white) at all saturation levels. Black is darkest and looks most saturated.
calculation
Whiteness, grayness, and blackness have all three primary colors (red, green, and blue) in equal amounts. Whiteness, grayness, or blackness level is brightness of lowest-level primary color times three. Subtracting the lowest level from all three primary colors and summing the two highest calculates hue brightness. Total brightness sums primary-color brightnesses. Saturation is hue brightness divided by brightness. Assume colors can have brightness 0 to 100. If red is 100, green is 100, and blue is 100, whiteness is maximum. If red is 50, green is 50, and blue is 50, grayness is half maximum. If red is 25, green is 25, and blue is 25, grayness is quarter maximum.
Assume maximum brightness is 100%. If red is 33%, green is 33%, and blue is 33%, brightness is 100% = (33% + 33% + 33%), whiteness is 100% = (33% + 33% + 33%), hue is white at 0%, and saturation is 0% = (0% / 100%). If red is 17%, green is 17%, and blue is 17%, brightness is 50% = (17% + 17% + 17%), whiteness is 50% = (17% + 17% + 17%), hue is white at 0%, and saturation is 0% = (0% / 50%). If red is 33%, green is 33%, and blue is 17%, brightness is 83% = (33% + 33% + 17%), whiteness is 50% = (17% + 17% + 17%), hue is yellow at 33% = (33% - 17%) + (33% - 17%), and saturation is 40% = (33% / 83%). If red is 67%, green is 17%, and blue is 17%, brightness is 100% = (67% + 17% + 17%), whiteness is 50% = (17% + 17% + 17%), hue is red at 50% = (67% - 17%), and saturation is 50% = (50% / 100%). If red is 100%, green is 0%, and blue is 0%, brightness is 100% = (100% + 0% + 0%), whiteness is 0% = (0% + 0% + 0%), hue is red at 100% = (100% - 0%), and saturation is 100% = (100% / 100%).
Assume colors can have brightness 0 to 100. If red is 100, green is 50, and blue is 50, red is 50 = 100 - 50, green is 0 = 50 - 50, blue is 0 = 50 - 50, brightness is 200, whiteness is 150 = 50 + 50 + 50, and hue is pink with red saturation of 25% = 50 / 200. If red is 100, green is 100, and blue is 50, red is 50 = 100 - 50, green is 50 = 100 - 50, blue is 0 = 50 - 50, brightness is 250, whiteness is 150 = 50 + 50 + 50, and hue is yellow with saturation of 40% = (50 + 50) / 250 = 100 / 250. If red is 75, green is 50, and blue is 25, red is 50 = 75 - 25, green is 25 = 50 - 25, blue is 0 = 25 - 25, brightness is 150, whiteness is 75 = 25 + 25 + 25, and hue is orange with saturation of 50% = (50 + 25) / 150 = 75 / 150.
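A minimal sketch of this procedure, on the same 0-to-100 scale, reproducing the three worked examples above:

```python
# Minimal sketch: whiteness is three times the lowest primary, hue brightness
# is what remains after subtracting that lowest level, and saturation is hue
# brightness divided by total brightness.
def analyze(red, green, blue):
    lowest = min(red, green, blue)
    whiteness = 3 * lowest
    hue_brightness = (red - lowest) + (green - lowest) + (blue - lowest)
    total = red + green + blue
    saturation = hue_brightness / total if total else 0.0
    return whiteness, hue_brightness, total, saturation

print(analyze(100, 50, 50))    # (150, 50, 200, 0.25): pink, red at 25% saturation
print(analyze(100, 100, 50))   # (150, 100, 250, 0.4): yellow at 40% saturation
print(analyze(75, 50, 25))     # (75, 75, 150, 0.5): orange at 50% saturation
```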
Hue depends on saturation {Abney effect}.
If luminance is enough to stimulate cones, hue changes as luminance changes {Bezold-Brücke phenomenon} {Bezold-Brücke effect}.
At constant luminance, brightness depends on both saturation and hue {Helmholtz-Kohlrausch effect}. If hue is constant, brightness increases with saturation. If saturation is constant, brightness changes with hue.
Saturation increases as luminance increases {Hunt effect}.
Systems that can perform same visual functions that people perform can have no qualia {absent qualia}. Perhaps, machines can duplicate neuron and synapse functions, as in the China-body system [Block, 1980], and so do anything that human visual system can do. Presumably, system physical states and mechanisms, no matter how complex, do not have or need qualia. System has inputs, processes, and outputs. Perhaps, such systems can have qualia, but complexity, large scale, or inability to measure prevents people from knowing.
Perhaps, a hue can exist that is not any combination of red, blue, green, or yellow {alien color}.
Planets {Inverted Earth} {inverted qualia} can have complementary colors of Earth things [Block, 1990]. For same things, its people experience complementary color compared to Earth-people color experience. However, Inverted-Earth people call what they see complementary color names rather than Earth color names, because their vocabulary is different. When seeing tree leaves, Inverted-Earth people see magenta and say green.
If Earth people go to Inverted Earth and wear inverting-color lenses, they see same colors as on Earth and call colors same names as on Earth. When seeing tree leaves, they see green and call them green, because they use Earth language.
If Earth people go to Inverted Earth and do not wear inverting-color lenses, they see complementary colors rather than Earth colors and call them Earth names for complementary colors. However, if they stay there, they learn to use Inverted-Earth language and call complementary colors Inverted-Earth names, though phenomena remain unchanged. When seeing tree leaves, they see magenta and say green. Intentions change though objects remain the same. Therefore, phenomena are not representations.
problems
Intentions probably do not change, because situation requires no adaptations. The representation is fundamentally the same.
Perhaps, qualia do change.
Perhaps, spectrum can invert, so people see short-wavelength light as red and long-wavelength light as blue {inverted spectrum}. Perhaps, phenomena and experiences can be their opposites without affecting moods, emotions, body sensations, perceptions, cognitions, or behaviors. Subject experiences differently, but applies same functions as other people, so subject reactions and initiations are no different than normal. This can start at birth or change through learning and maturation. Perhaps, behavior and perception differences diminish over time by forgetting or adaptation.
representation and phenomena
Seemingly, for inverted spectrum, representations are the same, but inverted phenomena replace phenomena. Functions or physical states remain identical, but qualia differ. If phenomena involve representations, inverted spectra are not metaphysically possible. If phenomena do not involve representations, inverted spectra are metaphysically possible.
inversion can be impossible
Inverted spectra are not necessarily conceptually possible, because they can lead to internal contradictions. Colors do not have exact inversions, because colors mix differently, so no complete and consistent color inversion is possible.
Vision processes can recognize patterns {pattern recognition, vision} {shape perception}.
patterns
Patterns have objects, features, and spatial relations. Patterns can have points, lines, angles, waves, histograms, grids, and geometric figures. Objects have brightness, hue, saturation, size, position, and motion.
patterns: context
Pattern surroundings and/or background have brightness, hue, saturation, shape, size, position, and motion.
patterns: movement
Mind recognizes objects with translation-invariant features more easily if they are moving. People can recognize objects that they see moving behind a pinhole.
patterns: music
Mind recognizes music by rhythm or by intonation differences around main note. People can recognize rhythms and rhythmic groups. People can recognize melodies transformed from another melody. People most easily recognize same melody in another key. People easily recognize melodies that exchange high notes for low. People can recognize melodies in reverse. People sometimes recognize melodies with both reverse and exchange.
factors: attention
Pattern recognition depends on alertness and attention.
factors: memory
Recall easiness varies with attention amount, emotion amount, cue availability, and/or previous-occurrence frequency.
animals
Apes recognize objects using fast multisensory processes and slow single-sense processes. Apes do not transfer learning from one sense to another. Frogs can recognize prey and enemy categories [Lettvin et al., 1959]. Bees can recognize colors, except reds, and do circling and wagging dances, which show food-source angle, direction, distance, and amount.
machines
Machines can find, count, and measure picture object areas; classify object shapes; detect colors and textures; and analyze one image, two stereo images, or image sequences. Recognition algorithms have scale invariance.
process levels
Pattern-recognition processing has three levels. Processing depends on effective inputs and useful outputs {computational level, Marr}. Processing uses functions to go from input to output {algorithmic level, Marr}. Processing machinery performs algorithms {physical level, Marr} [Marr, 1982].
neuron pattern recognition
Neuron dendrite and cell-body synapses contribute different potentials to axon initial region. Input distributions represent patterns, such as geometric figures. Different input-potential combinations can trigger neuron impulse. As in statistical mechanics, because synapse number is high, one input-potential distribution has highest probability. Neurons detect that distribution and no other. Learning and memory change cell and affect distribution detected.
Children and adults immediately recognize their images in mirrors {mirror recognition}. Chimpanzees, orangutans, bonobos, and two-year-old humans, but not gorillas, baboons, and monkeys, can recognize themselves in mirrors after using mirrors for a time [Gallup, 1970].
species member
Animals and human infants recognize that their images in mirrors are species members, but they do not recognize themselves. Perhaps, they have no mirror-reflection concept.
movements
Pigeons, monkeys, and apes can use mirrors to guide movements. Some apes can touch body spots that they see in mirrors. Chimpanzees, orangutans, bonobos, and two-year-old humans, but not gorillas, baboons, and monkeys, can use mirror reflections to perceive body parts and to direct actions [Gallup, 1970].
theory of mind
Autistic children use mirrors normally but appear to have no theory of mind. Animals have no theory of mind.
Will a blind person that knows shapes by touch recognize the shapes if able to see {Molyneux problem}? Testing cataract patients after surgery has not yet resolved this question.
Brain has mechanisms to recognize patterns {pattern recognition, methods} {pattern recognition, mechanisms}.
mechanism: association
The first and main pattern-recognition mechanism is association (associative learning). Complex recognition uses multiple associations.
mechanism: feature recognition
Object or event classification involves high-level feature recognition, not direct object or event identification. Brain extracts features and feeds forward to make hypotheses and classifications. For example, people can recognize meaningful facial expressions and other complex perceptions in simple drawings that have key features [Carr and England, 1995].
mechanism: symbol recognition
To recognize letters, on all four sides, check for point, line, corner, convex curve, W or M shape, or S or squiggle shape. 6^4 = 1296 combinations are available. Letters, numbers, and symbols add to less than 130, so symbol recognition is robust [Pao and Ernst, 1982].
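A toy illustration of the four-side coding idea, assuming six hypothetical side-feature labels; the example symbol codes are invented, not taken from Pao and Ernst.

```python
# Six hypothetical side features; four sides give 6**4 = 1296 possible codes,
# far more than the roughly 130 letters, digits, and symbols to distinguish.
FEATURES = ("point", "line", "corner", "convex", "zigzag", "squiggle")

def side_code(top, right, bottom, left):
    """Encode a symbol as the tuple of features seen on its four sides."""
    assert all(f in FEATURES for f in (top, right, bottom, left))
    return (top, right, bottom, left)

# Invented example codes for two symbols (illustrative only).
known = {
    side_code("line", "line", "line", "line"): "box-like symbol",
    side_code("point", "convex", "convex", "convex"): "round-topped symbol",
}

print(len(FEATURES) ** 4)                                   # 1296
print(known[side_code("line", "line", "line", "line")])     # box-like symbol
```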
mechanism: templates
Templates have non-accidental and signal properties that define object classes. Categories have rules or criteria. Vision uses structural descriptions to recognize patterns. Brains compare input patterns to templates using constraint satisfaction on rules or criteria and then select the best-fitting match by score. If input activates one representation strongly and inhibits others, representation sends feedback to visual buffer, which then augments input image and modifies or completes input image by altering size, location, or orientation. If representation and image then match even better, mind recognizes object. If not, mind inhibits or ranks that representation and activates next representation.
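A minimal sketch of scoring templates against an input and selecting the best-fitting match; it assumes binary feature vectors and a simple agreement count, and it omits the feedback and image-completion steps described above.

```python
import numpy as np

def best_template(input_pattern, templates):
    """Score each stored template against the input pattern and return the
    best match; the score counts positions where the binary vectors agree."""
    pattern = np.array(input_pattern)
    scores = {name: int(np.sum(np.array(t) == pattern))
              for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores

templates = {"T": [1, 1, 1, 0, 1, 0], "L": [1, 0, 0, 1, 1, 1]}
print(best_template([1, 1, 1, 0, 0, 0], templates))   # 'T' wins with score 5
```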
mechanism: viewpoint
Vision can reconstruct how object appears from any viewpoint using a minimum of two, and a maximum of six, different-viewpoint images. Vision calculates object positions and motions from three views of four non-coplanar points. To recognize objects, vision interpolates between stored representations. Mind recognizes symmetric objects better than asymmetric objects from new viewpoints. Recognition fails for unusual viewpoints.
importance: frequency
For recognition, frequency is more important than recency.
importance: orientation
Recognition processing ignores left-right orientation.
importance: parts
For recognition, parts are more important for nearby objects.
importance: size
Recognition processing ignores size.
importance: spatial organization
For recognition, spatial organization and overall pattern are more important than parts.
method: averaging
Averaging removes noise by emphasizing low frequencies and minimizing high frequencies.
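A one-dimensional sketch of averaging as low-pass filtering, assuming a NumPy moving average.

```python
import numpy as np

def smooth(signal, width=3):
    """Moving average: passes low frequencies and damps high-frequency noise."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

noisy = np.array([1.0, 5.0, 1.0, 5.0, 1.0, 5.0, 1.0])
print(smooth(noisy))   # the high-frequency alternation is flattened toward the mean
```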
method: basis functions
Hyper-basis-function (HBF) or radial-basis-function (RBF) expansions can separate scene into multiple dimensions.
method: cluster analysis
Pattern recognition can place classes or subsets in clusters in abstract space.
method: feature deconvolution
Cerebral cortex can separate feature from feature mixture.
method: differentiation
Differentiation subtracts second derivative from intensity and emphasizes high frequencies.
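Subtracting the second derivative from the intensity is, in effect, Laplacian sharpening. A one-dimensional sketch, assuming a discrete second difference on interior points.

```python
import numpy as np

def sharpen(intensity):
    """Subtract the discrete second derivative from the signal to emphasize
    high frequencies (edges); endpoints are left unchanged."""
    second = np.zeros_like(intensity, dtype=float)
    second[1:-1] = intensity[2:] - 2 * intensity[1:-1] + intensity[:-2]
    return intensity - second

step = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
print(sharpen(step))   # overshoot appears on both sides of the step, enhancing the edge
```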
method: generalization
Vision generalizes patterns by eliminating one dimension, using one subpattern, or including outer domains.
method: index number
Patterns can have algorithm-generated unique, unambiguous, and meaningful index numbers. Running reverse algorithm generates pattern from index number. Similar patterns have similar index numbers. Patterns differing by subpattern have index numbers that differ only by ratio or difference. Index numbers have information about shape, parts, and relations, not about size, distance, orientation, incident brightness, incident light color, and viewing angle.
Index numbers can be power series. Term coefficients are weights. Term sums are typically unique numbers. For patterns with many points, index number is large, because information is high.
Patterns have a unique point, like gravity center. Pattern points have unique distances from unique point. Power-series terms are for pattern points. Term sums are typically unique numbers that depend only on coordinates internal to pattern. Patterns differing by subpattern differ by ratio or difference.
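The power-series index idea can be sketched as follows; the centroid choice and the geometric weighting are assumptions for illustration, not the author's actual algorithm.

```python
import math

def pattern_index(points, weight=0.5):
    """Index a point pattern by a power series over sorted distances from the
    centroid. The index is unchanged by translation and rotation (distances to
    the centroid do not change), though, unlike the full scheme above, not by size."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    distances = sorted(math.hypot(x - cx, y - cy) for x, y in points)
    return sum(d * weight ** k for k, d in enumerate(distances))

triangle = [(0, 0), (2, 0), (1, 2)]
shifted = [(5, 5), (7, 5), (6, 7)]          # same shape, translated
print(pattern_index(triangle), pattern_index(shifted))   # equal index numbers
```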
method: lines
Pattern recognition uses shortest line, extends line, or links lines.
method: intensity
Pattern recognition uses gray-level changes, not colors. Motion detection uses gray-level and pattern changes.
method: invariance
Features can remain invariant as images deform or move. Holding all variables except one constant gives the derivative with respect to the remaining variable; such partial derivatives measure changes and differences and help find invariants.
method: line orientation
Secondary visual cortex neurons can detect line orientation, have large receptive fields, and have variable topographic mapping.
method: linking
Vision can connect pieces in sequence and fill gaps.
method: optimization
Vision can use dynamic programming to optimize parameters.
method: orientation
Vision accurately knows surface tilt and slant, directly, by tilt angle itself, not by angle function [Bhalla and Proffitt, 1999] [Proffitt et al., 1995].
method: probability
Brain uses statistics to assign probability to patterns recognized.
method: registers
Brain-register network can store pattern information, and brain-register network series can store processes and pattern changes.
method: search
Matching can use heuristic search to find feature or path. Low-resolution search over whole image looks for matches to feature templates.
method: separation into parts
Vision can separate scene into additive parts, by boundaries, rather than using basis functions.
method: sketching
Vision uses contrast for boundary making.
To recognize structure, brain can use information about that structure {instructionism, recognition}.
To recognize structure, brain can compare to multiple variations and select best match {selectionism, recognition}, just as cells try many antibodies to bind antigen.
To identify objects, algorithms can test patterns against feature sets. If patterns have features, algorithms add distinctiveness weight to object distinctiveness-weight sum. If object has sum greater than threshold {detection threshold} {threshold of detection}, algorithm identifies pattern as object. Context sets detection threshold.
In recognition algorithms, object features can have weights {distinctiveness weight}, based on how well feature distinguishes object from other objects. Algorithm designers use feature-vs.-weight tables or automatically build tables using experiences.
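A minimal sketch of the weighted-feature identification scheme; the feature-weight table, object names, and threshold are invented for the example.

```python
# Hypothetical distinctiveness weights: higher values mark features that better
# distinguish the object from other objects.
WEIGHTS = {
    "cup": {"handle": 0.5, "concave-top": 0.25, "cylindrical": 0.25},
    "ball": {"round": 0.75, "uniform-texture": 0.25},
}

def identify(observed_features, threshold=0.5):
    """Sum the distinctiveness weights of observed features for each object and
    report every object whose sum exceeds the detection threshold (set by context)."""
    hits = {}
    for obj, table in WEIGHTS.items():
        score = sum(w for feature, w in table.items() if feature in observed_features)
        if score > threshold:
            hits[obj] = score
    return hits

print(identify({"handle", "cylindrical", "round"}))   # {'cup': 0.75, 'ball': 0.75}
```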
Sharp brightness or hue difference indicates edge or line {edge detection}. Point clustering indicates edges. Vision uses edge information to make object boundaries and adds information about boundary positions, shapes, directions, and noise. Neuron assemblies have different spatial scales to detect different-size edges and lines. Tracking and linking connect detected edges.
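Edge detection from sharp brightness differences can be sketched with a simple gradient threshold; a minimal example, assuming a grayscale image stored as a NumPy array.

```python
import numpy as np

def detect_edges(image, threshold=0.5):
    """Mark pixels where the horizontal or vertical brightness difference is sharp."""
    gx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    gy = np.abs(np.diff(image, axis=0, prepend=image[:1, :]))
    return (np.maximum(gx, gy) > threshold).astype(int)

image = np.zeros((5, 5))
image[:, 2:] = 1.0               # bright right half, dark left half
print(detect_edges(image))       # ones mark the vertical boundary at column 2
```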
Algorithms {Gabor transform} {Gabor filter} can build series whose terms stand for independent visual features and have constant amplitude; term sums give the series [Palmer et al., 1991]. Visual-cortex complex cells act like Gabor filters with power series, whose terms have variables raised to powers. Complex-cell types are for specific surface orientations and object sizes. Gabor-filter complex cells typically make errors for edge gaps, small textures, blurs, and shadows.
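A standard Gabor kernel, a Gaussian-windowed sinusoid tuned to one orientation and wavelength, can be generated as follows; this is a generic sketch rather than the specific power-series formulation described above, and the parameter values are arbitrary.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=5.0, theta=0.0, sigma=3.0):
    """Return a Gabor kernel: a cosine grating at orientation theta with the
    given wavelength, windowed by a circular Gaussian of width sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)        # rotate coordinates
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_theta / wavelength)
    return envelope * carrier

kernel = gabor_kernel(theta=np.pi / 4)   # filter tuned to 45-degree structure
print(kernel.shape)                      # (15, 15)
```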
Non-parametric algorithms {histogram density estimate} can calculate density. Algorithm tests various cell sizes by nearest-neighbor method or kernel method. Density is average volume per point.
Using Bayesian theory, algorithms {image segmentation} can extend edges to segment image and surround scene regions.
Algorithms {kernel method} can test various cell sizes, to see how small volume must be to have only one point.
Algorithms {linear discriminant function} (Fisher) can find abstract-space hypersurface boundary between space regions (classes), using region averages and covariances.
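A minimal sketch of Fisher's linear discriminant, assuming two classes of two-dimensional points stored as NumPy arrays; the discriminant direction is w = Sw⁻¹(m₁ - m₂), where Sw sums the class covariances.

```python
import numpy as np

def fisher_direction(class_a, class_b):
    """Fisher's linear discriminant direction from class means and covariances."""
    mean_a, mean_b = class_a.mean(axis=0), class_b.mean(axis=0)
    within = np.cov(class_a, rowvar=False) + np.cov(class_b, rowvar=False)
    w = np.linalg.solve(within, mean_a - mean_b)
    return w / np.linalg.norm(w)

a = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0]])
b = np.array([[6.0, 0.0], [7.0, 1.0], [8.0, 0.5]])
w = fisher_direction(a, b)
print(w)
print(a @ w, b @ w)   # projections of the two classes separate along w
```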
Algorithms {memory-based models} (MBM) can match input-pattern components to template-pattern components, using weighted sums, to find highest scoring template. Scores are proportional to similarity. Memory-based models uniquely label component differences. Memory-based recognition, sparse-population coding, generalized radial-basis-function (RBF) networks, and hyper-basis-function (HBF) networks are similar algorithms.
Vision can manipulate images to see if two shapes correspond. Vision can zoom, rotate, stretch, color, and split images {mental rotation} [Shepard and Metzler, 1971] [Shepard and Cooper, 1982].
high level
Images transform by high-level perceptual and motor processing, not sense-level processing. Image movements follow abstract-space trajectories or proposition sequence.
motor cortex
Motor processes transform visual mental images, because spatial representations are under motor control [Shiekh, 1983].
time
People require more time to perform mental rotations that are physically awkward. Vision compares aligned images faster than translated, rotated, or inverted images.
Algorithms {nearest neighbor method} can test various cell sizes to see how many points (nearest neighbor) are in cells.
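A minimal sketch of the cell-counting idea behind the kernel and nearest-neighbor entries above: vary the cell size around a query point and see how many points fall inside; the square-cell shape and the sample points are assumptions for illustration.

```python
def points_in_cell(points, center, radius):
    """Count the points inside a square cell of the given half-width around center."""
    return sum(all(abs(p[i] - center[i]) <= radius for i in range(len(center)))
               for p in points)

points = [(0.1, 0.2), (0.3, 0.1), (0.9, 0.8), (0.95, 0.85), (0.5, 0.5)]
for radius in (0.1, 0.2, 0.4):
    count = points_in_cell(points, center=(0.5, 0.5), radius=radius)
    area = (2 * radius) ** 2
    # Density estimate as average area per point in the cell.
    print(radius, count, area / count if count else float("inf"))
```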
Algorithms {pattern matching} can try to match two network representations by two parallel searches, starting from each representation. Searches look for similar features, components, or relations. When both searches meet, they excite the intermediate point (not necessarily simultaneously), whose signals indicate matching.
Algorithms {pattern theory} can use feedforward and feedback processes and relaxation methods to move from input pattern toward memory pattern. Algorithm uses probabilities, fuzzy sets, and population coding, not formal logic.
For algorithms or observers, graphs {receiver operating characteristics} (ROC) can show true identification-hit rate versus false-hit rate. If correlation line is 45-degree-angle straight line, observer has as many false hits as true hits. If correlation line has steep slope, observer has mostly true hits and few false hits. If correlation line has maximum slope, observer has zero false hits and all true hits.
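ROC points can be computed by sweeping a decision threshold over detector scores; a minimal sketch, assuming scalar scores and binary signal-present labels.

```python
def roc_points(scores, labels):
    """Return (false-hit rate, true-hit rate) pairs, sweeping the threshold
    from strict to lenient."""
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for threshold in sorted(set(scores), reverse=True):
        true_hits = sum(1 for s, l in zip(scores, labels) if s >= threshold and l)
        false_hits = sum(1 for s, l in zip(scores, labels) if s >= threshold and not l)
        points.append((false_hits / negatives, true_hits / positives))
    return points

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]           # 1 = signal actually present
print(roc_points(scores, labels))     # rises quickly: mostly true hits before false hits
```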
Vision finds, separates, and labels visual areas by enlarging spatial features or partitioning scenes {region analysis}.
expanding
Progressive entrainment of larger and larger cell populations builds regions using synchronized firing. Regions form by clustering features, smoothing differences, relaxing/optimizing, and extending lines using edge information.
splitting
Regions can form by splitting spatial features or scenes. Parallel circuits break large domains into similar-texture subdomains for texture analysis. Parallel circuits find edge ends by edge interruptions.
For feature detection, brain can use classifying context or constrain classification {relational matching}.
Algorithms {response bias} can use recognition criteria iteratively set by receiver operability curve.
Vision separates scene features into belonging to object and not belonging {segmentation problem}|. Large-scale analysis comes first, and then local constraints apply. Context hierarchically divides image into non-interacting parts.
If brain knows reflectance and illumination, shading {shading}| can reveal shape. Line and edge detectors can find shape from shading.
Motion change and retinal disparity are equivalent perceptual problems, so finding distance from retinal disparity and finding shape from motion {shape from motion} use equivalent techniques.
Algorithms {signal detection theory} can find patterns in noisy backgrounds. Patterns have stronger signal strength than noise. Detectors have sensitivity and response criteria.
Vision can label vertices as three-intersecting-line combinations {vertex perception}. Intersections can be convex or concave, to right or to left.
Classification algorithms {production system} can use IF/THEN rules on input to conditionally branch to one feature or object. Production systems have three parts: fact database, production rule, and rule-choosing control algorithm.
database
Fact-database entries code for one state {local representation, database}, allowing memory.
rules
Production rules have form "IF State A, THEN Process N". Rules with same IF clause have one precedence order.
controller
Controller checks all rules, performing steps in sequence {serial processing}. For example, if system is in State A and rule starts "IF State A", then controller performs Process N, which uses fact-database data.
states
Discrete systems have state spaces whose axes represent parameters, with possible values. System starts with initial-state parameter settings and moves from state to state, along a trajectory, as controller applies rules.
Production systems have rules {production rule} for moving from one state to the next. Production rules have form "IF State A, THEN Process N". Rules with same IF clause have one precedence order.
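A toy production system in the IF/THEN form above, with an invented fact database, two rules, and a serial controller; the process and state names are illustrative only.

```python
# Toy production system: fact database, production rules, serial controller.
facts = {"state": "A", "count": 0}

def process_n(db):                 # invented process for the rule "IF State A"
    db["count"] += 1
    db["state"] = "B"

def process_m(db):                 # invented process for the rule "IF State B"
    db["state"] = "done"

rules = [                          # precedence order: first matching rule fires
    ("A", process_n),
    ("B", process_m),
]

def controller(db, max_steps=10):
    """Serially scan the rules each cycle, fire the first whose IF clause matches
    the current state, and halt when no rule applies."""
    for _ in range(max_steps):
        for condition, process in rules:
            if db["state"] == condition:
                process(db)
                break
        else:
            break                  # no rule matched: halt
    return db

print(controller(facts))           # {'state': 'done', 'count': 1}
```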
Parallel pattern-recognition mechanisms can fire whenever they detect patterns {ACT production system}. Firing puts new data elements in working memory.
Same production can match same data only once {Data Refractoriness production system}.
Production with best-matched IF-clause can have priority {Degree of Match production system}.
Goals are productions put into working memory. Only one goal can be active at a time {Goal Dominance}, so productions whose output matches active goal have priority.
Recently successful productions can have higher strength {Production Strength production system}.
Parallel pattern-recognition mechanisms can fire whenever they detect particular patterns {Soar production system}. Firing puts new data elements in working memory.
If two productions match same data, production with more-specific IF-clause wins {Specificity production system}.
Neuron assemblies can hold essential knowledge about patterns {explicit representation}, using information not in implicit representation. Mind calculates explicit representation from implicit representation, using feature extraction or neural networks [Kobatake et al., 1998] [Logothetis and Pauls, 1995] [Logothetis et al., 1994] [Sheinberg and Logothetis, 2001].
Neuron or pixel sets can hold object image {implicit representation}, with no higher-level knowledge. Implicit representation samples intensities at positions at times, like bitmaps [Kobatake et al., 1998] [Logothetis and Pauls, 1995] [Logothetis et al., 1994] [Sheinberg and Logothetis, 2001].
Algorithms {generalized cone} can describe three-dimensional objects as conical shapes, with axis length/orientation and circle radius/orientation. Main and subsidiary cones can be solid, hollow, inverted, asymmetric, or symmetric. Cone surfaces have patterns and textures [Marr, 1982]. Cone descriptions can use three-dimensional Fourier spherical harmonics, which have volumes, centroids, inertia moments, and inertia products.
Algorithms {generalized cylinder} can describe three-dimensional objects as cylindrical shapes, with axis length/orientation and circle radius/orientation. Main and subsidiary cylinders can be solid, hollow, inverted, asymmetric, or symmetric. Cylindrical surfaces have patterns and textures. Cylinder descriptions can use three-dimensional Fourier spherical harmonics, which have volumes, centroids, inertia moments, and inertia products.
Representations can describe object parts and spatial relations {structural description}. Structure units can be three-dimensional generalized cylinders (Marr), three-dimensional geons (Biederman), or three-dimensional curved solids {superquadrics} (Pentland). Structural descriptions are only good for simple recognition {entry level recognition}, not for superstructures or substructures. Vision uses viewpoint-dependent recognition, not structural descriptions.
Shape representations {template} can hold information for mechanisms to use to replicate or recognize {template theory} {naive template theory}. Template is like memory, and mechanism is like recall. Template can be coded units, shape, image, model, prototype, or pattern. Artificial templates include clay or wax molds. Natural templates are DNA/RNA. Templates can be abstract-space vectors. Using templates requires templates for all viewpoints, and so many templates.
Representations {vector coding} can be sense-receptor intensity patterns and/or brain-structure neuron outputs, which make feature vectors. Vector coding can identify rigid objects in Euclidean space. Vision uses non-metric projective geometry to find invariances by vector analysis [Staudt, 1847] [Veblen and Young, 1918]. Motor-representation middle and lower levels use code that indicates direction and amount.
The feeling of seeing whole scene {scene, vision} {vision, scene} results from maintaining general scene sense in semantic memory, attending repeatedly to scene objects, and forming object patterns. Vision experiences whole scene (perceptual field), not just isolated points, features, surfaces, or objects. Perceptual field provides background and context, which can identify objects and events.
scale
Scenes have different spatial frequencies in different directions and distances. Scenes can have low spatial frequency and seem open. Low-spatial-frequency scenes have more depth, less expansiveness, and less roughness, and are more typical of natural landscapes. Scenes can have high spatial frequency and seem closed. High-spatial-frequency scenes have less depth, more expansiveness, and more roughness, and are more typical of towns.
Scenes have numbers of objects {set size, scene}.
Scenes have patterns or structures of object and object-property placeholders {spatial layout}, such as smooth texture, rough texture, enclosed space, and open space. In spatial layouts, object and property meanings do not matter, only placeholder pattern. Objects and properties can fill object and object property placeholders to supply meaning. Objects have spatial positions, and relations to other objects, that depend on spacing and order. Spatial relations include object and part separations, feature and part conjunctions, movement and orientation directions, and object resolution.
Scenes have homogeneous color and texture regions {visual unit}.
Vision can recognize geometric features {shape, pattern} {pattern, features}.
lines
Shapes have lines, line orientations, and edges. Contour outlines indicate objects and enhance brightness and contrast. Irregular contours and hatching indicate movement. Contrast enhances contours, for example with Mach bands. Contrast differences divide large surfaces into parts.
axes
Shapes have natural position axes, such as vertical and horizontal, and natural shape axes, such as long axis and short axis. Vision uses horizontal, vertical, and radial axes for structure and composition.
relations
Objects are wholes and have parts. Wholes are part integrations or configurations and are about gist. Parts are standard features and are about details.
surfaces
Shape has surfaces, with surface curvatures, orientations, and vertices. Visual system can label lines and surfaces as convex, concave, or overlapping [Grunewald et al., 2002]. Shapes have shape-density functions, with projections onto axes or chords [Grunewald et al., 2002]. Shapes have distances and natural metrics, such as lines between points.
illuminance
Shapes have illuminance and reflectance.
Shapes have axis and chord ratios {area eccentricity} [Grunewald et al., 2002].
Shapes have perimeter squared divided by area {compactness, shape} [Grunewald et al., 2002].
Shapes have minimum chain-code sequences that make shape classes {concavity tree}, which have maximum and minimum concavity-shape numbers [Grunewald et al., 2002].
Shapes have connectedness {Euler number, shape} [Grunewald et al., 2002].
Pattern recognition can use conscious memory {explicit recognition} [McDougall, 1911] [McDougall, 1923].
Pattern recognition can be automatic {implicit recognition} [McDougall, 1911] [McDougall, 1923], like reflexes.
Figures have three-dimensional representations or forms {gestalt}| built innately by vision, by analyzing stimulus interactions. Gestalt needs no learning.
Gestalt law
Finding stimulus relations or applying organizational laws {insight, Gestalt} allows recognizing figures, solving problems, and performing similar mental tasks. Related gestalt laws can conflict, and they have different relative strengths at different times. Grouping laws depend on figure-ground relationship, proximity, similarity, continuity, closure, connectedness, and context [Ehrenfels, 1891]. Laws {gestalt law} {grouping rule} {Gestalt grouping rule} can replace less-organized patterns with emphasized, complete, or adequate patterns. Gestalt laws are minimizations. Gestalt laws are assumptions about which visual-field parts are most likely to belong to which object.
Perception must separate object {figure, Gestalt} from background, using Gestalt laws [Ehrenfels, 1891]. Regions with one color are figures. Many-colored regions are ground. Smaller region is figure, and nearby larger region is ground.
Edges separate figure and ground. Lateral inhibition distinguishes and sharpens boundaries.
Both figure and ground are homogeneous regions. Surfaces recruit neighboring similar surfaces to expand homogeneous regions by wave entrainment.
Vision separates figure and ground by detecting edges and increasing homogeneous regions, using constraint satisfaction [Crane, 1992].
Perception must separate object figure from background {ground, Gestalt}, using Gestalt laws [Ehrenfels, 1891].
Vision finds simplest possible percept, which has internal consistency and regularity {pragnans} [Ehrenfels, 1891].
Vision tends to perceive incomplete or occluded figures as wholes {closure law} {law of closure}. Closed contour indicates figure [Ehrenfels, 1891].
Vision groups features doing same thing {common fate}, such as moving in same direction or moving away from point [Ehrenfels, 1891].
Vision groups two features that touch or that happen at same time {connectedness, Gestalt} {law of connectedness} [Ehrenfels, 1891].
Vision tends to perceive enclosed region as figure {enclosedness} {law of enclosedness} {surroundedness, Gestalt}. Surrounded region is figure, and surrounding region is ground [Ehrenfels, 1891].
Vision perceives organization that interrupts fewest lines or that lies on one contour {good continuation} {law of good continuation}. Smooth lines, with no sharp angles, are figure parts. Regions with fewer continuous lines, fewer angles, and fewer angle differences are figures [Ehrenfels, 1891]. For example, the good-continuation law reflects probability that aligned edges belong to same object.
Vision groups two parallel contours {parallelism, Gestalt}. Region parallel contours are figure parts, and non-parallel contours are ground parts [Ehrenfels, 1891]. Parallel contours often arise from surfaces with periodic structure.
Adjacent features are figure parts {proximity, Gestalt} {law of proximity} [Ehrenfels, 1891].
Vision finds image boundaries, to make perceptual regions, by angles, lines, and distances {segregation, Gestalt} {law of segregation, Gestalt} {differentiation, Gestalt} {law of differentiation} [Ehrenfels, 1891].
Similar shape, color, and size parts go together {similarity, Gestalt} {law of similarity} [Ehrenfels, 1891].
Vision groups symmetrical contours {symmetry, Gestalt}. Symmetrical region is figure, and asymmetrical region is ground. Symmetrical closed region is figure [Ehrenfels, 1891].
Vision groups features that change simultaneously {synchrony, Gestalt}, even if features move in different directions and/or at different speeds [Ehrenfels, 1891].
Illusions {illusion} are perceptions that differ from actual metric measurements. Brain uses rules to interpret sense signals, but rules can have contradictions or ambiguities. Vision sees bent lines, shifted lines, different lengths, or different areas, rather than line or area physical properties. Visual illusions are typically depth-perception errors [Frisby, 1979] [Gregory, 1972] [Heydt et al., 1984] [Kanizsa, 1979] [Peterhans and Heydt, 1991].
perception
Illusion, hallucination, and perception sense qualities do not differ. Mind typically does not notice illusions.
neural channels
Illusory edges and surfaces appear, because neural channels differ for movement and position. See Figure 1 and Figure 2.
contrast illusions
Contrast can cause illusions. Adelson illusion has grid of lighter and darker squares, making same-gray squares look different. Craik-O'Brien-Cornsweet illusion has lighter rectangle beside darker rectangle, making contrast enhancement at boundary. Mach bands have boundaries with enhanced contrast. Simultaneous brightness contrast illusions have same-gray squares in white or black backgrounds, looking like different grays. White's illusion has black vertical bars, with same-gray rectangles either behind bars or translucently in front of bars, looking like different grays.
color illusions
Color can cause color-contrast illusions and color and brightness illusions. Assimilation illusions have background effects that group same-color points differently. Fading dot illusion has a green disk with blue dot in center, which fades with continued looking. Munker illusion has blue vertical bars, with same-color rectangles either behind bars or translucently in front of bars, looking like different colors. Neon disk has an asterisk with half-white and half-red bars, which spins. Stroop effect has the word green printed in red and the word red printed in green.
geometric illusions
Geometry causes Ebbinghaus illusion, Müller-Lyer illusion, Ponzo illusion, and Zöllner illusion. Café-wall illusion has a grid of black and white squares whose rows are irregularly offset, making horizontal lines appear tilted. Distorted squares illusion has squares in concentric circles, making tilted lines. Ehrenstein illusion has radial lines with circle below center and square above center, making circle and square lines change alignment. Fraser spiral has concentric circles that look like a spiral against a spiraling background. Men with sunglasses illusion (Akiyoshi Kitaoka) has alternating color-square grid with two alternating vertical or horizontal dots at corners, making vertical and horizontal lines appear tilted. Midorigame or green turtle (Akiyoshi Kitaoka) has a grid with squares slightly tilted in one direction and a center grid with squares slightly tilted in the other direction, making vertical and horizontal lines appear tilted. Poggendorff illusion has two vertical lines with a diagonal line passing behind the space between them, with or without a dotted guide line on one side, making the diagonal segments appear misaligned.
size and depth illusions
Size and depth illusions are Ames room (Adelbert Ames), corridor illusion, impossible staircase (Maurits C. Escher), impossible triangle (Maurits C. Escher), impossible waterfall (Maurits C. Escher), Necker cube, size distortion illusion, and trapezoidal window (Adelbert Ames).
figure illusions
Imagined lines can cause illusions. Illusory circle has a small space between horizontal and vertical lines that do not meet, making a small circle. Illusory triangle has solid figures with cutouts that make angles in needed directions, which appear as corners of triangles with complete sides. Illusory square has solid figures with cutouts that make angles in needed directions, which appear as corners of squares with complete sides.
ambiguous figures
Ambiguous figures are eskimo-little girl seen from back, father-son, rabbit-duck, skull-two dancers, young woman and hag, and vase-goblet.
unstable figures
Figures can have features that randomly appear and disappear. Hermann's grid has horizontal and vertical lines with gaps at intersections, where dark disks appear and disappear. Rotating spiral snakes (Akiyoshi Kitaoka) have spirals, which make faint opposite spirals appear to rotate. Thatcher illusion has smile and eye corners up or down (Peter Thompson).
alternating illusions
Illusions with two forms show perceptual dominance or are bistable illusions. Vase-and-face illusion switches between alternatives.
Hering illusion
Radial rays, with two horizontal lines, make illusions. See Figure 4.
music
Music can cause illusions.
Necker cube
Wire cube at angle makes illusions. See Figure 3.
Ponzo illusion
If railroad tracks and ties lead into distance, and two horizontal bars, even with different colors, are at different distances, farther bar appears longer (Mario Ponzo) [1913]. See Figure 7. See Figure 8 for modified Ponzo illusions. See Figure 9 for split Ponzo illusions. Perhaps, line tilt, rather than depth perception, causes Ponzo illusion.
Rubin vase
Central vase has profiles that are symmetrical faces (Edgar Rubin). See Figure 5.
Zöllner illusion
Vertical lines have equally spaced parallel line segments at 45-degree angles. See Figure 6.
After concentrating on object and then looking at another object, sense qualities opposite to, or shifted away from, original appear {aftereffect}| (contingent aftereffect, CAE) [Blake, 1998] [Blake and Fox, 1974] [Dragoi et al., 2000] [He et al., 1996] [He et al., 1998] [He and MacLeod, 2001] [Koch and Tootell, 1996] [Montaser-Kouhsari et al., 2004].
afterimage
After observing bright light or image with steady gaze, image can persist {afterimage} [Hofstötter et al., 2003]. For one second, afterimage is the same as positive image. Then afterimage has opposite color or brightness {negative afterimage}. Against white ceiling, afterimage appears black. Colored images have complementary-color afterimages. Intensity is the same as image {positive afterimage} if eyes close or if gaze shifts to black background. Afterimage size, shape, brightness, and location can change {figural aftereffect}.
brain
Perhaps, CAEs reflect brain self-calibration. Orientation-specific adaptation is in area V1 or V2.
curves
Aftereffects also appear after prolonged stimulation by curved lines. Distortions associated with converging lines do not change with different brightness or line thickness.
gratings
Horizontal and vertical gratings cause opposite aftereffect {orientation-dependent aftereffect}, even if not perceived.
movement
Background can seem to move after observer stops moving {motion aftereffect, vision}.
stripes
Alternating patterns and prolonged sense stimulation can cause distortions that depend on adapting-field and test-field stripe orientations {contingent perceptual aftereffect}.
theory
Aftereffects appear because sense channels for processing color and orientation overlap {built-in theory} or because separate mechanisms for processing color and orientation overlap during adaptation period {built-up theory}.
tilt
After observing a pattern at an orientation, mind sees vertical lines tilt in opposite direction {tilt aftereffect}.
time
CAEs do not necessarily decay during sleep and can last for days.
Illusions can have two forms. Illusions {bistable illusion} like Necker cube have two forms almost equal in perceptual dominance.
Size, length, and curvature line or edge distortions can make illusions {cafe wall illusion}.
Illusions {Pepper's ghost} {stage ghost} {camera lucida} can depend on brightness differences. Part-reflecting mirrors can superimpose images on objects that people see through glass. Brightening one image while dimming the other makes one appear as the other disappears. If equally illuminated, both images superimpose and are transparent.
Gray patches surrounded by blue are slightly yellow {color contrast effect}. Black is not as black near blue or violet.
Mind can perceive transparency when observing different-color split surfaces {color scission}.
Blue and green appear closer {color stereo effect}. Red appears farther away.
When two objects have interchangeable features, and time or attention is short, mind can switch features to wrong object {conjunction error}.
Experimenter taps sharp pencil five times on wrist, three times on elbow, and two times on upper arm, while subject is not looking {cutaneous rabbit}. It feels like equal steps up arm [Geldard and Sherrick, 1972].
Minds perceive darker objects as heavier than lighter ones {empty suitcase effect}.
Light and dark checkerboards can have light-color dots at central dark-square corners, making curved square sides and curved lines along square edges {flying squirrel illusion}, though lines are really straight (Kitaoka).
Illusory people perceptions {ghost} can be partially transparent and speak.
Radial rays with two horizontal lines can make illusions {Hering illusion}.
Black squares in an array with rows and columns of spaces {Hermann grid} can appear to have gray circles in white spaces where four corners meet.
Lighter areas have apparently greater size than same-size darker areas {irradiation, perception}.
Orientation-specific color aftereffects can appear without perception {McCollough effect}. McCollough effect does not transfer from one eye to the other.
Moon or Sun apparent size varies directly with nearness to horizon {Moon illusion}, until sufficiently above horizon. On horizon, Moon is redder, hazier, lower contrast, and fuzzier edged and has different texture. All these factors affect perceived distance.
elevation
Horizon Moon dominates and elevates scene, but scene seems lower when Moon is higher in sky.
distance
Horizon Moon, blue or black sky, and horizon are apparently at same place. Risen Moon appears in front of black night sky or blue day sky, because it covers blue or black and there is no apparent horizon.
topographic map
Moon illusion and other perspective illusions cause visual-brain topographic image to enlarge or shrink, whereas retinal image is the same.
Illusions can have two forms, and people see mostly one {perceptual dominance}, then other.
In dark, blues seem brighter than reds {Purkinje shift}. In day, reds seem brighter than blues.
Line segments radiating from central imaginary circle {radial lines illusion} make center circle appear brighter. If center circle is black, it looks like background. If center circle has color, it appears brighter and raised {anomalous brightness}. If center circle is gray disk, it appears gray but shimmers {scintillating luster}. If center circle has color and background is black, center circle appears blacker {anomalous darkness}. If center circle has color and gray disk, center circle shimmers gray with complementary color {flashing anomalous color contrast}.
A vertical line segment in a tilted square frame appears to tilt oppositely {rod and frame illusion}, a late-visual-processing pictorial illusion.
If a rectangle is left of midline, with one edge at midline, rectangle appears horizontally shorter, and midline line segment appears to be right of midline {Roelof's effect} {Roelof effect}. If a rectangle is left of midline, with edge nearer midline left of midline, rectangle appears horizontally shorter, and rectangle appears closer to midline.
Central circle with vertical stripes surrounded by annulus with stripes angled to left appears to have stripes tilted to right {simultaneous tilt illusion}, an early visual processing illusion.
If small and large object both have same weight, small object feels heavier in hand than large object {size-weight illusion}. People feel surprise, because larger weight is lighter than expected.
Lighter color contours inside darker color contours spread through interiors {watercolor effect}.
In zero-gravity environments, because eyes shift upward, objects appear to be lower than they actually are {zero-gravity illusion}.
Figures {ambiguous figure}| can have two ways that non-vertical and non-horizontal lines can orient or have two ways to choose background and foreground regions. In constant light, observed ambiguous-figure surface-brightness changes as perception oscillates between figures [Gregory, 1966] [Gregory, 1986] [Gregory, 1987] [Gregory, 1990] [Gregory, 1997] [Seckel, 2000] [Seckel, 2002].
Figures (Jastrow) with duck beaks and rabbit ears make illusions {duck-rabbit illusion}.
Vases with profiles of symmetrical faces (Edgar Rubin) can make illusions {vase and two faces illusion} {Rubin vase}.
Old crone with black hair facing young girl can make illusions {Salem witch and girl illusion}.
Illusions can depend on brightness differences, sound-intensity differences, or line-length and line-spacing differences {Craik-Cornsweet illusion}. Finding differences explains Weber's law and why just noticeable difference increases directly with stimulus magnitude.
People can see logically paradoxical objects {impossible triangle} {impossible staircase}. People can experience paradox perceptually while knowing its solution conceptually. Pictures are essentially paradoxical.
Impossible triangles can make illusions {Kanizsa illusion} {Kanizsa triangle}.
Wire cubes at angles can make illusions {Necker cube}.
Impossible stairs can make illusions {Schroder stairs}.
Minds can combine two features, for example, color and shape, and report perceiving objects that are not in scenes {illusory conjunction} {conjunction, illusory}.
Mind can extend contours to places with no reflectance difference {illusory contour} {contour, illusory}.
Medium-size circle surrounded by smaller circles appears larger than same-size circle surrounded by larger circles {Ebbinghaus illusion} {Titchener circles illusion}, a late-visual-processing pictorial illusion.
Lines with inward-pointing arrowheads and adjacent lines with outward-pointing arrowheads appear to have different lengths {Müller-Lyer illusion}.
If railroad tracks and ties lead into distance, and two horizontal bars, even with different colors, are at different distances, farther bar appears longer (Mario Ponzo) [1913] {Ponzo illusion}. Perhaps, line tilt, rather than depth perception, causes Ponzo illusion.
Vertical lines with equally spaced parallel line segments at 45-degree angles can make illusions {Zöllner illusion}.
In homogeneous backgrounds, a single object appears to move around {autokinetic effect} {keyhole illusion} [Zeki et al., 1993].
If line or spot is moving, and another line or spot flashes at same place, the other seems behind first {flash-lag effect} [Eagleman and Sejnowski, 2000] [Krekelberg and Lappe, 2001] [Nijhawan, 1994] [Nijhawan, 1997] [Schlag and Schlag-Rey, 2002] [Sheth et al., 2000]. Flashed object seems slower than moving object.
Rotating two-dimensional objects makes them appear three-dimensional {kinetic depth effect} [Zeki et al., 1993].
Alternating visual-stimulus pairs show apparent movement at special times and separations {Korte's law} [Zeki et al., 1993].
After continuously observing moving objects, when movement stops, stationary objects appear to move {motion aftereffect, illusion}.
If screen has stationary color spots and has randomly moving complementary-color spots behind them, mind sees stationary spots first, then does not see them, then sees them again, and so on {motion-induced blindness} [Bonneh et al., 2001].
Spokes in turning wheels seem to turn in direction opposite from real motion {wagon-wheel illusion} [Gho and Varela, 1988] [Wertheimer, 1912] [Zeki et al., 1993].
If people view scenes with flows, when they look at stationary scenes, they see flow {waterfall illusion}. A series of still pictures can also induce the waterfall illusion [Cornsweet, 1970].
Multiple sclerosis, neglect, and prosopagnosia can cause vision problems {vision, problems}. Partial or complete color-vision loss makes everything light or dark gray, and even dreams lose color.
Cornea can have different curvature radii at different orientations around visual axis and so be non-spherical {astigmatism}|. Unequal lens curvature can also cause astigmatism.
Vision can turn on and off ten times each second {cinematographic vision} [Sacks, 1970] [Sacks, 1973] [Sacks, 1984] [Sacks, 1995].
Failure to combine or fuse images from both eyes results in double vision {diplopia}.
People can see subjective sparks or light patterns {phosphene}| after deprivation, blows, eyeball pressure, or cortex stimulation.
Genetic condition causes retina degeneration {retinitis pigmentosa}| and affects night vision and peripheral vision.
People with vision in both eyes can lose ability to determine depth by binocular disparity {stereoblindness}.
Extraocular muscles, six for each eye, can fail to synchronize, so one eye converges too much or too little, or one eye turns away from the other {strabismus}|. This can reduce acuity {strabismic amblyopia} {amblyopia}, because image is not on fovea.
Damage to fusiform gyrus, on the inferior occipitotemporal surface, causes scene to have no color and be light and dark gray {achromatopsia} [Hess et al., 1990] [Nordby, 1990].
Cone pigments can differ in frequency range or maximum-sensitivity wavelength {anomalous trichromacy}. Moderately colorblind people can have three photopigments, but two are same type: two different long-wavelength cones {deuteranomalous trichromacy}, which is more common, or two different middle-wavelength cones {protanomalous trichromacy} [Asenjo et al., 1994] [Jameson et al., 2001] [Jordan and Mollon, 1993] [Nathans, 1999].
8% of men cannot distinguish between red and green {color blindness} {colorblind} {red-green colorblindness}, but can see blue. They also cannot see colors that are light or have low saturation. Dichromats have only two cone types. Cone monochromats can lack two cone types and cannot distinguish colors well. Rod monochromats can have no cones, have complete color blindness, see only grays, and have low daylight acuity.
People can have all three cones but have one photopigment that differs from normal {color-anomalous}, so two photopigments are similar to each other. They typically have similar medium-wavelength cones and long-wavelength cones and cannot distinguish reds, oranges, yellows, and greens.
People can lack medium-wavelength cones, but have long-wavelength cones and short-wavelength cones {deuteranope}, and cannot distinguish greens, yellows, oranges, and reds.
People can lack long-wavelength cones, but have medium-wavelength cones and short-wavelength cones {protanope}, and cannot distinguish reds, oranges, yellows, and greens.
People can lack short-wavelength cones, but have medium-wavelength cones and long-wavelength cones {tritanope}, and cannot distinguish blue-greens, blues, and violets.
Brain can have wounded or infected areas {lesion, brain}. If lesion is in right hemisphere, loss is on left visual-field side {contralesional field}, and right visual-field side, on same side as lesion {ipsilesional field}, is spared.
Mediotemporal (MT) damage causes inability to detect motion {akinetopsia}.
Two brain lesions in different places typically cause different defects {double dissociation}.
Lateral-geniculate-nucleus damage causes blindness in half visual field {hemianopia} [Celesia et al., 1991].
Removing both temporal lobes makes monkeys fail to recognize objects {Klüver-Bucy syndrome, lesion}.
Visual-cortex region can have damage {scotoma}|. People do not see black or dark area, but only have no sight [Teuber et al., 1960] [Teuber, 1960].
Visual-nerve damage can cause no or reduced vision in scene regions {visual-field defect}.
People with visual-cortex scotoma can point to and differentiate between fast movements or simple objects but say they cannot see them {blindsight}|. They can perceive shapes, orientations, faces, facial expressions, motions, colors, and event onsets and offsets [Baron-Cohen, 1995] [Cowey and Stoerig, 1991] [Cowey and Stoerig, 1995] [Ffytche et al., 1996] [Holt, 1999] [Kentridge et al., 1997] [Marcel, 1986] [Marcel and Bisiach, 1988] [Marzi, 1999] [Perenin and Rossetti, 1996] [Pöppel et al., 1973] [Rossetti, 1998] [Stoerig and Barth, 2001] [Stoerig et al., 2002] [Weiskrantz, 1986] [Weiskrantz, 1996] [Weiskrantz, 1997] [Wessinger et al., 1997] [Zeki, 1995].
properties: acuity
Visual acuity decreases by two spatial-frequency octaves.
properties: amnesia
Amnesiacs with medial temporal lobe damage can use non-conscious memory.
properties: attention
Events in blind region can alter attention.
properties: color
Color sensitivity is better for red than green.
properties: contrast
Contrast discrimination is less.
properties: dark adaptation
Dark adaptation remains.
properties: face perception
People who cannot see faces can distinguish familiar and unfamiliar faces.
properties: hemianopia
Cortical-hemisphere-damage blindness affects only half visual field.
properties: motion
Complex motion detection is lost. Fast motions, onsets, and offsets can give vague awareness {blindsight type 2}.
People with blindsight can detect movement but not recognize object that moved [Morland, 1999].
properties: perception
Blindsight is not just poor vision sensitivity but has no experience [Weiskrantz, 1997].
properties: reflexes
Vision reflexes still operate.
properties: threshold
Blindsight patients do not have altered thresholds or different criteria about what it means to see [Stoerig and Cowey, 1995].
brain
Blindsight does not require functioning area V1. Vision in intact V1 fields does not cause blindsight [Weiskrantz, 1986]. Brain compensates for visual-cortex damage using midbrain, including superior colliculus, and thalamus visual maps, allowing minimal visual perception but no seeing experience. Right prefrontal cortex has more blood flow. Blindsight uses dorsal pathway and seems different for different visuomotor systems [Milner and Goodale, 1995]. Animals with area V1 damage react differently to same light or no-light stimuli in normal and blindsight regions, with reactions similar to humans, indicating that they have conscious seeing.
senses
People can perceive smells when visual cortex has damage [Weiskrantz, 1997]. People can perceive sounds when visual cortex has damage [Weiskrantz, 1997]. People with parietal lobe damage can use tactile information, though they do not feel touch {numbsense} {blind touch}.
Blindsight patients can be conscious of fast, high-contrast object movements {Riddoch phenomenon}. Retinal output for motion can go to area V5 [Barbur et al., 1993].
If an object moves behind a slit, people can faintly glimpse whole object {anorthoscopic perception}. Object foreshortens along motion direction. People can also recognize an object that they see moving behind a pinhole, because memory and perception work together.
People wearing glasses that make everything appear inverted or rotated {visual distortion} {distortion, vision} soon learn to move around and perform tasks while seeing world upside down. Visual distortion adaptation involves central-nervous-system sense and motor neuron coding changes, not sense-organ or muscle changes. Eye, head, and arm position-sensations change, but retinal-image-position sensations do not change. People do not need to move to adapt to visual distortion.
To try to induce ESP, illumination can be all white or pink with no features, and sound can be white noise {ganzfeld} {autoganzfeld}.
Gratings {grating, vision} have alternating dark bars and light bars. Each visual-angle degree has some bar pairs {cycles per degree}. Gratings have cycles per visual-angle degree {spatial frequency, grating}. Gratings {phase, grating} can have relative visual-image positions. Gratings {sine wave grating} can have luminance variation like sine waves, rather than sharp edges.
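A sine-wave grating with a chosen spatial frequency (cycles per visual-angle degree) can be generated as follows; a minimal sketch, assuming a fixed angular extent and arbitrary sample counts.

```python
import numpy as np

def sine_grating(cycles_per_degree=2.0, degrees=4.0, samples=200, phase=0.0):
    """Luminance image of a vertical sine-wave grating: intensity varies
    sinusoidally (0 to 1) across the horizontal visual angle."""
    angle = np.linspace(0.0, degrees, samples)              # visual angle in degrees
    row = 0.5 + 0.5 * np.sin(2 * np.pi * cycles_per_degree * angle + phase)
    return np.tile(row, (samples, 1))                       # repeat the row vertically

grating = sine_grating()
print(grating.shape)   # (200, 200) array of luminance values between 0 and 1
```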
Figure sets {Mooney figures}, to display at different orientations or inversions, can show ambiguous faces (C. M. Mooney) [1957]. Faces have analytic face features and different configurations, so people typically perceive only half as faces.
Instruments {ophthalmoscope}| can allow viewing retina and optic nerve.
At one location, many different stimuli can quickly appear and disappear {rapid serial visual presentation} (RSVP), typically eight images per second.
Three-dimensional graphs {spectrogram} can show time on horizontal axis, frequency on vertical axis, and intensity as blue-to-red color or lighter to darker gray.
Picture pairs {stereogram}| can have right-eye and left-eye images, for use in stereoscopes. Without stereoscopes, people can use convergence or divergence {free fusion} to resolve stereograms and fuse images.
If people stare at circle center, circle fades {Troxler test} (Ignaz Troxler) [1804].
If eyes see different images, people see first one image and then the other {binocular rivalry}| [Andrews and Purves, 1997] [Andrews et al., 1997] [Blake, 1989] [Blake, 1998] [Blake and Fox, 1974] [Blake and Logothetis, 2002] [Dacey et al., 2003] [de Lima et al., 1990] [Engel and Singer, 2001] [Engel et al., 1999] [Epstein and Kanwisher, 1998] [Fries et al., 1997] [Fries et al., 2001] [Gail et al., 2004] [Gold and Shadlen, 2002] [Kleinschmidt et al., 1998] [Lee and Blake, 1999] [Lehky and Maunsell, 1996] [Lehky and Sejnowski, 1988] [Leopold and Logothetis, 1996] [Leopold and Logothetis, 1999] [Leopold et al., 2002] [Levelt, 1965] [Logothetis, 1998] [Logothetis, 2003] [Logothetis and Schall, 1989] [Logothetis et al., 1996] [Lumer and Rees, 1999] [Lumer et al., 1998] [Macknik and Martinez-Conde, 2004] [Meenan and Miller, 1994] [Murayama et al., 2000] [Myerson et al., 1981] [Parker and Krug, 2003] [Pettigrew and Miller, 1998] [Polonsky et al., 2000] [Ricci and Blundo, 1990] [Sheinberg and Logothetis, 1997] [Tong and Engel, 2001] [Tong et al., 1998] [Wilkins et al., 1987] [Yang et al., 1992]. Vision has disparity detectors [Blakemore and Greenfield, 1987].
In binocular rivalry, vision sees one image {dominant image} with more contrast, higher spatial frequency, and/or more familiarity for more time.
If eyes see different images and a briefly presented stimulus follows one image, that image becomes less intense and people see the other image more {flash suppression} [Kreiman et al., 2002] [Sheinberg and Logothetis, 1997] [Wolfe, 1984] [Wolfe, 1999].
Perhaps, physical and phenomenological are different visual-appearance types {modes of presentation} {presentation modes}, with different principles and properties. However, how can people know that both vision modes are about the same feature or object, or how the modes relate?
Perhaps, motor behavior determines visual perception {motor theory of perception}. However, eye movements do not affect simple visual sense qualities.
Perhaps, visual phenomena require concepts {phenomenal concept}. Phenomenal concepts are sensation types, property types, quality relations, memory indexes, or recognition principles. Phenomenal concepts refer to objects, directly or indirectly, by triggering thought or memory. However, if physical concepts are independent of phenomenal concepts, physical knowledge cannot lead to phenomenal concepts.
Perhaps, in response to stimuli, people have non-physical inner images {sense-datum, image}. Physical objects cause sense data. Sense data are representations. Mind introspects sense data to perceive colors, shapes, and spatial relations. For example, perceived colors are relations between perceivers and sense data and so are mental objects. However, sense data are mental objects, but brain, objects, and neural events are physical, and non-physical inner images cannot reduce to physical events.
Perhaps, coordination among sense and motor systems builds visual information structures {sensorimotor theory of vision}. Sense input and motor output have relations {sensorimotor contingency laws}. Body, head, and eye movements position sensors to gather visual information and remember semantic scene descriptions. Objects have no internal representations, only structural descriptions. Vision is activity, and visual perception depends on coordination between behavior and sensation {enactive perception, Noë} [Noë, 2002] [Noë, 2004] [O'Regan, 1992] [O'Regan and Noë, 2001]. However, perception does not require motor behavior.
Perhaps, different species classify colors differently, because they inhabit different niches {adaptivism}. Perhaps, perceived colors are adaptive relations between objects and color experiences, rather than just categories about physical surfaces. However, experiences are mostly about physical quantities.
Perhaps, perceived color states are relations between perceivers and physical objects {adverbialist theories} {adverbialism} and are neural states, not non-physical mental states. However, experiences do not seem to be relations.
Perhaps, colors are dispositions of physical-object properties to produce visual color states {dispositionalism}. Physical properties dispose perceiver to discriminate and generalize among colors. Colors have no mental qualities. Alternatively, physical-object properties dispose perceivers to experience what it is like to experience color physical properties. Mental qualities allow knowing qualitative similarities among colors. However, experienced colors do not look like dispositions.
Perhaps, perceived colors are representations {intentionalist theories} {intentionalism and vision}, with no qualitative properties. However, afterimages have colors but do not represent physical objects.
Perhaps, colors have mental qualitative properties {mental color}. Mental colors are what it is like for perceivers to have color consciousness. However, mental colors can have no outside physical basis, whereas experienced colors correlate with physical quantities.
Perhaps, colors are objective non-relational physical-object properties and are describable in physical terms {physicalism, color}. For example, physical colors are surface-reflectance ratios. Object surface color remains almost constant during brightness and spectrum changes, because surface reflectances stay constant. Because objects with different surface reflectances can cause same color, physical colors are disjunctions of surface reflectances. However, experience does not provide information about surface reflectances or other physical properties.
Perhaps, perceived colors are physical-object properties or brain states experienced in space {projectivist theories} {projectivism, vision}. However, mental locations are not physical locations. Mental properties cannot be physical properties, because mental states differ from objects.
Perhaps, vision can compare blue, red, and green surface-reflectance ratios between image segments to determine color {retinex theory}. Background brightness is ratio average. Surface neutral colors depend on blue, red, and green reflectance ratios [Land, 1977]. However, vision does not use local or global brightness or reflectance averages.
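A minimal sketch of the ratio idea only, not Land's full algorithm; the segment reflectance values and band averages are illustrative assumptions:

# Minimal sketch of the ratio idea behind retinex (not Land's full algorithm).
# Assumptions: each image segment is summarized by its average reflected
# energy in long-, middle-, and short-wavelength bands; values are illustrative.
import numpy as np

def band_ratios(segment, neighbor):
    """Per-band ratios between a segment and a neighboring segment."""
    return np.asarray(segment, float) / np.asarray(neighbor, float)

patch = [0.60, 0.30, 0.10]       # red, green, blue reflected energy of a patch
surround = [0.30, 0.30, 0.30]    # neutral gray surround

print(band_ratios(patch, surround))   # ratios > 1 look reddish relative to surround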
Three coordinates can define all colors that humans can perceive {trichromacy} {trichromatic vision} {trichromatic theory of color vision} {Young-Helmholtz theory}. Humans have three photopigments in three different cone cells that provide the three coordinates. Among mammals, trichromatic vision occurs mainly in Old World monkeys, apes, and humans.
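A minimal sketch of how a full light spectrum reduces to three coordinates; the Gaussian cone-sensitivity curves and the test spectrum are rough illustrative assumptions, not measured cone fundamentals:

# Minimal sketch of trichromacy: a full spectrum reduces to three coordinates.
# Assumptions: Gaussian cone sensitivities peaking near 560, 530, and 420 nm
# stand in for real cone fundamentals; the light spectrum is illustrative.
import numpy as np

wavelengths = np.arange(400, 701, 5)                      # nm, 5 nm bins

def cone(peak, width=40.0):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

L, M, S = cone(560), cone(530), cone(420)                 # long, middle, short cones

light = np.exp(-0.5 * ((wavelengths - 600) / 30.0) ** 2)  # a reddish-looking light

# Rectangle-rule integral of spectrum times sensitivity, 5 nm bins.
lms = [float((sens * light).sum() * 5.0) for sens in (L, M, S)]
print(lms)   # three numbers: the only color information the visual system keeps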
Sense qualities {conscious experience} {phenomenal character} {phenomenal experience} {phenomenal property} {phenomenally conscious mental state} {phenomenological property} {qualitative character} {qualitative state} {raw feel} {sense quality} {sensory quality} {subjective quality} can be what something is like to observer, rather than what it physically is. Qualia are ways things seem when awake, dreaming, or hallucinating.
comparisons
Experience differs from awareness because it has meaning. Sensations of reality, illusions, and hallucinations are similar. Experience differs from perception because it requires awareness. People can know that they are having experience and can know its type. However, phenomena then are about perception rather than object.
types
Sensations are colors, sounds, touches, temperatures, smells, and tastes. Sensations track feature and object positions, momenta, energies, and times. Sensations correspond to physical intensities, frequencies, materials, and other properties. Tastes are liquid-like. Smells are gaseous-like. Touches are surface contours and motions. Sounds are vibratory. Sights are surfaces.
People hear sounds, which have loudness intensity and tone frequency. People can hear thousands of tones. Sounds have harmonics, with fundamentals and overtones.
People smell air molecules, based on molecule shapes, sizes, rotations, and vibrations, at different intensities. People can smell thousands of smells. Olfaction sense qualities are acrid or vinegary, floral, foul or sulfurous, fruity, minty, ethereal like pear, musky, resinous or camphorous, smoky, and sweet.
People taste molecules dissolved in water, based on molecule polarities and acidities, at different intensities. Gustation sense qualities are saltiness, sourness, sweetness, bitterness, and savoriness.
People feel compression, tension, and torsion pressure, at different intensities. People feel temperature by random molecule motions. People can feel gentle touch, motion, shape, sliding, texture, tickle, vibration, warmth, and coolness.
People can see visible light. People can see millions of hues, including blacks, whites, grays, and browns, with different brightness and saturation.
field
People can be conscious of many events and objects simultaneously. Subject experience has one moving viewpoint, which differs from others' viewpoints. Observation is having sensations. Observed and observer are an observing system. Processing and memory registers are observations, and reader and writer are observers. High-level perception builds scene, perceptual space, or phenomenal world, which is like ovoid including eye, face, periphery, front, and focal point. Unusual body motions can break sense-field coherence [Bayne and Chalmers, 2003] [Cleeremans, 2003].
Perception uses a self-centered egocentric reference frame, which has forward point during motion, receiving point for incoming stimuli, and vestibular-system gravity-aligned vertical axis. Consciousness has world-centered or object-centered allocentric reference frame, which has two horizontal axes and vertical axis.
variation
Verbal reports indicate that most people have similar sensations. Gene alleles, culture, and age can vary experiences. Sense qualities of yellow can change with age. Sensitivity, acuity, precision, accuracy, discrimination, and generalization can vary. Conscious activities change often.
properties
Sensations always have location, size, duration, time, intensity, and phenomenal sense qualities. Phenomena can shift, compress, stretch, twist, rotate, or flip.
Sensations are continuous, with no discontinuities, no gaps, and no units [VanRullen and Koch, 2003]. Inputs from small and large regions, and short and long times, integrate to make continuity [Dainton, 2000].
Sensations are immediate, and so not affected by activity, reasoning, or will [Botero, 1999].
Sensations are incorrigible, and so not correctable or improvable by activity, reasoning, or will.
Sensations are ineffable, with no description except their own existence.
Sensations are intrinsic, with no dependence on external processes [Harman, 1990].
Sensations are private, and so not available for others' observation or measurement.
Sensations are privileged, and so not possible to observe except from first-person viewpoint [Alston, 1971].
Sensations are subjective, and so intrinsic, private, privileged, and not objective [Kriegel, 2005] [Nagel, 1979] [Tye, 1986]. Subject experience belongs only to subject. No one else can have that experience or know it. Physical objects, such as stars, have no owner or have other owners, such as cars.
Perhaps, phenomena belong to mental state rather than to subject.
Sensations are transparent, with no intermediates [Kind, 2003].
Sensations are analytic, and so, like sounds, independent with no mixing.
Sensations are synthetic, and so, like colors, dependent with mixing.
Sensations are not physical.
Sensations have no mass but have a type of density.
Subjective experiences seem not to be ignorable and have self-intimation.
Sensations always feel indubitable.
Sensations seem unerring and infallible.
Sensations always feel irrevocable.
Sensations are not about microscopic things but about macroscopic regions.
Sensations are not relational and not comparable.
Sensations are the only thing that has meaning, because brain uses them for reference. However, sensations do not always have meaning.
Subject experience is not observable by others and so is personal and not directly communicable, because it has no units with which to measure.
non-locality
Physical events happen locally and instantaneously. Mental relations characteristically relate two or more physically separated points, within one psychologically simultaneous time interval, and so are non-local. Mentality requires time to gather information from separated locations to integrate them. Mentality requires space to gather information from separated times, memories and current perceptions, to integrate them. Perceptions unify local sense processing about features, objects, and events. Mentality unifies separate things into structures or processes.
surface property
Sensations are about surfaces from which information began, not about information carrier to sense organ. Intensity energies carry surface information to sense organs but have no sense qualities. Information channels cannot have sense qualities. For example, electromagnetic radiation has no color. Sound waves have no sound.
Only surfaces can have qualities. Color is not about waves traveling through space but is about surface from which waves emanated. Sound is not about waves traveling through medium but is about surface from which waves emanated.
Visual sense qualities are about surface sizes and reflectances. Aural sense qualities are about surface vibration intensities and frequencies. Touch sense qualities are about surface torsion, compression, hardness, and texture. Taste and smell sense qualities are about surface molecular configurations.
Experience is of objects and events, which people can invent or extend. Cognition, category making, distinction finding, and memory are consciousness foundations [Seager, 1999]. Sensations are about objects, events, and features, which cognition later interprets.
brain
Perhaps, sensations are brain events. However, experiences do not seem to be in brain or be like brain. Brain produces perceptions internally but perceives sensations externally, at spatial positions on surfaces. Consciousness itself does not provide knowledge of things external to mind, only of internal mental things [Seager, 1999].
Perhaps, external references are to object and event concepts or properties, rather than to external objects and events.
Sensations can come from inside and outside body. When thinking, people talk to themselves and hear same sounds as if really talking.
Perhaps, sensations are judgments or dispositions to do something about perceptions.
Animal behaviors make it appear that only humans have experiences.
nature
Perhaps, sense phenomena are physical-object qualities. Identical objects then have same phenomena. However, same person can have different phenomena about same object, and different people can have different phenomena about same object.
Perhaps, sense phenomena are experience or object physical properties. However, experience does not provide access to surface-reflectance relations, other physical properties, or experience relations.
Mind and mental states use thoughts, perceptions, emotions, and moods {propositional attitude, phenomena}, which associate phenomenon with representation or intentional content.
Perhaps, sense phenomena are relations to external or internal objects. However, experience seems to be about object features, not about relations.
Sensations are meaningful because they represent something outside mind [Cummins, 1989] [Cummins, 1996] [Darling, 1993] [Papineau, 1987] [Perner, 1993]. Sensations represent physical data only to level useful for acting quickly and correctly in most situations. However, sensations can be different phenomenon, such as inverted spectrum, though intentional content does not vary. Sensations can be the same, by automatic sense processing, though high-level representations differ, such as Inverted Earth. Experiences, such as feeling depressed, can have no representations.
Perhaps, when representation becomes explicit, it is conscious. Implicit representations are not conscious, though implicit activity can become explicit [Adolphs et al., 1999] [Zeki, 2001].
categorization
Sense processes categorize sensations, breaking continuous values into ranges, such as different colors with different brightnesses. Among senses, ability to categorize depends on pairwise comparisons between multisensory neurons. Within sense, ability to categorize depends on pairwise comparisons between sense neurons [Donald, 1991].
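A minimal sketch of breaking one continuous quality value into category ranges; the hue boundaries are illustrative assumptions, not measured perceptual category boundaries:

# Minimal sketch: break a continuous quality value (hue angle in degrees)
# into discrete categories. The category boundaries are illustrative, not
# measured perceptual boundaries.
def hue_category(hue_degrees):
    hue = hue_degrees % 360
    bounds = [(30, "red"), (90, "yellow"), (150, "green"),
              (270, "blue"), (330, "purple"), (360, "red")]
    for upper, name in bounds:
        if hue < upper:
            return name
    return "red"

print([hue_category(h) for h in (10, 60, 120, 200, 300, 350)])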
media
Like television, brain receives coded information and translates code into visual array. However, sensations have no substrate or medium to carry them. They are not physical and do not need substrates. They are their own medium.
Having experience is not like looking at holograms, printed pages, or television displays. Those displays have boundaries, whereas sensations have no definite boundary. Those displays cover only some visual field, whereas sensations cover all space. Those displays have controls for adjusting display color, brightness, and contrast, but people cannot will sense-quality changes. Those displays often have distortions or false colors, but sensations are consistent and complete. However, they can distort if people take drugs. Size, flatness, and errors can distinguish displays from real world, but sensations are not distinguishable from real world, because people's memories depend on same abilities. Observers can look away from television displays but cannot look away from sensations. However, observers can look at different sense-quality parts, just as people watching television can look around.
memory
Sensations summarize and categorize whole-field and full-spectrum processing results to compress information for storage and recall. Previous experiences affect later experiences, automatically. Repeating similar experiences changes experience.
labeled lines
Sense organs make same sensations no matter which physical energy strikes them. For example, tapping eye causes light flashes. Receptor stimulation and brain-region stimulation cause same response type.
time
Consciousness requires time to integrate. Time is short enough to be psychologically simultaneous and long enough to integrate locations and parts. Psychologically simultaneous events are within 20-millisecond to 50-millisecond intervals. Features, objects, events, and scenes integrated during this interval automatically associate in space and time.
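A minimal sketch of grouping event times into psychologically simultaneous windows; only the 20-millisecond to 50-millisecond window width comes from the text above, and the gap-based grouping rule is an illustrative assumption:

# Minimal sketch: treat events as psychologically simultaneous when they fall
# within one integration window. The gap-based grouping rule is an illustrative
# assumption; only the 20-50 ms window width comes from the text.
def group_simultaneous(event_times_ms, window_ms=30.0):
    """Group sorted event times (ms) so that each group spans <= window_ms."""
    groups, current = [], []
    for t in sorted(event_times_ms):
        if current and t - current[0] > window_ms:
            groups.append(current)
            current = []
        current.append(t)
    if current:
        groups.append(current)
    return groups

print(group_simultaneous([0, 12, 25, 70, 81, 140]))
# -> [[0, 12, 25], [70, 81], [140]]: each sublist is experienced as one moment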
information
All senses require large information amounts.
passivity
Sense qualities require awake or dreaming brain processing but seem not to need conscious effort or will.
emotions
Colors, sounds, touches, smells, and tastes can convey emotion, such as anger, disgust, fear, happiness, sadness, surprise, and remembrance. Most sensations have no associated emotion. Sensations can attract or repel, so people like or dislike them. People can feel doubt or confidence in statements. Feeling level goes from pleasure to pain. Success level goes from reward to punishment.
Imagine one can look inside brain while it is working {autocerebroscope}, to see all physical activity. How can this activity make many different phenomena?
People can attend to intrinsic, non-intentional experience features, sensations, or sense qualities {qualia}| {quale}.
People have mind and experiences {sentience}|. Sentience requires sensation, perception, awareness, mind, and experience. Sentience is state, not process, and requires no thoughts. Perhaps, only humans are sentient.
Consciousness can experience objects without knowing what they are, only that they are something {aspect nature}.
Conscious or dreaming people having above-threshold stimuli are aware of stimulus energy flow, density, pressure, flux, or amplitude {perceptual intensity}. For example, vision has brightness, and hearing has loudness. Conscious or dreaming people having below-threshold stimuli do not experience intensity. Unconscious people have no intensity awareness. For vision, intensity ranges are specular reflection, brilliant white, white, light gray, gray, dark gray, and black. For sound, ranges are whisper, normal, and intense. For touch, ranges are tickle, light pressure, touch, push, and pain. For taste, ranges are hint, full, and intense. For smell, ranges are whiff, signal, light, definite, strong, and pain.
properties
Intensity comes from surfaces. Intensity is about energy flow, not space or time, but has space and time locations. Sense-receptor membrane depolarization measures intensity, and neuron axon-impulse rate measures intensity. Perceptual intensity depends on stimulus intensity, nearby intensities, memories, and expectations, so intensity is relative. Perceptions do not have actual energy. Intensity has just-noticeable, dull, average, acute, and painful levels. Smallest intensity results from several energy quanta. Intensity is continuous, not continual or discrete. Intensity typically changes, flickers, or fades. Intensity has contrasts.
quality
People do not experience pure intensity. For perceived surface points, perceptual processing integrates remembered and current information about physical-stimulus intensity level and energy type, such as light, into non-physical quality, such as phenomenal bright red, pale yellow, or dark brown. Perceptual intensity and quality unite.
Conscious or dreaming people, having above-threshold stimuli, perceive intensity types {perceptual quality}. Conscious or dreaming people having below-threshold stimuli are not aware of qualities. Unconscious people have no awareness of quality. Perhaps, only mammals experience sense qualities.
types
Hearing can detect formant sound frequency bands. Vision can detect color bands: black, gray, white, red, green, blue, yellow, pink, brown, purple/violet, orange, and indigo/ultramarine. Smell can detect airborne molecule types: esters, ketones, aldehydes, sulfur compounds, aromatics, and alcohols. Taste can detect water-dissolved molecule types: salts, acids, bases, glutamate, and sugars. Touch can detect pressure types: tickle, tingle, pain, and pleasure. Touch can detect temperature types: warmth and coolness.
categorization
Sense qualities have quality spectra and overlapping categories. Sense categories form continuous ranges, with categories similar to and opposite from other categories.
properties
Qualities are like coded and compressed intensity-frequency spectra. Qualities are on space surfaces. Qualities are continuous, not discrete. Qualities are not about space, time, or energy, but have space and time locations. Whole image determines sense qualities.
intensity
People do not experience pure quality. Only quality has intensity. Quality categories have intensity.
meaning
Sense qualities are the only things that have meaning.
Conscious or dreaming people are aware of seemingly stationary infinite three-dimensional space {perceptual space} {theater of the mind} {subjective space} {sensory field} {visual field} in and around body, bounded by surfaces near and far. Conscious or dreaming people having below-threshold stimuli are still aware of space. Unconscious people have no awareness of space. Smallest space interval is about one second of arc.
properties
Sensations always are at three-dimensional-space locations, with directions and distances. Three planes define space outside head: horizontal at ground, vertical pointing straight-ahead, and vertical and parallel to face one meter away. People are aware only of three-dimensional space, not zero-dimensional, one-dimensional, two-dimensional, four-dimensional, or higher-dimensional space. Space is about distance intervals appropriate to body actions, microns to centimeters, not about electrochemical and physical processes taking place at molecular distances. Space does not seem to stretch evenly but can compact and expand. Objects can seem to have longer or shorter extensions depending on nearby-object sizes and orientations. Space does not change, flicker, or fade. Space seems continuous, not discrete. Space has no intensity, density, energy, or mass.
field
People experience sense qualities at different distances. People feel that scenes extend to regions with no sense qualities, such as behind head.
meaning
Space is necessary for meaning, because it provides reference locations.
processing
To construct space, brain processing first constructs body-centered two-dimensional space, then body-centered two-and-a-half dimensional space, which transform during body motions and do not have symbol grounding or sensations.
Three-dimensional space is stationary. Body, head, and eye movements change observer perspective, making different viewpoints, and transform egocentric space coordinates, using mostly translational and vibrational transformations. Sense processing transforms egocentric space coordinates, using mostly rotation transformations, to maintain stationary allocentric space, so geometric coordinate transformations maintain spatial relations during eye, head, or body movements. Sense-processing tensors compensate for body movements that change egocentric space, and coordinate transformations create and maintain allocentric stationary space [Olson et al., 1999] [Pouget and Sejnowski, 1997].
Space uses absolute or relative body-centric and environment-centric coordinates, which are transformed during body movements.
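A minimal sketch of the compensation idea: undoing the rotation that a head turn applies to egocentric coordinates recovers stationary allocentric coordinates. The two-dimensional case, the 30-degree turn, and the target location are illustrative assumptions:

# Minimal sketch: keep allocentric coordinates stationary by undoing the
# rotation that a head turn applies to egocentric coordinates. The 2D case,
# the 30-degree turn, and the target location are illustrative assumptions.
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

allocentric_target = np.array([2.0, 1.0])      # fixed point in the world
head_turn = np.deg2rad(30)                     # observer turns the head 30 degrees

egocentric = rotation(-head_turn) @ allocentric_target   # how the point now appears
recovered = rotation(head_turn) @ egocentric             # inverse transform restores it

print(np.allclose(recovered, allocentric_target))        # True: space stays stationary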
multisensory
All senses seem to share same perceptual space. Cortical vision processing makes three-dimensional perceptual space. Temporal-and-parietal-lobe sound processing makes three-dimensional perceptual space. Hippocampus memory processing makes three-dimensional memory space. Cerebellum sensory-motor processing makes three-dimensional sensory-motor space. Frontal lobe and association cortex merge sensory, memory, and motor spaces to make unified perceptual space.
observer
People feel that they are behind sensory apparatus, observing outward. Observer or self seems to be at three-dimensional-space center.
Conscious or dreaming people are aware of seemingly infinite one-dimensional time {perceptual time}. Conscious or dreaming people having below-threshold stimuli are still aware of time. Unconscious people have no awareness of time. Shortest sensations last one millisecond.
properties
People are aware of one-dimensional time, not zero-dimensional time, two-dimensional time, or higher-dimensional time. Time information must be in real time, so brain does not lose information because processing is too slow, and brain does not need to add information because processing is too fast. Time does not change, flicker, or fade. Time seems continuous, not discrete. Time has past and future, before and after. Time has no intensity or space location.
People experience time flow, which seems faster with more events each second and slower with fewer events each second. Felt time-flow rate differs from brain-processing time-flow rate [Dennett and Kinsbourne, 1992] [Held et al., 1978] [Flaherty, 1999] [Pastor and Artieda, 1996] [Pöppel, 1978] [Pöppel, 1997]. Sense qualities are about time intervals appropriate to body actions, time scale of 20 milliseconds to hours. Sense qualities are not about electrochemical and physical processes at millisecond time intervals nor instantaneous events [Clifford et al., 2003] [Elman, 1990] [Price, 1996].
meaning
Time is necessary for meaning, because it provides references to past, present, and future.
delays
Time consciousness requires time delay. Time delay can use extra loop, temporary store, shuttle, stretch or shrink mechanism, or chemical delays. Circuits can have bypass circuits to adjust time. Main circuit can have inhibition while processing in bypass. Bypass can remove inhibition or overcome it.
multisensory
All senses seem to share same time.
observer
Observer or self seems to be at one-dimensional-time center. Self seems to be observing events in the present, looking backward to memories, and looking forward in imagination. Events circumscribe observer in time, forming envelope around observation point [Sellars, 1963].
Sensations last at least minimum time {minimal perceptual moment}. Perhaps, activation builds until it reaches threshold. Perhaps, positive feedback causes response spiking.
In dangerous situations, people experience shorter moments and decreased time flow {protracted duration}, because they experience more moments per second.
Conscious time seems to cover interval of 1 to 3 seconds {specious present}. Brain processes inputs from many sources, taking time intervals to integrate. Information overlaps over time.
After neurosurgery, memory time markers can move backward in time {backwards referral in time} {subjective referral} {subjective antedating} [Libet, 1993] [Libet et al., 1999].
Consciousness requires minimum stimulation time {Libet's delay} {time-on theory} of 0.5 seconds, no matter what the intensity, to reach neuronal adequacy [Eccles, 1965] [Iggo, 1973] [Koch, 1999] [Libet, 1966] [Libet, 1973] [Libet, 1993] [Libet et al., 1999] [Meador et al., 2000] [Ray et al., 1999].
Consciousness requires minimum stimulation time of 0.5 seconds {neuronal adequacy}, no matter what the intensity [Eccles, 1965] [Iggo, 1973] [Koch, 1999] [Libet, 1966] [Libet, 1973] [Libet, 1993] [Libet et al., 1999] [Meador et al., 2000] [Ray et al., 1999].
Mental faculty {common faculty} {common sense, sensation} compares and associates shapes, sizes, and motions from all senses [Bayne and Chalmers, 2003] [Cleeremans, 2003].
Subject observers can have sensations {observation} of objects observed. Sensations are like reports in parallel. People feel that they are behind sensory apparatus, observing outward. Observations are in three-dimensional space and one-dimensional time. Self seems to be observing events in the present, looking backward to memories, and looking forward in imagination. Events circumscribe observer in time, forming envelope around observation point [Sellars, 1963].
Stimuli can have intensity too low or duration too short for conscious awareness, but information affects behavior {preconscious processing}. EEG and brain blood flows indicate that sense regions, motor regions, association areas, emotion areas, and memory areas are active during unconscious processing.
If attentional load is high, people can be unaware of non-attended stimuli, but information affects behavior. Anesthetized patients can remember and process information, so unconscious processing can affect conscious perceptions. Brain-damaged patients can remember and process information, so unconscious processing can affect conscious perceptions.
Self knows about past, present, and future and can distinguish imagination, memory, and reality {reality monitoring} {reality discrimination} [Sellars, 1963]. People typically can discriminate between what they imagine and what they receive from environment or body [Johnson and Raye, 1981].
Consciousness involves presentation to self {self-presentation} of quality type {cognitive quality}.
Stimuli have three intensity ranges, relative to objective and subjective thresholds, that affect same brain regions differently.
objective threshold
Intensity below threshold level {objective threshold, experience} is too low for perception.
perception
Intensity above objective threshold causes non-conscious perception. If stimulus intensity level is above objective threshold but below subjective threshold, stimulus does not become conscious but can influence preferences for same or associated stimuli [Kunst-Wilson and Zajonc, 1980] [Murphy and Zajonc, 1993].
subjective threshold
At higher intensity level {subjective threshold, experience}, people begin to detect sensations. For all senses, consciousness requires intensity level higher than intensity level needed for brain to detect and use stimuli [Dehaene et al., 1998] [Morris et al., 1998] [Morris et al., 1999] [Whalen et al., 1998].
accumulation
Perhaps, activation must build to pass subjective threshold. Building counters dissipative and inhibitory processes and has positive feedback and signal recursion.
feedback
Perhaps, positive feedback must cause response spiking to pass subjective threshold. After spiking, activity falls, but sensations can linger [Cheesman and Merikle, 1984] [Kihlstrom, 1996].
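A minimal sketch of accumulation to a subjective threshold with leak and post-crossing positive feedback; all parameters are illustrative, not values from the cited studies:

# Minimal sketch: activation accumulates toward a subjective threshold against
# leak; crossing the threshold triggers self-reinforcing (positive-feedback)
# growth. All parameters are illustrative, not values from the cited studies.
def accumulate(stimulus, threshold=1.0, gain=0.2, leak=0.05, feedback=0.3, steps=50):
    activation, crossed_at = 0.0, None
    for step in range(steps):
        activation += gain * stimulus - leak * activation
        if crossed_at is not None:
            activation += feedback * activation      # positive feedback after crossing
        elif activation >= threshold:
            crossed_at = step
    return crossed_at

print(accumulate(0.4))    # strong stimulus: crosses the threshold after some steps
print(accumulate(0.05))   # weak stimulus: never crosses; stays non-conscious (None)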
Subjective experiences require relation {symbol grounding, experience} between internal thing or event and external thing or event. External things or events are physical memories or environmental effects. Internal things or events are sensations. Symbol grounding includes both perceptions and mental experiences.
symbol
Symbols are perceptions that label, index, or refer to perceptions or concepts. Both symbol and reference perceptions are mental representations. Perceptions have relations and form reference system. Nothing is intrinsically symbol, because only relations make symbols. As perceptions, symbols have space, time, intensity, and quality. Most symbols are non-conscious, but symbols, such as colors, can be conscious.
symbol system
Most perceptions are objects that are not in systems. Symbols have added meaning, because they have relations in coding systems. Coding systems use symbol sets and have processing mechanisms that have symbol reading, processing, and writing rules. When symbol appears, typically in a symbol series, coding-system processing mechanism follows rules to use symbol. Results/outputs are symbol meaning. Meaning occurs only in symbol systems.
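A minimal sketch of such a coding system, with a symbol set and reading, processing, and writing rules; the symbols and rules are invented for illustration, and a symbol's meaning is just the output the rules produce:

# Minimal sketch of a coding system: a symbol set plus reading, processing,
# and writing rules. The symbols and rules here are invented for illustration;
# a symbol's "meaning" is just the output that the rules produce for it.
RULES = {
    "RED":   lambda state: state + ["stop"],
    "GREEN": lambda state: state + ["go"],
    "AMBER": lambda state: state + ["wait"],
}

def run(symbol_series):
    state = []                                   # working memory being written to
    for symbol in symbol_series:                 # reading rule: take symbols in order
        if symbol in RULES:                      # only symbols in the set are processed
            state = RULES[symbol](state)         # processing rule for that symbol
    return state                                 # written output = the series' meaning

print(run(["GREEN", "AMBER", "RED"]))            # ['go', 'wait', 'stop']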
environment
Perhaps, isolated systems cannot have subjective experiences. Perhaps, systems must learn, have memory, or interact with environment. Learning can supply outside information. Memory can supply secondary information sources. Environment can provide intention references [Harnad, 1990] [McGinn, 1987] [McGinn, 1989] [McGinn, 1991] [McGinn, 1999] [Velmans, 1996] [Velmans, 2000]. For example, computer programs on installation CDs do not interact with other information {isolation, system}. They cannot run, receive input, or produce output. Installing programs on computers allows programs to receive environment input, so they can establish references to real things [Chalmers, 1996] [Chalmers, 2000].
People can have different awake-consciousness levels {alertness}|. Alertness can be high, normal, or low. Physiological factors, such as hormones, stimulation level, novelty, nutrient levels, sleepiness, diseases, and moods, set alertness level. All mammals have alertness levels.
Experience seems to happen immediately or in one step {immediacy}. Activity, reasoning, or will does not affect phenomenal-experience generation [Botero, 1999]. People cannot be aware of brain processing. Sensations are after processing. Sensations appear and do not change. Processing does not continue, and quality does not become more refined. (Quality can change with new information.) Perhaps, quality reaches optimum, then equilibrium holds. Perhaps, brain modifies processing to trick consciousness.
Activity, reasoning, or will cannot correct or improve sensations {incorrigibility}| [Seager, 1999]. People cannot be aware of brain processing. Sensations are after processing. Sensations appear and do not change. Processing does not continue, and quality does not become more refined. (Quality can change with new information.) Sensations can misidentify. Sensations can misremember.
Sensations are complex and can have no description except their own existence {ineffability}|. Nothing can substitute for experience. Knowledge about experience requires having the experience [Harman, 1990]. However, language can describe sense-quality properties.
Subjects are integrated sets of sensations, which depend only on internal processes. Experience is a property, state, process, or essence of subjects {intrinsicness}. Experience depends on subject structures and functions. Alternatively, subject can have experiences [Seager, 1999]. Experience does not need screen or external aid. Experience does not depend on external things or events.
Perhaps, consciousness requires ineffability, intrinsicness, immediacy, and privateness {3I-P} {minimal properties of consciousness}.
Object properties seem to belong to object and thus associate {object unity}. For example, object can be red and spherical. Subject perceives red spherical object, not red object and spherical object with a relation. There is only one object, not two objects. Phenomena link in objects [Seager, 1999].
All experiences, including thoughts, moods, and emotions, at one time associate {phenomenal unity}. For example, sight and sound perceived at nearby locations associate. Brain processing adds links that unify them [Seager, 1999].
Sensations are only available to subject, and direct observation from outside cannot measure them {privateness}. No one else can have the same experience [Seager, 1999].
Only subjects, with first-person viewpoints, can have sensations {privileged access} [Alston, 1971] [Gertler, 2003]. Comparing reactions to experiences and subjective-knowledge reports can result in objective knowledge.
Sensations depend on whole scene or image and fill all of space and time {space-filling}, leaving no gaps or overlaps.
Consciousness happens in people for that person only {subjective character}. Experience is phenomenon to subject, and no other subject can have that experience [Davidson, 2001] [Georgalis, 2005] [Kriegel, 2005] [Nagel, 1979] [Shoemaker, 1996] [Tye, 1986].
People do not perceive experience qualities but only object properties and qualities {transparency, consciousness}. Experiences do not themselves have knowable phenomena or properties [Kind, 2003] [Loar, 2002]. After experiencing object properties, people are only aware that they are having experiences of phenomenal character. Subject can perceive no intermediate to experiences, which are immediately available. Hallucinations are only about object properties.
Hearing does not mix tones {analytic sense}. Analytic senses analyze signals from source into independent elements. Touch, smell, and taste are both synthetic and analytic.
Vision mixes colors {synthetic sense}. Synthetic senses mix signals from source to synthesize resultant sensation. Touch, smell, and taste are both synthetic and analytic.