Brain can perceive motion {motion perception} {motion detector}. Motion analysis is independent of other visual analyses.
properties: adaptation
Motion detector neurons adapt quickly.
properties: direction
Most cortical motion-detector neurons detect motion direction.
properties: distance
Most cortical motion-detector neurons are for specific distances.
properties: fatigue
Motion-detector neurons can fatigue.
properties: location
Most cortical motion-detector neurons are for specific visual-field locations.
properties: object size
Most cortical motion-detector neurons are for specific object spot or line size. To detect larger or smaller objects, motion-detector neurons have larger or smaller receptive fields.
properties: rotation
Detecting rotation requires asymmetry, such as dot or shape mark, to distinguish right from left. For symmetric rotating object, one side appears to go backward while other side goes forward, so opposite motion signals cancel and whole thing appears to stand still.
properties: speed
Most cortical motion-detector neurons detect motion speed.
processes: brain
Area-V5 neurons detect different speed motions in different directions at different distances and locations for different object spot or line sizes. Motion detectors are for one direction, object size, distance, and speed relative to background. Other neurons detect expansion, contraction, and right or left rotation [Thier et al., 1999].
processes: frame
Spot motion from one place to another is like appearance at one location and then appearance at another location. Spot must excite motion-detector neuron for that direction and distance.
processes: opposite motions
Motion detectors interact, so motion inhibits opposed motion, making motion contrasts. For example, motion in one direction excites motion detectors for that direction and inhibits motion detectors for opposite direction.
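The opponent interaction can be sketched as a simple difference of detector outputs. This is an illustrative toy, not a model from the text; the function name and response values are assumptions.

```python
# Toy opponent stage: the net motion signal along an axis is the
# rightward detector response minus the leftward detector response,
# so activity in one direction suppresses the opposite direction.
def opponent_signal(rightward, leftward):
    return rightward - leftward

# Rightward motion: strong rightward drive, weak leftward drive.
print(opponent_signal(0.75, 0.25))   # → 0.5
# Leftward motion reverses the sign of the net signal.
print(opponent_signal(0.25, 0.75))   # → -0.5
```

The subtraction is what produces motion contrast: a strong signal in one direction pushes the opposite direction's report below baseline.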
processes: retina image speed
Retinal radial-image speed relates to object distance.
processes: timing
Motion-detector-neuron comparison is not simultaneous addition but has delay or hold from first neuron to wait for second excitation. Delay can be long, with many intermediate neurons, far-apart neurons, or slow motion, or short, with one intermediate neuron, close neurons, or fast motion.
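The delay-and-compare scheme described above is the core of a Reichardt-style correlator. A minimal sketch, with assumed signal values: the first location's signal is delayed and multiplied with the second location's current signal, so the product is large only when a spot crosses from the first location to the second with a transit time matching the delay.

```python
# Delay-and-compare motion detector: sum over time of
# (delayed signal at location A) x (current signal at location B).
def correlate(signal_a, signal_b, delay):
    total = 0.0
    for t in range(delay, len(signal_b)):
        total += signal_a[t - delay] * signal_b[t]
    return total

# A spot passes location A at t=2 and location B at t=5 (transit time 3).
a = [0, 0, 1, 0, 0, 0, 0, 0]
b = [0, 0, 0, 0, 0, 1, 0, 0]

print(correlate(a, b, delay=3))  # matched delay and direction → 1.0
print(correlate(b, a, delay=3))  # reversed direction → 0.0
```

A longer delay tunes the same pair of inputs to slower motion; a wider spacing between the inputs tunes it to faster motion at the same delay.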
processes: trajectory
Motion detectors work together to detect trajectory or measure distances, velocities, and accelerations. Higher-level neurons connect motion detection units to detect straight and curved motions (Werner Reichardt). As motion follows trajectory, memory shifts to predict future motions.
Animal species have movement patterns {biological motion}. Distinctive motion patterns, such as falling leaf, pouncing cat, and swooping bat, allow object recognition and future position prediction.
Vision can detect that surface is approaching eye {looming response}. Looming response helps control flying and mating.
For moving objects, eyes keep object on fovea, then fall behind, then jump to put object back on fovea {smooth pursuit}. Smooth pursuit is automatic. People cannot voluntarily perform smooth pursuit without a moving target. Smooth pursuit happens even if people have no sensations of moving objects [Thiele et al., 2002].
Three-month-old infants understand {Theory of Body} that when moving objects hit other objects, other objects move. Later, infants understand {Theory of Mind Mechanism} self-propelled motion and goals. Later, infants understand {Theory of Mind Mechanism-2} how mental states relate to behaviors. Primates can understand that acting on objects moves contacted objects.
Head or body movement causes scene retinal displacement. Nearer objects displace more, and farther objects displace less {motion parallax} {movement parallax}. If eye moves to right while looking straight-ahead, objects appear to move to left. See Figure 1.
Nearer objects move greater visual angle. Farther objects move smaller visual angle and appear almost stationary. See Figure 2.
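The distance dependence follows from simple geometry. A sketch, with assumed numbers: an eye translating sideways by a small step sees a stationary object straight ahead at distance d displaced by roughly atan(step / d) of visual angle, so nearer objects sweep through larger angles.

```python
import math

# Angular displacement of a stationary object when the eye
# translates sideways by `step` metres; `d` is object distance.
def parallax_deg(step, d):
    return math.degrees(math.atan(step / d))

# A 0.1 m sideways head movement: near objects shift through a
# large visual angle, far objects barely move.
for d in (1.0, 5.0, 50.0):
    print(f"distance {d:5.1f} m -> {parallax_deg(0.1, d):6.3f} deg")
```

At 1 m the shift is about 5.7 degrees; at 50 m it is about 0.1 degree, which is why distant objects appear almost stationary.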
movement sequence
Object sequence can change with movement. See Figure 3.
depth
Brain can use geometric information about two different positions at different times to calculate relative object depth. Brain can also use geometric information about two different positions at same time, using both eyes.
While observer is moving, nearer objects seem to move backwards while farther ones move in same direction as observer {monocular movement parallax}.
When viewing moving object through small opening, motion direction can be ambiguous {aperture problem}, because moving spot or two on-off spots can trigger motion detectors. Are both spots in window aperture same object? Motion detectors solve the problem by finding shortest-distance motion.
When people see objects, first at one location, then very short time later at another location, and do not see object anywhere between locations, first object seems to move smoothly to where second object appears {apparent motion}.
Moving spot triggers motion detectors for two locations.
two locations and spot
How does brain associate two locations with one spot {correspondence problem, motion}? Brain follows spot from one location to next unambiguously. Tracking moving objects requires remembering earlier features and matching with current features. Vision can try all possible matches and, through successive iterations, find matches that yield minimum total distance between presentations.
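The minimum-total-distance matching can be sketched by brute force over all pairings. The frame coordinates are assumed for illustration; a real system would iterate rather than enumerate.

```python
from itertools import permutations

# Match spots in frame1 to spots in frame2 by trying every pairing
# and keeping the one with the smallest summed displacement.
def best_match(frame1, frame2):
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(frame2))):
        cost = sum(dist(frame1[i], frame2[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[i] = index in frame2 matched to spot i in frame1

f1 = [(0, 0), (4, 0)]
f2 = [(1, 0), (5, 0)]
print(best_match(f1, f2))  # each spot pairs with its nearest neighbour → (0, 1)
```

The crossed pairing would cost 5 + 3 = 8 units of motion versus 1 + 1 = 2 for the nearest-neighbour pairing, so the shortest-motion solution wins.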
location and spot
Turning one spot on and off can trigger same motion detector. How does brain associate detector activation at different times with one spot? Brain assumes same location is same object.
processes: three-dimensional space
Motion detectors are for specific locations, distances, object sizes, speeds, and directions. Motion-detector array represents three-dimensional space. Space points have spot-size motion detectors.
processes: speed
Brain action pathway is faster than object-recognition pathway. Brain calculates eye movements faster than voluntary movements.
constraints: continuity constraint
Adjacent points not at edges are at nearly same distance from eye {continuity constraint, vision}.
constraints: uniqueness constraint
Each scene feature lands on only one location in each retina {uniqueness constraint, vision}.
constraints: spatial frequency
Scene features have different left-retina and right-retina positions. Retina can use low resolution, with low spatial frequency, to analyze big regions and then use higher and higher resolutions.
If an image or light spot appears on a screen and then a second image appears 0.06 seconds later at a randomly different location, people perceive motion from first location to second location {phi phenomenon}. If an image or light spot blinks on and off slowly and then a second image appears at a different location, people see motion. If a green spot blinks on and off slowly and then a red spot appears at a different location, people see motion, and dot appears to change color halfway between locations.
Objects {luminance-defined object}, for example bright spots, can contrast in brightness with background. People see luminance-defined objects move by mechanism that differs from texture-defined object-movement mechanism. Luminance-defined objects have defined edges.
Objects {texture-defined object} {contrast-defined object} can contrast in texture with background. People see texture-defined objects move by mechanism that differs from luminance-defined object-movement mechanism. Contrast changes in patterned ways, with no defined edges.
Luminance changes indicate motion {first-order motion}.
Contrast and texture changes indicate motion {second-order motion}.
Incoming visual information is continuous flow {visual flow} {optical flow, vision} {optic flow} that brain can analyze for constancies, gradients, motion, and static properties. As head or body moves through stationary environment, texture moves across eye. Optical flow reveals whether one is in motion or not, and reveals planar surfaces.
Optic flow has a point {focus of expansion} (FOE) {expansion focus} where horizon meets motion-direction line. All visual features seem to stream outward from this straight-ahead point as observer moves forward, making radial movement pattern {radial expansion} [Gibson, 1966] [Gibson, 1979].
Optic flow has information {tau, optic flow} that signals how long until something hits people {time to collision} (TTC) {collision time}. Tau is ratio between retinal-image size and retinal-image-size expansion rate. At constant closing speed, tau equals time to collision.
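A numerical sketch of tau, with assumed object size, distance, and speed: for an object of physical size S closing at constant speed v from distance d, the retinal angle is roughly S/d (small-angle approximation), and angle divided by expansion rate recovers the time to collision d/v.

```python
# tau = retinal angle / rate of angular expansion, estimated by a
# finite difference over a small time step dt.
def tau(size, dist, speed, dt=1e-4):
    theta_now = size / dist
    theta_next = size / (dist - speed * dt)
    expansion_rate = (theta_next - theta_now) / dt
    return theta_now / expansion_rate

# Object 0.5 m wide, 30 m away, closing at 10 m/s: TTC should be ~3 s.
print(round(tau(0.5, 30.0, 10.0), 3))  # → 3.0
```

Note that tau needs no knowledge of the object's actual size or distance: both cancel in the ratio, which is what makes it usable directly from the retinal image.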
Mammals can throw and catch {Throwing and Catching}.
Animal Motions
Animals can move in direction, change direction, turn around, and wiggle. Animals can move faster or slower. Animals move over horizontal ground, climb up and down, jump up and down, swim, dive, and fly.
Predators and Prey
Predators typically intercept moving prey, trying to minimize separation. In reptiles, optic tectum controls visual-orientation movements used in prey-catching behaviors. Prey typically runs away from predators, trying to maximize separation. Animals must account for accelerations and decelerations.
Gravity and Motions
Animals must account for gravity as they move and catch. Some hawks free-fall straight down to surprise prey. Seals can catch thrown balls and can throw balls to targets. Dogs can catch thrown balls and floating frisbees. Cats raise themselves on hind legs to trap or bat thrown-or-bouncing balls with front paws.
Mammal Brain
Six-layered neocortex is only in mammals; reticular formation and hippocampus have counterparts in other vertebrates. Mammal superior colliculus can integrate multisensory information at same spatial location [O'Regan and Noë, 2001]. In mammals, dorsal vision pathway indicates object locations, tracks unconscious motor activity, and guides conscious actions [Bridgeman et al., 1979] [Rossetti and Pisella, 2002] [Ungerleider and Mishkin, 1982] [Yabuta et al., 2001] [Yamagishi et al., 2001].
Allocentric Space
Mammal dorsal visual system converts spatial properties from retinotopic coordinates to spatiotopic coordinates. Using stationary three-dimensional space as fixed reference frame simplifies perceptual variables for trajectories. Most motions are two-dimensional rather than three-dimensional. Fixed reference frame separates gravity effects from internally generated motions. Internally generated motions are straight-line motions, rather than curved motions.
Human Throwing and Shooting
Only primates can throw, because they can stand upright and have suitable arms and hands. From 45,000 to 35,000 years ago, Homo sapiens and Neanderthal Middle-Paleolithic hunter-gatherers cut and used wooden spears. From 15,000 years ago, Homo sapiens Upper Paleolithic hunter-gatherers cut and used wooden arrows, bows, and spear-throwers. Human hunter-gatherers threw and shot over long trajectories.
Human Catching
Geometric Invariants: Humans can catch objects traveling over long trajectories. Dogs and humans use invariant geometric properties to intercept moving objects.
Trajectory Prediction: To catch baseballs, eyes follow ball while people move toward position where hand can reach ball. In the trajectory prediction strategy [Saxberg, 1987], fielder perceives ball initial direction, velocity, and perhaps acceleration, then computes trajectory and moves straight to where hand can reach ball.
Acceleration Cancellation: When catching ball coming towards him or her, fielder must run under ball so ball appears to move upward at constant speed. In the optical-acceleration-cancellation hypothesis [Chapman, 1968], fielder motion toward or away from ball cancels ball's perceived vertical acceleration, making constant upward speed. If ball appears to vertically accelerate, it will land beyond fielder. If ball appears to vertically decelerate, it will land short of fielder. Ball appears to rise until caught, because ball stays above horizon: when far, it appears near horizon, and as it nears, it appears high above horizon.
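The geometry behind the hypothesis can be checked numerically. A sketch with an idealised drag-free ball and assumed launch values: a fielder standing exactly at the landing point sees tan(elevation angle) grow linearly in time, i.e. constant upward optical speed and zero optical acceleration; standing anywhere else breaks the linearity.

```python
import math

g = 9.8  # gravitational acceleration, m/s^2

# tan(elevation) of the ball as seen from a fixed fielder position,
# sampled at several moments during the flight.
def tan_elevations(v, a_deg, fielder_x, steps=4):
    a = math.radians(a_deg)
    vx, vz = v * math.cos(a), v * math.sin(a)
    t_land = 2 * vz / g                      # flight time on flat ground
    out = []
    for k in range(1, steps + 1):
        t = k * t_land / (steps + 1)
        x, z = vx * t, vz * t - 0.5 * g * t * t
        out.append(z / (fielder_x - x))      # tan of elevation angle
    return out

# Ball launched at 20 m/s, 45 degrees; fielder stands at the landing point.
v, a_deg = 20.0, 45.0
vx = v * math.cos(math.radians(a_deg))
vz = v * math.sin(math.radians(a_deg))
landing = 2 * vx * vz / g
tans = tan_elevations(v, a_deg, landing)
diffs = [round(b - a, 6) for a, b in zip(tans, tans[1:])]
print(diffs)  # equal steps: tan(elevation) rises linearly → [0.2, 0.2, 0.2]
```

Algebraically, with the fielder at the landing point the tangent reduces to g·t / (2·vx), which is linear in t; any perceived curvature therefore signals that the fielder must move.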
Transverse Motion: Fielder controls transverse motion independently of radial motion. When catching ball toward right or left, fielder moves transversely to ball path, holding ball-direction and fielder-direction angle constant.
Linear Trajectory: In linear optical trajectory [McBeath et al., 1995], when catching ball to left or right, fielder runs in a curve toward ball, so ball rises in optical height, not to right or left. Catchable balls appear to go straight. Short balls appear to curve downward. Long balls appear to curve upward. Ratio between ball elevation and azimuth angles stays constant. Fielder coordinates transverse and radial motions. Linear optical trajectory is similar to simple predator-tracking perceptions. Dogs use the linear optical trajectory method to catch frisbees [Shaffer et al., 2004].
Optical Acceleration: Plotting optical-angle tangent changes over time, fielders appear to use optical-acceleration information to catch balls [McLeod et al., 2001]. However, optical trajectories mix fielder motions and ball motions.
Perceptual Invariants: Optical-trajectory features can be invariant with respect to fielder motions. Fielders catch fly balls by controlling ball-trajectory perceptions, such as lateral displacement, rather than by choosing how to move [Marken, 2005].
Date Modified: 2022.0225