Perhaps, mental states are brain functions {functionalism}. Consciousness consists of inputs, processing, and outputs concerning stimuli, behaviors, beliefs, goals, and algorithms. Functionalism uses input-output relations to explain physical and biological processes. If mental states are conscious, they have special functions [Armstrong and Malcolm, 1984] [Armstrong, 1962] [Armstrong, 1968] [Armstrong, 1980] [Churchland, 1986] [Churchland, 1988] [Churchland, 1995] [Churchland, 2002].
The same functional process can have different physical representations. The same physical state can represent different functions.
mental states
Mental states do not necessarily correspond to anatomy or physiology but are like software and algorithms. Mental states are internal, with no public behavior. Mental states are objective, with no need for subjective feelings. Mental states are effects of perception, memory, emotion, and will. Mental states cause movements.
phenomenal functions
Phenomena can cause behavior by translating stimuli into goals, energies, or actions. Different physical states can have the same phenomena.
types
Perhaps, having conscious experience is mental functioning, and having a particular experience is neurophysiological {physicalist-functionalism}. Perhaps, mental properties are identical to functional properties {psychofunctionalism}. Perhaps, a conscious system must have functions that were selected for in the past {reductive teleofunctionalism}. Perhaps, both conscious and unconscious mental capacities are for adaptation {teleological functionalism}. Perhaps, functional brain parts can explain mind {decompositional functionalism}. Perhaps, mind can be a computer program {computation-representation functionalism}. Perhaps, mental states can be functional states {metaphysical functionalism}, based on input, output, and causal relations.
types: interactionism
Interactionism includes functionalism and posits a non-physical reality {mind-stuff} to provide mental states. However, functionalism is typically materialist, involving hardware, such as the brain {wetware}.
Perhaps, mental states represent ideas and cause linguistic responses. Mental states, which can be conscious or unconscious, are about similarities or relations, and relations determine linguistic-response patterns, which are conscious. Language reports mental states using signs. Because mental states vary widely, natural occurrences have incompatible linguistic explanations. People react to natural occurrences to establish conscious linguistic responses {causal theory of reference} [Putnam, 1975] [Putnam, 1981] [Putnam, 1988] [Putnam, 1992].
Perhaps, brain agents compete for expression and control {cognitive pandemonium}. Local and global winners emerge. The global winner becomes conscious {cerebral celebrity} [Dennett, 1991].
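For illustration, a minimal sketch in Python of this kind of competition, using hypothetical agent names and activation scores (not Dennett's own model): agents compete within each module, and the strongest activation overall becomes the "cerebral celebrity."

# Minimal sketch of pandemonium-style competition (hypothetical agents and scores).
modules = {
    "vision":  {"edge-agent": 0.4, "face-agent": 0.9},
    "hearing": {"tone-agent": 0.3, "speech-agent": 0.7},
}

# Local winners: the strongest agent within each module.
local_winners = {m: max(agents, key=agents.get) for m, agents in modules.items()}

# Global winner: the strongest agent overall becomes "conscious" (cerebral celebrity).
global_winner = max(
    ((m, a, s) for m, agents in modules.items() for a, s in agents.items()),
    key=lambda t: t[2],
)
print(local_winners)   # {'vision': 'face-agent', 'hearing': 'speech-agent'}
print(global_winner)   # ('vision', 'face-agent', 0.9)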
Perhaps, non-conscious information processing can perform all processes needed for survival and all processes performed by consciousness {computational functionalism} {conscious inessentialism} {computational theory} {computational hypothesis}.
symbols
Symbol manipulation causes thoughts. Symbols represent high-level concepts and directly relate to knowledge structures. Symbols are either present or absent. Symbols in combination make propositions. Computational manipulations follow language syntax. Syntax and symbol meaning can give overall meaning.
computers
Computers are general symbol manipulators. If symbol manipulation can cause thoughts, computers can think like people.
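For illustration, a minimal sketch of rule-based symbol manipulation, with made-up propositions and a single syntactic rule (if-then chaining): the derivation is driven by symbol form alone, with meaning supplied only by interpretation.

# Minimal sketch of syntactic symbol manipulation (hypothetical propositions).
facts = {"RAINING"}
rules = [("RAINING", "WET_STREET"),      # if RAINING then WET_STREET
         ("WET_STREET", "SLIPPERY")]     # if WET_STREET then SLIPPERY

changed = True
while changed:                            # apply the rules until no new symbol appears
    changed = False
    for antecedent, consequent in rules:
        if antecedent in facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(facts)                              # {'RAINING', 'WET_STREET', 'SLIPPERY'}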
experience
However, symbols cannot represent images, tastes, sounds, touch, and smell. Symbols are either present or absent and have no magnitude or certainty. Symbols have no partial effects or gradations {brittleness, function symbol}. Symbols have no meaningful parts or units. They have no formation or development process. Symbols do not gain certainty by repetition or conjunction. Statistical processes do not affect symbol meaning or relations. Small symbol changes typically change meaning or accuracy greatly.
Symbols can be complex wholes, whose meanings depend on pattern parts. Sense qualities combine fundamental features, and similar sense qualities have similar combinations.
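For illustration, a minimal sketch of the contrast, with made-up feature values: symbol comparison is all-or-nothing, whereas sense qualities built from shared features can be more or less similar.

# Hypothetical feature combinations for two sense qualities.
sweet_taste = {"sugar": 0.9, "acid": 0.1, "salt": 0.0}
sour_taste  = {"sugar": 0.2, "acid": 0.8, "salt": 0.0}

# Symbols match all-or-nothing: "SWEET" == "SOUR" is simply False.
symbol_match = ("SWEET" == "SOUR")

# Feature combinations have graded similarity (here, simple feature overlap).
overlap = sum(min(sweet_taste[f], sour_taste[f]) for f in sweet_taste)
print(symbol_match, round(overlap, 2))   # False 0.3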
Perhaps, consciousness is an executive system {executive system} that focuses attention, issues reports, and guides actions.
Perhaps, mental outputs become conscious when they are available for concepts/thoughts {first-order representational theory}. However, all brain system outputs are similar in physiology and can travel indirectly to all brain regions.
Perhaps, consciousness and subjective experience are viewpoint-specific functions in the thalamocortical complex {global workspace} [Baars, 1988] [Baars, 1997] [Baars, 2002] [Changeux, 1983] [Dehaene and Naccache, 2001] [Dehaene, 2001] [Dehaene et al., 2003]. Consciousness is a shared workspace, representation system, or working memory that communicates with brain modules/agents that perform unconscious functions. The global workspace allows information exchange and coordination.
modules
Brain algorithms get information from global workspace, broadcast their information there, compete and cooperate to place information there, and interact in global workspace to resolve uncertainties in interpretation and action. Unconscious processing is parallel processing and uses large memory.
output
Eventually, the global workspace reaches consensus, makes an output, and stores the representation or volition in long-term memory.
consciousness
Attention systems make global-workspace contents known to consciousness, so global-workspace information is the contents of consciousness. Consciousness involves information exchange. Conscious processing integrates unconscious processing.
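For illustration, a minimal sketch of one global-workspace cycle, with hypothetical module names and salience scores: modules compete to post content, the winner's content is broadcast to every module, and the broadcast content counts as conscious on that cycle.

# Minimal global-workspace sketch (hypothetical modules and salience scores).
class Module:
    def __init__(self, name):
        self.name = name
        self.received = []                 # broadcasts this module has seen

    def propose(self):                     # each module offers content with a salience score
        return {"vision": ("red ball ahead", 0.8),
                "hearing": ("phone ringing", 0.5),
                "memory": ("meeting at noon", 0.3)}[self.name]

    def receive(self, content):            # unconscious modules all get the broadcast
        self.received.append(content)

modules = [Module("vision"), Module("hearing"), Module("memory")]

# Competition: the most salient proposal wins the workspace.
content, _ = max((m.propose() for m in modules), key=lambda p: p[1])

# Broadcast: workspace content goes to every module; it is the conscious content this cycle.
for m in modules:
    m.receive(content)
print(content)                             # red ball ahead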
levels
There can be more or less consciousness, as shown by comparing conscious and unconscious brain processing {contrastive analysis}. Fugue, multiple personality, and depersonalization involve amnesia and a changed sense of self. Brains have beliefs, goals, and consciousness {self-concept}. Self-concept is consciousness contents. Bodies are agents and perceivers {self-system}. Self-systems have sense qualities, which are the fundamental context {deep context} in the context hierarchy. However, sense-quality salience or intensity does not relate to high-level processing. People can have more than one consciousness, rather than one context hierarchy. Even early mammals have senses and brains that can allow consciousness.
Perhaps, conscious states are higher-level perceptions about lower-level perceptions {higher-order sense theory} {HOS theory} {inner-sense theory}. The brain has a faculty that operates on sense perceptions to make perceptions about perceptions. Perceptions lack intentions/concepts and are analog. Perceptions can be non-conscious, and no perception is necessarily conscious. However, no evidence for a brain inner sense exists. Higher-order sense theory is a representational theory. First-order theories say that consciousness happens when outputs are available for concepts.
Perhaps, conscious states are higher-level thoughts about lower-level states {higher-order thought theory, functionalism} {HOT theory} {higher-order monitoring theory}. Perhaps, conscious states are mental states about which people have the higher-level belief that they are in those mental states. Higher-order thought theory is a representational theory.
process
Perceptions do not have intentions, but thoughts have intentions. Consciousness can link current perceptions in occipital and other lobes to concepts, emotions, plans, memories and values in frontal, temporal, and parietal lobes. Only mental states can be conscious. People can be, but are not typically, conscious of beliefs. Perceptions can be non-conscious.
types
When perceiving or emoting, people can have thoughts that they are perceiving or emoting, and those thoughts bring experience {actualist higher-order thought theory}. Thoughts can happen at the same time as perceptions or can be about memories. Alternatively, perceptions and emotions are conscious when they are available to higher-order thoughts, whether or not those thoughts actually occur {dispositionalist higher-order thought theory}. Percepts can be both first-order and higher-order {dual-content theory}. The higher-order thought system can use information, and such uses determine experience {inferential-role semantics} {consumer semantics}. Semantics can instead be only about input information and symbol grounding {informational semantics} {input-side semantics}.
problems
Conscious states can occur without any accompanying thought [Rosenthal, 1991].
Perhaps, visual sensory information goes to many brain regions, where dendrites detect spectral and time information about perceptions. Brains can later extract and transform the stored information to give conscious awareness {holographic brain theory} {holonomic theory}. Holograms can change {holonomy}. People cannot know both spectral and time values exactly. Neurons minimize information loss by reorganizing their structures toward minimum entropy and maximum information. Consciousness is experiencing the transformation of stored spectral information. No one and no thing views the holographic images [Pribram, 1971] [Pribram, 1974] [Pribram, 1991].
image
Perhaps, brains can make holograms without using reference signals. They can record scene wavefronts and later restore the wavefronts by reversing the calculation.
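For illustration, a minimal sketch of the "store spectral information, then reverse the calculation" idea, using a Fourier transform as a stand-in for the holographic encoding (an assumption for illustration; the theory concerns dendritic microprocesses, not literal FFTs).

# Minimal sketch: record a signal's spectrum, then reconstruct it by inverting the transform.
import numpy as np

signal = np.sin(np.linspace(0.0, 4.0 * np.pi, 64))    # stand-in for a recorded wavefront
spectrum = np.fft.fft(signal)                          # stored spectral ("holographic") record
reconstructed = np.fft.ifft(spectrum).real             # reversing the calculation restores the signal

print(np.allclose(signal, reconstructed))              # True
# A narrower time window gives coarser frequency resolution, and vice versa,
# which is the sense in which spectral and time values cannot both be known exactly.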
Perhaps, consciousness is information integration {information integration theory}. More integration makes more consciousness. Integrating different neuron types and modules makes more consciousness. Different integration types make different consciousness types.
brain
The thalamocortical region integrates information from many and varied neurons and modules, whereas other brain regions integrate less.
time
Integration takes roughly 0.1 to 3 seconds.
information
Each conscious scene is a selection from among many possible scenes and so has high information. The integration measure is the effective information passed from one system part to another. Effective information from the first part to the second is the entropy the second part shares with the first when the first part's output is replaced by noise, and vice versa. Their sum is the integration amount.
information: system
Systems have parts. Each bipartition divides the whole system into a pair of subsystems. Complexity depends on the partitions and on integration amounts. The bipartition across which information integration is lowest {minimum information bipartition} sets the system's integration. Parts can make subsystems. The whole brain has maximum entropy and integration. Systems that integrate enough information are conscious.
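For illustration, a minimal sketch of effective information for the simplest case of two binary parts, with made-up reliabilities; "noise" here means driving the source part uniformly at random, and the information it then shares with the target (their mutual information) is taken as the effective information in that direction.

# Minimal sketch of effective information between two binary parts (made-up numbers).
import math

def h(probs):
    # Shannon entropy in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def effective_information(copy_reliability):
    # EI(source -> target): drive the source with uniform noise and measure the
    # entropy the target then shares with it (their mutual information).
    r = copy_reliability
    joint = [0.5 * r, 0.5 * (1 - r), 0.5 * (1 - r), 0.5 * r]   # P(source, target)
    return h([0.5, 0.5]) + h([0.5, 0.5]) - h(joint)

ei_ab = effective_information(0.9)   # part A drives part B strongly
ei_ba = effective_information(0.6)   # part B drives part A weakly
integration = ei_ab + ei_ba          # integration across this (only) bipartition
print(round(ei_ab, 3), round(ei_ba, 3), round(integration, 3))   # about 0.531, 0.029, 0.56

With more than two parts, the same sum would be computed for every bipartition, and the bipartition with the lowest value {minimum information bipartition} would set the system's integration.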
Perhaps, brains are computers with fixed code, registers, and programs {instructionism, mind theory}. Coded brain input, from environment and body, makes coded brain output.
Programs {Intelligent Distribution Agent} {intelligent distribution} based on global-workspace architecture can assign jobs to sailors [Franklin et al., 1998].
Perhaps, phenomenal properties are representational properties {representationalism}.
causes: stimulation
Stimuli make sense-data. Perceptual sense-data, ideas, and impressions are internal mental representations. Representations are mental states and are like phenomena.
causes: intention
Alternatively, people need no stimuli, only intentional statements. Intentions and representations are about external things or possible external things. Intentions can make representations but are not mental states. Representations are not like phenomena but are coded information.
representation: similarity
Something can represent something else by being similar to it. Similarity is reciprocal, but real representation runs in only one direction. Similarity can be greater or lesser, so similarity relations need information about similarity level.
representation: covariance
Something can represent something else by being caused to co-vary with it. Covariance is reciprocal, but real representation runs in only one direction. Covariance has strength, so covariance relations need information about causation strength.
representation: function
Something can represent something else through representational functions. Such representation requires an indicating function and its strength. Systems have basic representational functions {systemic representation} that can change to create new representations {acquired representation}. Natural representations evolve.
representation: function and evolution
Something can represent something else, because evolution shaped it to do so. Such representation requires evolutionary benefits and selection strengths.
phenomena
Perhaps, representations completely specify conscious phenomena {exhaustion thesis}. Perhaps, representations need other mental attributes.
phenomena: external or internal
Conscious phenomena appear in environment {externalism, phenomena}. Conscious phenomena are in mind {internalism, phenomena}. If consciousness is a mental state, representations can project {projectivism, phenomena} onto external surfaces {literal projectivism} or seem to do so {figurative projectivism}.
phenomena: higher order
Perhaps, representational mental states can be "perceived" by higher-level mental abilities {representational theory, representationalism} {higher-order perception}. Consciousness links perceptions, in occipital lobe, to concepts, emotions, plans, memories, and values, in frontal, temporal, and parietal lobes.
phenomena: consciousness
Perhaps, consciousness is natural representations. However, some conscious states have no perception [Dretske, 1988] [Dretske, 1995].
Perhaps, machines can mimic mental functions in logic and language, using symbols and rules {symbolicism} {Good Old-Fashioned Artificial Intelligence} (GOFAI) {rule-and-symbol AI} [Barr and Feigenbaum, 1981].
Perhaps, matter and energy predate mind and consciousness. Brain evolved to create symbols {symbolism, mind theory} to make representations used for action. Mind is distinct from matter, because complex organization brought forth new properties.
Mind forms representations of matter and energy from matter and energy. Representations use matter-and-energy structures, just as music is at once physical-energy patterns, electrochemical-signal patterns, and mental experience. Because mental states are complex matter-and-energy patterns, they can act on matter at all levels. People cannot be conscious of symbol creation, use, or representation processes.
Perhaps, computers with complex enough programs have minds {strong AI}.
Perhaps, computers with complex enough programs simulate mental functions {weak AI}.