Search results
1 – 10 of 12
Abstract
Purpose
Neuroscientists act as proxies for implied anthropomorphic signal-processing beings within the brain, Homunculi. The latter examine the arriving neuronal spike-trains to infer internal and external states. But a Homunculus needs a brain of its own, to coordinate its capabilities – a brain that necessarily contains a Homunculus and so on indefinitely. Such infinity is impossible – and in well-cited papers, Attneave and later Dennett claim to eliminate it. How do their approaches differ and do they (in fact) obviate the Homunculi?
Design/methodology/approach
The Attneave and Dennett approaches are carefully scrutinized. To Attneave, Homunculi are effectively “decision-making” neurons that control behaviors. Attneave presumes that Homunculi, when successively nested, become successively “stupider”, limiting their numbers by diminishing their responsibilities. Dennett likewise postulates neuronal Homunculi that become “stupider” – but brain-wards, where greater sophistication might have been expected.
Findings
Attneave’s argument is Reductionist and it simply assumes-away the Homuncular infinity. Dennett’s scheme, which evidently derives from Attneave’s, ultimately involves the same mistakes. Attneave and Dennett fail, because they attempt to reduce intentionality to non-intentionality.
Research limitations/implications
The Homunculus has been repeatedly recognized over the centuries by philosophers, psychologists and (some) neuroscientists as a crucial conundrum of cognitive science. It still is.
Practical implications
Cognitive-science researchers need to recognize that Reductionist explanations of cognition may actually devolve to Homunculi, rather than eliminating them.
Originality/value
Two notable Reductionist arguments against the infinity of Homunculi are proven wrong. In their place, a non-Reductionist treatment of the mind, “Emergence”, is discussed as a means of rendering Homunculi irrelevant.
Abstract
Purpose
This paper aims to extend the companion paper on “infant psychophysics”, which concentrated on the role of in-lab observers (watchers). Infants cannot report their own perceptions, so for five decades their detection thresholds for sensory stimuli were inferred from their stimulus-evoked behavior, judged by watchers. The inferred thresholds were revealed to inevitably be those of the watcher–infant duo, and, more broadly, the entire Laboratory. Such thresholds are unlikely to represent the finest stimuli that the infant can detect. What, then, do they represent?
Design/methodology/approach
Infants’ inferred stimulus-detection thresholds are hypothesized to be attentional thresholds, representing more-salient stimuli that overcome distraction.
Findings
Empirical psychometric functions, which show “detection” performance versus stimulus intensity, have shallower slopes for infants than for adults. This (and other evidence) substantiates the attentional hypothesis.
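To make the slope claim concrete, here is a minimal sketch of two psychometric functions, one steep (adult-like) and one shallow (infant-like). The logistic form, the parameter values and the function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def psychometric(intensity, threshold, slope):
    """Logistic psychometric function: P("detected") as a function of intensity.

    A common textbook form; the paper itself specifies no formula.
    """
    return 1.0 / (1.0 + np.exp(-slope * (intensity - threshold)))

intensities = np.linspace(0.0, 10.0, 11)
p_adult = psychometric(intensities, threshold=5.0, slope=2.0)   # steep slope
p_infant = psychometric(intensities, threshold=5.0, slope=0.5)  # shallow slope

for x, pa, pi in zip(intensities, p_adult, p_infant):
    print(f"intensity {x:4.1f}:  adult {pa:.2f}   infant {pi:.2f}")
```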
Research limitations/implications
An observer can only infer the mechanisms underlying an infant’s perceptions, not know them; infants’ minds are “Black Boxes”. Nonetheless, infants’ physiological responses have been used for decades to infer stimulus-detection thresholds. But those inferences ultimately depend upon observer-chosen statistical criteria of normality. Again, stimulus-detection thresholds are probably overestimated.
Practical implications
Owing to exaggerated stimulus-detection thresholds, infants may be misdiagnosed as “hearing impaired”, then needlessly fitted with electronic implants.
Originality/value
Infants’ stimulus-detection thresholds are re-interpreted as attentional thresholds. Also, a cybernetics concept, the “Black Box”, is extended to infants, reinforcing the conclusions of the companion paper that the infant-as-research-subject cannot be conceptually separated from the attending laboratory staff. Indeed, infant and staff altogether constitute a new, reflexive whole, one that has proven too resilient for anybody’s good.
Abstract
Purpose
This study aims to examine the observer’s role in “infant psychophysics”. Infant psychophysics was developed because the diagnosis of perceptual deficits should be done as early in a patient’s life as possible, to provide efficacious treatment and thereby reduce potential long-term costs. Infants, however, cannot report their perceptions. Hence, the intensity of a stimulus at which the infant can detect it, the “threshold”, must be inferred from the infant’s behavior, as judged by observers (watchers). But whose abilities are actually being inferred? The answer affects all behavior-based conclusions about infants’ perceptions, including the well-proselytized notion that auditory stimulus-detection thresholds improve rapidly during infancy.
Design/methodology/approach
In total, 55 years of infant psychophysics is scrutinized, starting with seminal studies in infant vision, followed by the studies that they inspired in infant hearing.
Findings
The inferred stimulus-detection thresholds are those of the infant-plus-watcher and, more broadly, the entire laboratory. The thresholds are therefore tenuous, because infants’ actions may differ with stimulus intensity; expressiveness may differ between infants; different watchers may judge infants differently; etc. Particularly, the watcher’s ability to “read” the infant may improve with the infant’s age, confounding any interpretation of perceptual maturation. Further, the infant’s gaze duration, an assumed cue to stimulus detection, may lengthen or shorten nonlinearly with infant age.
Research limitations/implications
Infant psychophysics investigators have neglected the role of the observer, resulting in an accumulation of data that requires substantial re-interpretation. Altogether, infant psychophysics has proven far too resilient for its own good.
Originality/value
Infant psychophysics is examined for the first time through second-order cybernetics. The approach reveals serious unresolved issues.
Abstract
Purpose
A key cybernetics concept, information transmitted in a system, was quantified by Shannon. It quickly gained prominence, inspiring a version by Harvard psychologists Garner and Hake for “absolute identification” experiments. There, human subjects “categorize” sensory stimuli, affording “information transmitted” in perception. The Garner-Hake formulation has been in continuous use for 62 years, exerting enormous influence. But some experienced theorists and reviewers have criticized it as uninformative. They could not explain why, and were ignored. Here, the “why” is answered.
Design/methodology/approach
A key Shannon data-organizing tool is the confusion matrix. Its columns and rows are, respectively, labeled by “symbol sent” (event) and “symbol received” (outcome), such that matrix entries represent how often outcomes actually corresponded to events. Garner and Hake made their own version of the matrix, which deserves scrutiny, and is minutely examined here.
Findings
The Garner-Hake confusion-matrix columns represent “stimulus categories”, ranges of some physical stimulus attribute (usually intensity), and its rows represent “response categories” of the subject's identification of the attribute. The matrix entries thus show how often an identification empirically corresponds to an intensity, such that “outcomes” and “events” differ in kind (unlike Shannon's). Obtaining a true “information transmitted” therefore requires stimulus categorizations to be converted to hypothetical evoking stimuli, achievable (in principle) by relating categorization to sensation to intensity. But those relations are actually unknown, perhaps unknowable.
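As a concrete illustration of the computation at issue, the sketch below estimates “information transmitted” from a toy confusion matrix laid out as the paper describes (columns = stimulus categories, rows = response categories). The counts are invented and the formula is the standard plug-in mutual-information estimate; neither is drawn from Garner and Hake’s own data.

```python
import numpy as np

# Toy confusion matrix: columns = stimulus categories, rows = response categories.
# The counts are invented for illustration.
counts = np.array([[20.0,  5.0,  0.0],
                   [ 5.0, 15.0,  5.0],
                   [ 0.0,  5.0, 20.0]])

p = counts / counts.sum()        # joint probabilities p(response, stimulus)
p_stim = p.sum(axis=0)           # column marginals: p(stimulus category)
p_resp = p.sum(axis=1)           # row marginals: p(response category)

# "Information transmitted" = plug-in mutual-information estimate, in bits.
nz = p > 0
it = (p[nz] * np.log2(p[nz] / np.outer(p_resp, p_stim)[nz])).sum()
print(f"information transmitted ≈ {it:.3f} bits")
```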
Originality/value
The author achieves an important understanding: why “absolute identification” experiments do not illuminate sensory processes.
Abstract
Purpose
In the last half-century, individual sensory neurons have been bestowed with characteristics of the whole human being, such as behavior and its oft-presumed precursor, consciousness. This anthropomorphization is pervasive in the literature. It is also absurd, given what we know about neurons, and it needs to be abolished. This study aims to first understand how it happened, and hence why it persists.
Design/methodology/approach
The peer-reviewed sensory-neurophysiology literature extends to hundreds (perhaps thousands) of papers. Here, more than 90 mainstream papers were scrutinized.
Findings
Anthropomorphization arose because single neurons were cast as “observers” who “identify”, “categorize”, “recognize”, “distinguish” or “discriminate” the stimuli, using math-based algorithms that reduce (“decode”) the stimulus-evoked spike trains to the particular stimuli inferred to elicit them. Without “decoding”, there is supposedly no perception. However, “decoding” is both unnecessary and unconfirmed. The neuronal “observer” in fact consists of the laboratory staff and the greater society that supports them. In anthropomorphization, the neuron becomes the collective.
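To ground the term, the sketch below shows the kind of “decoding” algorithm at issue: a maximum-likelihood read-out that maps a spike count back to the stimulus inferred to have evoked it, assuming Poisson firing. The rates, labels and names are invented for illustration; the paper’s point is precisely that no such decoder is known to exist inside the brain.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "decoder" of the kind the paper critiques: infer which stimulus most
# likely evoked an observed spike count, given a known mean rate per stimulus.
rates = {"low": 5.0, "medium": 12.0, "high": 25.0}   # mean spikes per trial (invented)

def decode(spike_count):
    """Maximum-likelihood stimulus estimate under the Poisson assumption."""
    def log_lik(rate):
        # log P(count | rate) for Poisson, dropping log(count!) (constant across stimuli)
        return spike_count * np.log(rate) - rate
    return max(rates, key=lambda s: log_lik(rates[s]))

observed = rng.poisson(rates["medium"])
print(f"observed {observed} spikes -> decoded as '{decode(observed)}'")
```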
Research limitations/implications
Anthropomorphization underlies the widespread application of Information Theory and Signal Detection Theory to neurons, rendering both approaches incorrect.
Practical implications
A great deal of time, money and effort has been wasted on anthropomorphic Reductionist approaches to understanding perception and consciousness. Those resources should be diverted into more-fruitful approaches.
Originality/value
A long-overdue scrutiny of sensory-neuroscience literature reveals that anthropomorphization, a form of Reductionism that involves the presumption of single-neuron consciousness, has run amok in neuroscience. Consciousness is more likely to be an emergent property of the brain.
Abstract
Purpose
For half a century, neuroscientists have used Shannon Information Theory to calculate “information transmitted,” a hypothetical measure of how well neurons “discriminate” amongst stimuli. Neuroscientists’ computations, however, fail to meet even the technical requirements for credibility. Ultimately, the reasons must be conceptual. That conclusion is confirmed here, with crucial implications for neuroscience.
Design/methodology/approach
Shannon Information Theory depends upon a physical model, Shannon’s “general communication system.” Neuroscientists’ interpretation of that model is scrutinized here.
Findings
In Shannon’s system, a recipient receives a message composed of symbols. The symbols received, the symbols sent, and their hypothetical occurrence probabilities altogether allow calculation of “information transmitted.” Significantly, Shannon’s system’s “reception” (decoding) side physically mirrors its “transmission” (encoding) side. However, neurons lack the “reception” side; neuroscientists nonetheless insisted that decoding must happen. They turned to Homunculus, an internal humanoid who infers stimuli from neuronal firing. However, Homunculus must contain a Homunculus, and so on ad infinitum – unless it is super-human. But any need for Homunculi, as in “theories of consciousness,” is obviated if consciousness proves to be “emergent.”
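For reference, the calculation the paper alludes to is the standard Shannon mutual information over sent symbols X and received symbols Y (textbook definitions, not reproduced from the paper itself):

```latex
H(X)   = -\sum_{x} p(x)\,\log_2 p(x)                          % entropy of symbols sent
H(Y)   = -\sum_{y} p(y)\,\log_2 p(y)                          % entropy of symbols received
I(X;Y) = \sum_{x,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)}
       = H(X) + H(Y) - H(X,Y)                                 % "information transmitted"
```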
Research limitations/implications
Neuroscientists’ “information transmitted” indicates, at best, how well neuroscientists themselves can use neuronal firing to discriminate amongst the stimuli given to the research animal.
Originality/value
A long-overdue examination unmasks a hidden element in neuroscientists’ use of Shannon Information Theory, namely, Homunculus. Almost 50 years’ worth of computations are recognized as irrelevant, mandating fresh approaches to understanding “discriminability.”
Abstract
Purpose
The purpose of this paper is to ask whether a first‐order‐cybernetics concept, Shannon's Information Theory, actually allows a far‐reaching mathematics of perception allegedly derived from it, Norwich et al.'s “Entropy Theory of Perception”.
Design/methodology/approach
All of the Entropy Theory’s publications, spanning 35 years, were scrutinized for their characterization of what underlies Shannon Information Theory: Shannon’s “general communication system”. There, “events” are passed by a “source” to a “transmitter”, thence through a “noisy channel” to a “receiver”, which passes “outcomes” (received events) to a “destination”.
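To fix the five roles before the findings, here is a minimal sketch of Shannon’s chain as a pipeline; the alphabet, the encoding and the flip probability are illustrative assumptions, not anything from the Entropy Theory itself.

```python
import random

# Minimal sketch of Shannon's "general communication system" as a pipeline.
def source():            # picks an event (a message symbol)
    return random.choice("ABCD")

def transmitter(event):  # encodes the event as a signal
    return {"A": "00", "B": "01", "C": "10", "D": "11"}[event]

def noisy_channel(signal, p_flip=0.05):   # corrupts each bit with probability p_flip
    return "".join(b if random.random() > p_flip else "10"[int(b)] for b in signal)

def receiver(signal):    # decodes the signal back to a symbol (the "outcome")
    return {"00": "A", "01": "B", "10": "C", "11": "D"}[signal]

event = source()
outcome = receiver(noisy_channel(transmitter(event)))
print(f"event {event} -> outcome {outcome}")   # the destination gets the outcome
```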
Findings
In the entropy theory, “events” were sometimes interactions with the stimulus, but could be microscopic stimulus conditions. “Outcomes” often went unnamed; sometimes, the stimulus, or the interaction with it, or the resulting sensation, were “outcomes”. A “source” was often implied to be a “transmitter”, which frequently was a primary afferent neuron; elsewhere, the stimulus was the “transmitter” and perhaps also the “source”. “Channel” was rarely named; once, it was the whole eye; once, the incident photons; elsewhere, the primary or secondary afferent. “Receiver” was usually the sensory receptor, but could be an afferent. “Destination” went unmentioned. In sum, the entropy theory's idea of Shannon's “general communication system” was entirely ambiguous.
Research limitations/implications
The ambiguities indicate that, contrary to claim, the entropy theory cannot be an “information theoretical description of the process of perception”.
Originality/value
Scrutiny of the entropy theory's use of information theory was overdue and reveals incompatibilities that force a reconsideration of information theory's possible role in perception models. A second‐order‐cybernetics approach is suggested.
Abstract
Purpose
The purpose of this paper is to examine the popular “information transmitted” interpretation of absolute judgments, and to provide an alternative interpretation if one is needed.
Design/methodology/approach
The psychologists Garner and Hake and their successors used Shannon's Information Theory to quantify information transmitted in absolute judgments of sensory stimuli. Here, information theory is briefly reviewed, followed by a description of the absolute judgment experiment, and its information theory analysis. Empirical channel capacities are scrutinized. A remarkable coincidence, the similarity of maximum information transmitted to human memory capacity, is described. Over 60 representative psychology papers on “information transmitted” are inspected for evidence of memory involvement in absolute judgment. Finally, memory is conceptually integrated into absolute judgment through a novel qualitative model that correctly predicts how judgments change with increase in the number of judged stimuli.
Findings
Garner and Hake gave conflicting accounts of how absolute judgments represent information transmission. Further, “channel capacity” is an illusion caused by sampling bias and wishful thinking; information transmitted actually peaks and then declines, the peak coinciding with memory capacity. Absolute judgments themselves have numerous idiosyncrasies that are incompatible with a Shannon general communication system but which clearly imply memory dependence.
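The toy simulation below illustrates the peak-then-decline pattern. It assumes judgment noise that grows with set size, a crude stand-in for memory load; this is an illustrative assumption, not the paper’s own qualitative model.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_transmitted(counts):
    """Plug-in mutual-information estimate (bits) from a confusion-count matrix."""
    p = counts / counts.sum()
    p_stim, p_resp = p.sum(axis=0), p.sum(axis=1)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / np.outer(p_resp, p_stim)[nz])).sum())

def absolute_judgment(n, trials=20000, sigma0=0.01, load=0.005):
    """Toy absolute-judgment run on a unit stimulus continuum.

    Noise grows with set size (sigma0 + load*n) to mimic memory load;
    an illustrative assumption only.
    """
    centers = (np.arange(n) + 0.5) / n
    sigma = sigma0 + load * n
    stimuli = rng.integers(n, size=trials)
    percepts = centers[stimuli] + rng.normal(0.0, sigma, size=trials)
    responses = np.abs(percepts[:, None] - centers[None, :]).argmin(axis=1)
    counts = np.zeros((n, n))
    np.add.at(counts, (responses, stimuli), 1)
    return info_transmitted(counts)

# Information transmitted rises with set size, peaks, then declines.
for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} stimuli: information transmitted ≈ {absolute_judgment(n):.2f} bits")
```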
Research limitations/implications
Memory capacity limits the correctness of absolute judgments. Memory capacity is already well measured by other means, making redundant the informational analysis of absolute judgments.
Originality/value
This paper presents a long‐overdue comprehensive critical review of the established interpretation of absolute judgments in terms of “information transmitted”. An inevitable conclusion is reached: that published measurements of information transmitted actually measure memory capacity. A new, qualitative model is offered for the role of memory in absolute judgments. The model is well supported by recently revealed empirical properties of absolute judgments.