At the end of Personal Knowledge, Polanyi discusses human development, arguing for a view of the human person as emerging out of but not constituted by its material substrate. As part of this view, he argues that the human person can never be likened to a computer, an inference machine, or a neural model because all are based in formalized processes of automation, processes that cannot account for the contribution of unformalizable, tacit knowing. This paper revisits Polanyi’s discussion of the emergence of consciousness and his rejection of neural models in light of recent developments in connectionism. Connectionist neural modeling proposes an emergentist account of brain structure and, in many ways, is compatible with Polanyi’s philosophy, even if it ultimately neglects questions of meaning.

In his discussion of evolution in “The Rise of Man” at the end of Personal Knowledge, Polanyi touches on the emergent properties of human development. He argues in this section that the movement from embryo to fully developed human person cannot be explained either as mere preprogrammed maturation or as the result of an “external creative agency” (395). Rather, human development involves something he calls the “intensification of individuality” (395). According to this view, stages of development—new achievements of a developing human person—arise in a manner similar to the emergence of new scientific discoveries: both processes require the crossing of a “gap,” a heuristic gap in the case of the scientist or an ontological gap in the case of the human person. Just as the scientist strives toward a truth that can only be intimated, so too does the infant passionately strive toward an achievement yet to be realized but intimated as possible. The result of such striving is the emergence of personhood, achieved most fully when a child enters into the “traditional noosphere,” his or her culture’s “lasting articulate framework of thought” (388).
For Polanyi, this intensification of individuality at the level of human development is consistent with his view that higher-order structures and characteristics of the human mind are not predetermined in the material substrate of biology but emerge indeterminately as a result of an individual’s personal commitment (395-397). In this way, his view of the emergence of human consciousness is part of his larger refutation of a Laplacean conception of the universe as reducible to the laws of physics and chemistry. Related to Polanyi’s discussion at the end of PK, the concept of emergence has recently begun to gain prominence among cognitive neuroscientists who model brain function using connectionism. Connectionist models of brain architecture assume that higher-order cognitive functions can only be understood globally in terms of patterns of activity distributed over multiple connections in the brain. In this sense, and for readers familiar with Juarrero’s work, it could almost be called a dynamical systems approach to human cognition (e.g., McClelland et al. 2010). Connectionism stands in opposition to “grandmother cell” theories that try to locate thoughts in specific neurons or groups of neurons; representational nativists (e.g., Pinker and Chomsky) who argue that humans are born with significant domain-specific knowledge located in specialized, predetermined modules in the brain; probabilistic models of cognition, which advocate a top-down modeling approach to study cognitive processes; and other computational theories of mind (e.g., Fodor and Pylyshyn) which argue that the brain operates like a digital computer. In contrast to these other theories and approaches, connectionists argue that the complex brain architecture of an adult is emergent from simpler neural structures (Rumelhart 1987; McClelland et al. 2010, 348; McClelland 2011, 134).
These structures, rather than being pre-programmed to mature a certain way or to specialize for pre-determined functions, acquire their abilities by encountering inputs in their environment (McClelland 2010, 753). As a revision of the brain-as-computer metaphor, connectionism informs many of the most prominent contemporary discussions about the mind-body relation. It provides a foundation for neurophilosophy, eliminative materialism, embodied cognition, dynamic core theory, and work in artificial intelligence.1 Because connectionism is so fruitful in the cognitive sciences, it is worth considering how it might agree with or depart from Polanyi’s understanding of emergence, especially as it relates to his larger arguments about human development and the relationship between mind and body. Below I explain how many of Polanyi’s objections to the neural model in PK do not apply to current neural models based on connectionist assumptions. Connectionism, which favors pattern recognition rather than logic as a descriptor of cognitive processing, agrees with many of Polanyi’s points about the nature of tacit knowing. Despite such agreement, however, there remains a divergence regarding the status of the human person as an active center.

Connectionism: An Overview

Connectionism traces its origins to the 1940s with the McCulloch and Pitts neural model, but it is generally understood to have begun to take its current form in the 1980s as a result of studies in artificial intelligence (McCleod et al. 1998, 314). It was at this time that David Rumelhart, James McClelland, Geoff Hinton and others developed computer models of brain function that operated through parallel distributed processing (PDP). PDP involves many small “neuron-like” units operating simultaneously over a multilayered network.2 In these networks, information is not carried in whole chunks (such as binary units of 1 or 0). Instead, it is conveyed as a pattern of activity among many units.
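The PDP picture described above—many simple “neuron-like” units computing in parallel, with information carried by the overall pattern of activity rather than by any single unit—can be sketched in a few lines of Python. The network below is a hypothetical toy for illustration only, not a model drawn from the connectionist literature: three input units feed two units, each of which computes a weighted sum of all its inputs and passes it through a logistic activation.

```python
import math

def sigmoid(x):
    # Standard logistic activation: maps any input to a value in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer_activations(inputs, weights, biases):
    """Compute one layer of simple 'neuron-like' units.

    Each unit takes a weighted sum of ALL the inputs, so no unit has
    sole custody of any piece of information: what the layer 'knows'
    is the whole pattern of activations taken together.
    """
    return [
        sigmoid(sum(w * x for w, x in zip(unit_weights, inputs)) + b)
        for unit_weights, b in zip(weights, biases)
    ]

# Invented toy numbers: 3 input units feeding 2 downstream units.
inputs = [1.0, 0.0, 1.0]
weights = [[0.5, -0.2, 0.8],   # weights into unit 1
           [-0.4, 0.9, 0.1]]   # weights into unit 2
biases = [0.0, 0.1]

pattern = layer_activations(inputs, weights, biases)
```

Note that changing any single weight reshapes the resulting pattern as a whole; there is no discrete “slot” where a given piece of information is stored, which is the point of the contrast with binary chunks of 1 or 0.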
For example, whereas a localist or symbolic representational system might assign a whole concept, such as “dog,” to a single neuron, a small group of neurons, or a single unit in a computer network, distributed systems do not represent or store such a concept in any single place. Rather, a representation of “dog” would arise from a pattern of activity among many different units, units which are also used to represent other concepts, like cats or coyotes (Elman et al. 1999, 90-91). This pattern of activity is generated and stored as a potential in the weights between connections in the network. These weights reflect the probability that a unit will activate given various levels of input (McClelland 2000, 583). Thus, connectionism treats the human brain primarily as an information processor. But unlike other brain-as-computer theories, connectionism rejects the notion that the brain operates through symbolic processing, with preprogrammed and sequential steps, local storage of memory, and discrete packets of information. Instead, connectionists hold it more likely that the brain operates through weighted connections that store and generate information over a distributed network, with units operating in parallel and with larger systems emerging from simpler architectures (Elman et al. 1999, 50-56).3 Since the 1980s, parallel distributed processing models of cognitive function have shown that significant cognitive tasks, such as learning the meaning of words and identifying similarities and differences between objects, can be performed by multiple simple units working in parallel in layered networks (Rumelhart and Todd 1993, 14-15; Elman 1990, 200). Much current work in connectionism focuses on modeling human learning and development, and researchers in the field have built computer models that mimic how humans acquire and perform higher-order cognitive tasks such as learning how to pronounce
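The localist-versus-distributed contrast in the “dog” example above can also be made concrete in a short sketch. The activation patterns below are invented for illustration (they are not drawn from Elman et al. or any actual model): each concept is a pattern over the same six shared units, and the kinship between “dog” and “coyote” shows up as greater overlap between their patterns than between “dog” and “cat.”

```python
# Hypothetical activation patterns over six shared units.
# No single unit "means" dog; a concept is a pattern of activity,
# and related concepts share more of their pattern than unrelated ones.
patterns = {
    "dog":    [0.9, 0.8, 0.1, 0.7, 0.2, 0.1],
    "coyote": [0.8, 0.7, 0.2, 0.6, 0.3, 0.1],
    "cat":    [0.2, 0.7, 0.9, 0.1, 0.8, 0.2],
}

def overlap(a, b):
    # Cosine similarity: 1.0 means identical patterns, 0.0 no overlap.
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

dog_coyote = overlap(patterns["dog"], patterns["coyote"])
dog_cat = overlap(patterns["dog"], patterns["cat"])
# dog and coyote patterns overlap more than dog and cat patterns do
```

Because every concept is spread across the same units, similarity between concepts falls out of the representation itself rather than having to be stored as a separate symbolic fact—one reason distributed models handle generalization so naturally.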
The Emergence of Mind: Personal Knowledge and Connectionism
Published 2014-11-01 in Tradition & Discovery: The Polanyi Society Periodical, 41:3
Fields of study: Philosophy, Psychology