2019 CCN Workshop: Semantic processing and semantic knowledge
Organizers: Jim Haxby, Ida Gobbini, Hervé Abdi, Vassiki Chauhan, Alex Huth, Sam Nastase, Adina Roskies
Dates: August 22 and August 23
Location: The Hanover Inn, Hanover, NH
PROGRAM
CODE OF CONDUCT
Speakers
Peter Gärdenfors, Lund University, Sweden
Conceptual spaces as a model of semantics
Abstract:
The problem in focus is how to identify the structure of our semantic representations. I will begin by presenting three main alternatives that have been proposed within the cognitive sciences: symbolic systems, neural networks, and spatial (geometric) representations. The advantages and drawbacks of these approaches are compared. I present my theory of conceptual spaces as an example of a spatial representation system. Evidence from concept formation and language acquisition supports the spatial approach to semantics. I will conclude by discussing some connections to cognitive neuroscience, in particular the idea that the grid cells in the entorhinal cortex function as a kind of universal coordinate system for conceptual spaces.
Jim Haxby, Dartmouth College, USA
Modeling the structure of information encoded in fine-scale cortical topographies in distributed cortical systems
Alex Huth, UT Austin, USA
Beyond distributional embeddings for modeling brain responses to language
Abstract:
For more than a decade, word embeddings constructed using distributional properties, i.e. how words co-occur across large text corpora, have been the best models for predicting brain responses to language measured using fMRI. These word embeddings are typically used to fit voxelwise encoding models, which separately predict responses in each voxel. Here we will discuss two newer methods that can beat these distributional baselines. First, we show that visually-grounded word embeddings that incorporate information about the visual appearance of the things to which words refer can outperform distributional models in many brain areas. This suggests that the semantic space used by some areas in the brain is strongly influenced by visual information. Second, we show that contextual models that incorporate information about not just which words appeared, but the order in which they appeared, strongly outperform distributional models in nearly every brain area. Although interpreting these contextual models is difficult, these results suggest that much more can be learned about how the meaning of language is represented in the brain using these techniques.
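To make the voxelwise encoding approach concrete, here is a minimal sketch in Python: ridge regression maps word-embedding features to each voxel's response, and each voxel is scored by the correlation between its predicted and held-out activity. All data below are synthetic stand-ins; real pipelines also model hemodynamic delays and cross-validate the regularization strength.

```python
# Minimal sketch of a voxelwise encoding model (illustrative only):
# ridge regression from word-embedding features to each voxel's response.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trs, n_features, n_voxels = 300, 50, 1000

X = rng.standard_normal((n_trs, n_features))              # embedding features per TR
true_w = rng.standard_normal((n_features, n_voxels))
Y = X @ true_w + rng.standard_normal((n_trs, n_voxels))   # simulated BOLD responses

X_train, X_test = X[:200], X[200:]
Y_train, Y_test = Y[:200], Y[200:]

model = Ridge(alpha=10.0).fit(X_train, Y_train)           # one weight map per voxel
Y_pred = model.predict(X_test)

# Score each voxel by the correlation between predicted and held-out responses
r = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out prediction correlation: {np.mean(r):.3f}")
```

Comparing such prediction scores across feature spaces (distributional, visually grounded, contextual) is what allows one model to "beat" another in a given brain area.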
Ray Jackendoff, Tufts University, USA
The structure of knowledge
Abstract:
The overall framework in which I’ve been working is called the Parallel Architecture. The goal is an integrated theory of phonological, syntactic, and semantic knowledge, and the relation of this knowledge to the rest of the mind.
The architecture comprises independent structures for phonology, syntax, and semantics, plus interface components that establish correspondences among the three structures. The interface components include words, which link fragments of structure in the three domains. Rules of grammar are stated as declarative schemas, which are also pieces of linguistic structure, in the same format as words, but containing variables. Schemas can be used either to create novel structures (their generative role) or to capture patterns of generalization within the lexicon (their relational role).
I will sketch results in two domains of semantics. The first domain is the human understanding of physical objects – their shapes, spatial configurations, motions, and affordances for action. This is shared between two levels of representation: a geometric/topological Spatial Structure that also serves as the upper end of visual, haptic, and proprioceptive perception, and an abstract algebraic Conceptual Structure that coordinates types and tokens, expresses taxonomic relations, distinguishes perception from imagery – and facilitates linguistic expression.
The second domain is social cognition, which predominantly makes use of Conceptual Structure. Notions encoded in this domain, based on the notion of Person, include kinship, group membership, dominance hierarchies, the value of objects and actions to oneself and to others, joint intention and joint action, reciprocity, fairness, rights and obligations, and exchange. The linguistic expression of these notions encompasses a wide range of interesting predicates.
Unlike syntax and phonology, both these domains have clear antecedents in primate cognition, suggesting intriguing questions about the evolution of human cognitive capacities.
Marcel Just, CMU, USA
The new science of thought imaging: Applying machine learning and dimension reduction to fMRI to understand the structure of thoughts
Abstract:
Recent computational techniques, particularly machine learning, are being applied to fMRI brain imaging data, making it possible for the first time to relate patterns of brain activity to specific thoughts. Our early work focused on the identification of the neural signatures of individual concrete concepts, like the thought of an apple or a hammer. It progressed to identifying many other types of concepts, such as emotions, numbers, and abstract concepts.
The more recent work is progressing towards understanding the structure of thought in terms of its neural components. One facet of this approach consists of identifying the neurosemantic dimensions that underlie the representation of concepts in a given domain. For example, the dimensions of periodicity and causal motion underlie many elementary physics concepts. Another facet applies the approach to multi-concept thoughts, such as the neural representation of a paragraph or a sentence.
In addition to the theory development, two areas of application of this approach are producing fruitful results. One application is to neuropsychiatry, where it has been possible to identify suicidal ideation in terms of alterations of a normative pattern of concept representation. Another application is to STEM instruction, where it is possible to assess the neural structure of STEM concepts.
The scientific significance is that we are beginning to understand the basic neurocognitive building blocks of more and more types of thought from simple to complex. This research is in its infancy, but it is advancing rapidly and is providing a new perspective on the brain’s organizational system for representing individual concepts and larger constellations of thought and knowledge.
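As a rough illustration of the kind of machine-learning-plus-dimension-reduction pipeline the abstract refers to (not the authors' actual code), the sketch below identifies which of several concepts a synthetic activation pattern corresponds to, using PCA followed by a linear classifier.

```python
# Illustrative concept-decoding sketch on simulated activation patterns:
# dimension reduction (PCA) followed by a cross-validated linear classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_concepts, n_repeats, n_voxels = 10, 12, 500

# Each concept gets a characteristic "neural signature" plus trial noise
signatures = rng.standard_normal((n_concepts, n_voxels))
X = np.vstack([sig + 0.8 * rng.standard_normal((n_repeats, n_voxels))
               for sig in signatures])
y = np.repeat(np.arange(n_concepts), n_repeats)

clf = make_pipeline(PCA(n_components=20), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=4)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1/n_concepts:.2f})")
```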
Branka Milivojevic, Radboud University, The Netherlands
Story maps: How we infer meaning from narratives
Abstract:
Most everyday activities, such as watching a movie or shopping for groceries, require us to remember what has happened before, so that we can correctly infer the meaning of ongoing experiences and, if required, perform relevant actions. To accomplish this, the brain must relate what is happening now to relevant experiences from the past. This is computationally difficult when the environment, and the ensuing sensory input, are continuously changing, and when the relevant experiences are non-adjacent in space and/or time because of potentially many irrelevant intervening experiences. I propose that we bridge across these non-adjacencies by forming cognitive maps of events, which enable us to infer the relationships between individual events and link them together into narratives. I refer to these cognitive maps of events as story maps. I will discuss a series of experiments in which we used a combination of realistic stimuli (movies and animated videos), fMRI, and across-voxel pattern similarity to examine whether the formation of narrative-based memory networks relies on the hippocampal mechanisms that are critically involved in the formation of cognitive maps in other domains (most notably space and concepts). In combination, these studies suggest that inferences about the relationships between events in narratives may be based on map-like representations in the hippocampus, and that these story maps serve as a context for the integration of separate experiences in memory and the organisation of ongoing experiences.
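A minimal sketch of the across-voxel pattern similarity logic, on synthetic data: patterns evoked by events from the same narrative should correlate more strongly than patterns evoked by events from different narratives.

```python
# Hedged sketch of across-voxel pattern similarity (simulated patterns):
# compare correlations for event pairs within vs. across narratives.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 200

def event_pattern(narrative_template, noise=1.0):
    """Simulate an event's voxel pattern as a noisy copy of its narrative."""
    return narrative_template + noise * rng.standard_normal(n_voxels)

story_a = rng.standard_normal(n_voxels)
story_b = rng.standard_normal(n_voxels)
events = {"a1": event_pattern(story_a), "a2": event_pattern(story_a),
          "b1": event_pattern(story_b), "b2": event_pattern(story_b)}

def similarity(p, q):
    return np.corrcoef(p, q)[0, 1]

within = (similarity(events["a1"], events["a2"]) +
          similarity(events["b1"], events["b2"])) / 2
across = (similarity(events["a1"], events["b1"]) +
          similarity(events["a2"], events["b2"])) / 2
print(f"within-narrative r = {within:.2f}, across-narrative r = {across:.2f}")
```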
Sam Nastase, Princeton University, USA
Estimating a shared response space across heterogeneous naturalistic story-listening data sets
Abstract:
Connectivity hyperalignment can be used to estimate a single shared response space across disjoint datasets. We describe a variant of connectivity hyperalignment—the connectivity-based shared response model—that factorizes aggregated fMRI datasets into a single reduced-dimension shared connectivity space and subject-specific topographic transformations. These transformations resolve idiosyncratic functional topographies and can be used to project response time series into shared space. We benchmark this algorithm on a large, heterogeneous collection of story-listening functional MRI datasets assembled over the course of approximately seven years. This data collection comprises 10 unique auditory story stimuli across 300 scans with 160 unique subjects. Projecting subject data into shared space dramatically improves between-subject story time-segment classification and increases the dimensionality of shared information across subjects. This improvement generalizes to subjects and stories excluded when estimating the shared space. We demonstrate that estimating a simple semantic encoding model in shared space improves between-subject forward encoding and inverted encoding model performance. The shared space estimated across all datasets is distinct from the shared space derived from any particular constituent dataset; the algorithm leverages shared connectivity to yield a consensus shared space conjoining diverse story stimuli.
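For orientation, the sketch below implements a simplified, response-based shared response model in plain Python: it alternates orthogonal Procrustes updates of each subject's transformation with an update of the shared time series. This is not the connectivity-based variant described in the abstract (which factorizes connectivity rather than response time series), and the data are simulated.

```python
# Simplified shared response model sketch: estimate a shared time series S
# and per-subject orthonormal maps W_i by alternating least squares.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_trs, n_voxels, k = 5, 200, 300, 10

S_true = rng.standard_normal((n_trs, k))                 # shared latent responses
data = []
for _ in range(n_subjects):
    W = np.linalg.qr(rng.standard_normal((n_voxels, k)))[0]   # orthonormal map
    data.append(S_true @ W.T + 0.5 * rng.standard_normal((n_trs, n_voxels)))

S = rng.standard_normal((n_trs, k))                      # initialize shared space
for _ in range(20):
    Ws = []
    for X in data:                                       # Procrustes: best W given S
        U, _, Vt = np.linalg.svd(X.T @ S, full_matrices=False)
        Ws.append(U @ Vt)
    S = np.mean([X @ W for X, W in zip(data, Ws)], axis=0)    # best S given the Ws

# Projecting each subject into shared space aligns their time series
proj = [X @ W for X, W in zip(data, Ws)]
r = np.corrcoef(proj[0][:, 0], proj[1][:, 0])[0, 1]
print(f"between-subject correlation of first shared dimension: {r:.2f}")
```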
Una Stojnic, Columbia University, USA
Nonnegotiable meanings
Abstract:
Tradition teaches us that a precondition on successful communication is that interlocutors share mutual knowledge of the meanings of the expressions of a language. Semantics characterizes what competent speakers know, and expect others to know, about the meanings of the expressions of their language (Lewis 1969; Schiffer 1972; Higginbotham 1992), which, in turn, facilitates the encoding and recovery of the intended message. Yet there are often instances of successful communication without prior knowledge of linguistic conventions. The phenomenon of lexical innovation, whereby a speaker uses a sentence containing a pairing of a basic expression with a meaning novel to the speaker and/or hearer, presents just one such kind of case. In response, increasingly influential efforts to account for our capacity to share conventional (semantic) information through linguistic communication without presupposing prior knowledge of linguistic meaning invoke a Dynamic Meaning Hypothesis. The core idea is that meanings—or the linguistic conventions that fix them—are dynamic, constantly changing, and potentially (re-)negotiated by the members of the linguistic community, even during the course of a single conversation (e.g. Armstrong 2016; Cappelen 2018; Carston 2002; Davidson 1986; Haslanger 2012; Ludlow 2014; Plunkett and Sundell 2013). Agents can come to update—or (re-)negotiate—existing linguistic conventions: not only can they introduce new expression-meaning pairings on the fly (as in lexical innovation), but they can completely change the standing meaning of an extant expression, or adjust the background conventions in a way that broadens or narrows the extension determined by the earlier standing meaning. We, however, disagree, and shall argue that meanings are non-negotiable. The kind of negotiation the dynamic meaning hypothesis posits cannot, normally, affect or change the meaning of a word, nor can it secure a mutually shared semantic content of the sort that standard theories of communication presuppose.
Simone Viganò, University of Trento, Italy
Navigating a novel semantic space with distance and directional codes
Abstract:
A recent proposal posits that humans might use the same neuronal machinery to support representations of both spatial and non-spatial information, organising knowledge of the world in an internal “cognitive map” of concepts that is partially based on spatial codes. However, experimental evidence remains elusive. In my talk, I will discuss the results of a recent experiment in which adult participants were familiarized with a novel conceptual space composed of labelled categories of multisensory objects: a semantic space. Before and after non-spatial categorical training, they were presented with pseudorandom sequences of both objects and words during a functional MRI session, while performing a non-spatial categorization task. We reasoned that the sequential presentation of two stimuli referring to different categories implied a movement in the novel semantic space, with a specific travelled distance and direction. Crucially, we showed that after learning, the medial prefrontal cortex (mPFC) encoded a reliable representation of the distances between the recently acquired concepts, as revealed by both fMRI adaptation and model-based RSA, and that activity in the right entorhinal cortex (EHC) showed a periodic modulation as a function of travelled direction, also encoding, to a weaker extent, information about the travelled distance. In both cases, it was possible to recover a faithful bi-dimensional representation of the novel concepts directly from neural data. Our results indicate that the brain regions and coding schemes known to encode relations and movements between spatial locations in mammals can also be used, or recycled, in humans to represent a bi-dimensional multisensory semantic space during a categorisation task.
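The direction-coding analysis can be illustrated with a short sketch on synthetic data: regress a region's response on cos(6θ) and sin(6θ) of the travelled direction θ, then recover the amplitude and orientation of the periodic modulation. Six-fold periodicity is the form typically probed in grid-code studies; the abstract does not commit to a specific periodicity, so treat that choice here as an assumption.

```python
# Hedged sketch of a grid-code-style directional analysis (synthetic data):
# test for six-fold periodic modulation of a response by travelled direction.
import numpy as np

rng = np.random.default_rng(4)
n_trials = 240
theta = rng.uniform(0, 2 * np.pi, n_trials)           # direction of each "move"
signal = 0.5 * np.cos(6 * (theta - 0.3)) + rng.standard_normal(n_trials)

# Fit the six-fold modulation by regressing on cos(6*theta) and sin(6*theta)
X = np.column_stack([np.cos(6 * theta), np.sin(6 * theta), np.ones(n_trials)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
amplitude = np.hypot(beta[0], beta[1])
orientation = np.arctan2(beta[1], beta[0]) / 6        # preferred grid orientation
print(f"six-fold amplitude: {amplitude:.2f}, orientation: {orientation:.2f} rad")
```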
Leila Wehbe, CMU, USA
Using insights from the human brain to interpret and improve NLP models
Abstract:
There has been much progress in neural network models for Natural Language Processing (NLP) that are able to produce embeddings representing the meaning of individual words and word sequences. This has allowed us to investigate the brain representation of natural text and to begin to unravel the mechanisms the brain uses to make sense of language. At the same time, the brain, being the only processing system we have that actually produces and understands language, could be a valuable source of insight on how to build useful representations of language. In this talk I will describe recent work on identifying brain regions that process language meaning at different lengths of context. Building on these results, I will also describe recent work that employs brain activity recordings (from subjects reading natural text) to interpret and improve the representations learned by powerful NLP algorithms that beat many performance benchmarks at their release, such as ELMo and BERT. We show that modifications to BERT's architecture that make its representations more predictive of brain activity also improve BERT's performance on a series of NLP tasks. From this perspective, the cognitive neuroscience of language and NLP can evolve in a symbiotic partnership where progress in one field can illuminate the path for the other.
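As a hedged illustration of how contextual representations such as BERT's can be extracted for encoding analyses (assuming the Hugging Face transformers package and a downloadable bert-base-uncased checkpoint), the snippet below pools final-layer states for the same word under short and long contexts; vectors like these serve as the stimulus features in models such as the one sketched after the Huth abstract above.

```python
# Sketch: extract contextual representations of the same word under
# different context lengths using a pretrained BERT model.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_representation(context):
    """Mean-pool BERT's final-layer hidden states over the tokens of `context`."""
    inputs = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        states = model(**inputs).last_hidden_state    # (1, n_tokens, 768)
    return states.mean(dim=1).squeeze(0)

short = word_representation("the bank")
longer = word_representation("after the flood, the river burst its bank")
print(torch.cosine_similarity(short, longer, dim=0).item())
```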
Adina Williams, Facebook
Mutual information as a measure of "semantic" overlap: Two case studies in the nominal domain
Abstract:
Since Shannon originally proposed his mathematical theory of communication in the middle of the 20th century, information theory has been an influential scientific perspective at the interface between linguistics, cognitive science, and computer science. In this talk, I adopt the guiding assumption that meaningful utterances are those which are informative, i.e., those that transmit a reasonable amount of "information". Recently, powerful state-of-the-art NLP systems have gained in popularity, allowing us to utilize large-scale, multilingual corpora as the basis for measuring information-theoretic quantities. More specifically, we can now measure whether linguistic types in large, multilingual corpora significantly overlap in "meaning", i.e., whether they share a statistically significant amount of mutual information. Here, I present two studies that apply these methods to noun-phrase phenomena known to be non-deterministic, asking how arbitrary two kinds of pairings are: (i) pairings of classifiers with nouns in Mandarin Chinese (Liu et al., 2019), and (ii) pairings of gender on nouns with adjectives or verbs that share a dependency relation with those nouns, in six languages (Williams et al., 2019/submitted). If these pairings are significantly non-arbitrary, we will say that a "semantic" overlap exists between the two linguistic types. In both cases, a significant amount of mutual information was uncovered. These studies can be taken as a step towards a general methodological program that measures information-theoretic quantities in ecologically natural, written corpora as a way to shed light on cognitive-scientific questions about informativity and idiosyncrasy in language use.
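A toy sketch of the underlying measurement: a plug-in estimate of mutual information between two categorical variables (say, nouns and their classifiers), with a permutation test for whether the observed MI exceeds chance. The counts below are simulated, not drawn from the corpora the studies use.

```python
# Plug-in mutual information between two categorical variables, with a
# permutation test against chance. Toy data stand in for corpus counts.
import numpy as np
from collections import Counter

def mutual_information(xs, ys):
    """MI in bits from empirical joint and marginal frequencies."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * np.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

rng = np.random.default_rng(5)
nouns = rng.integers(0, 20, 2000)                          # toy noun identities
classifiers = (nouns // 5 + rng.integers(0, 2, 2000)) % 6  # partly predictable

observed = mutual_information(nouns, classifiers)
null = [mutual_information(nouns, rng.permutation(classifiers))
        for _ in range(200)]
p = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"MI = {observed:.3f} bits, permutation p = {p:.3f}")
```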