Latest findings

Explain Me This: Creativity, Competition, and the Partial Productivity of Constructions. Adele E. Goldberg (2019, Princeton University Press). Table of Contents.

Metaphorical language processing and amygdala activation in L1 and L2. Francesca M.M. Citron, Nora Michaelis, Adele E. Goldberg (submitted)

The present study investigates the neural correlates of processing conventional figurative language in non-native speakers in comparison with native speakers. Proficient Italian L2 learners of German and German native speakers read conventional metaphorical statements as well as literal paraphrases that were comparable on a range of psycholinguistic variables. Results confirm previous findings that native speakers show increased activity for metaphorical processing, with left amygdala activation increasing with increasing metaphoricity. At the whole-brain level, L2 learners showed the expected overall differences in activation when compared to native speakers (in the fronto-temporal network). But L2 speakers did not show any distinctive activation outside the caudate nucleus as metaphoricity increased, suggesting that the L2 speakers were less affected by increasing metaphoricity than native speakers were. Yet, with small-volume correction, a peak of activation in the amygdala is evident, consistent with the interpretation that metaphorical language is more engaging for proficient L2 speakers as well as for native speakers.

The Sufficiency Principle Hyperinflates the Price of Productivity. Adele E. Goldberg (to appear, Linguistic Approaches to Bilingualism).

Whenever the number of exceptions to a rule reaches its maximum, as it often does in the examples cited, the Sufficiency Principle (SP) claims that learners must witness, and retain in memory, every item that could potentially follow a rule actually following the rule, in order for the rule to become “productive.” While children have been argued to be conservative learners, the SP takes conservatism to a whole new level.

Modeling the Acquisition of Words with Multiple Meanings. Libby Barak, Sammy Floyd, and AEG. To appear (2019). Society for Computation in Linguistics (SCiL). 

Learning vocabulary is essential to successful communication. Complicating this task is the underappreciated fact that most common words are associated with multiple senses (are polysemous) (e.g., baseball cap vs. cap of a bottle), while other words are homonymous, evoking meanings that are unrelated to one another (e.g., baseball bat vs. flying bat). Models of human word learning have thus far failed to represent this level of naturalistic complexity. We extend a feature-based computational model to allow for multiple meanings, while capturing the gradient distinction between polysemy and homonymy by using structured sets of features. Results confirm that the present model correlates better with human data on novel word learning tasks than the existing feature-based model.

When regularization gets it wrong: Children over-simplify language input only in production. Jessie Schwab, Casey Lew-Williams, and Adele E. Goldberg. 2018. Journal of Child Language [online].

Children tend to regularize their productions when exposed to artificial languages, an advantageous response to unpredictable variation. But generalizations in natural languages are typically conditioned by factors that children ultimately learn. In two experiments, adult and six-year-old learners witnessed two novel classifiers, probabilistically conditioned by semantics. Whereas adults displayed high accuracy in their productions—applying the semantic criteria to familiar and novel items—children were oblivious to the semantic conditioning. Instead, children regularized their productions, over-relying on only one classifier. However, in a two-alternative forced-choice task, children’s performance revealed greater respect for the system’s complexity: they selected both classifiers equally, without bias toward one or the other, and displayed better accuracy on familiar items. Given that natural languages are conditioned by multiple factors that children successfully learn, we suggest that their tendency to simplify in production stems from retrieval difficulty when a complex system has not yet been fully learned.

L2 speakers are more accepting of unconventional language than native speakers. Karina Tachihara & Adele E. Goldberg. CUNY 2018, poster [pdf]; paper under review.

Native speakers strongly disprefer novel or unconventional combinations of verbs in argument structure constructions when a conventional competing alternative (“CA”) already exists (e.g., ?He forced that she compete; CA: He forced her to compete). L2 speakers also rate unconventional sentences as less acceptable than familiar formulations, but they are somewhat more tolerant than L1 speakers are (Experiment 1). In a paraphrase task, L2 speakers were less likely than L1 speakers to provide CAs as paraphrases of the unconventional sentences (Experiment 2), and in a 2-alternative-forced-choice task, L2 speakers were somewhat less likely than L1 speakers to prefer the CAs over the unconventional formulations (Experiment 3). Even when L2 speakers did prefer the CAs in the 2AFC task, they were more generous with their ratings of the unconventional alternatives than L1 speakers were (Exp. 3). We investigated whether inducing a greater awareness of relevant CAs would lead L2 speakers’ judgments on unconventional sentences to align more closely with native speakers’, but found no effect (Experiment 4). A verbatim memory recognition test showed lower performance for L2 speakers, with performance correlating with judgments (Experiment 5). Finally, a meta-analysis of judgment data across studies finds proficiency to be a modest predictor of judgments, while transfer effects are not predictive in any straightforward way in the present studies.

English passive priming is due to by: another look at Bock and Loebell (1990). Jayden Ziegler, Giulia Bencini, AEG, and Jesse Snedeker. CUNY 2018, poster (paper to be submitted).

Linguistic generalization on the basis of function and constraints on the basis of statistical preemption. Florent Perek & Adele E. Goldberg. 2017. Cognition. [pdf]

How do people learn to use language in creative but constrained ways? Experiment 1 investigates linguistic creativity by exposing adult participants to two novel word order constructions that differ in terms of their semantics: One construction exclusively describes actions that have a strong effect; the other construction describes actions with a weaker but otherwise similar effect. One group of participants witnessed novel verbs only appearing in one construction or the other, while another group witnessed a minority of verbs alternating between constructions. Subsequent production and judgment results demonstrate that participants in both conditions extended and accepted verbs in whichever construction best described the intended message.  Unlike related previous work, this finding is not naturally attributable to prior knowledge of the likely division of labor between verbs and constructions. In order to investigate how speakers learn to constrain generalizations, Experiment 2 includes one verb (out of 6) that was witnessed in a single construction to describe both strong and weak effects, essentially statistically preempting the use of the other construction. In this case, participants were much more lexically conservative with this verb and other verbs, while they nonetheless displayed an appreciation of the distinct semantics of the constructions with new novel verbs. Results indicate that the need to better express an intended message encourages generalization, while statistical preemption constrains generalization by providing evidence that verbs are restricted in their distribution.

Modeling the Partial Productivity of Constructions. Libby Barak, Adele E. Goldberg. Association for the Advancement of Artificial Intelligence (AAAI) Spring Symposium, 2017. [pdf]

People regularly produce novel sentences that sound native-like (e.g., she googled us the information), while they also recognize that other novel sentences sound odd, even though they are interpretable (e.g., ?She explained us the information). This work offers a Bayesian, incremental model that learns clusters that correspond to grammatical constructions of different type and token frequencies. Without specifying in advance the number of constructions, their semantic contributions, or whether any two constructions compete with one another, the model successfully generalizes when appropriate while identifying and suggesting an alternative when faced with overgeneralization errors. Results are consistent with recent psycholinguistic work that demonstrates that the existence of competing alternatives and the frequencies of those alternatives play a key role in the partial productivity of grammatical constructions. The model also goes beyond the psycholinguistic work in that it investigates a role for constructions’ overall frequency.

The Blowfish Effect: Children and Adults Use Atypical Exemplars to Infer More Narrow Categories during Word Learning. Lauren L. Emberson, Nicole Loncar, Carolyn Mazzei, Isaac Treves, Adele E. Goldberg (submitted).

Learners preferentially interpret novel nouns at the basic level (“dog”) rather than at a more narrow level (“Labrador”). This “basic-level bias” is mitigated by statistics: children and adults are more likely to interpret a novel noun at a more narrow level if they witness “a suspicious coincidence” — the word applied to 3 exemplars of the same narrow category. Independent work has found that exemplar typicality influences learners’ inferences and category learning. We bring these lines of work together to investigate whether the content (typicality) of a single exemplar affects the level of interpretation of words and whether an atypicality effect interacts with input statistics. Results demonstrate that both 4–5-year-olds and adults tend to assign a narrower interpretation to a word if it is exemplified by an atypical category member. This atypicality effect is roughly as strong as, and independent of, the suspicious coincidence effect, which is replicated.

Comparing Computational Cognitive Models of Generalization in a Language Acquisition Task. Libby Barak, Adele E. Goldberg and Suzanne Stevenson. EMNLP 2016. [pdf]

Natural language acquisition relies on appropriate generalization: the ability to produce novel sentences, while learning to restrict productions to acceptable forms in the language. Psycholinguists have proposed various properties that might play a role in guiding appropriate generalizations, looking at learning of verb alternations as a testbed. Several computational cognitive models have explored aspects of this phenomenon, but their results are hard to compare given the high variability in the linguistic properties represented in their input. In this paper, we directly compare two recent approaches, a Bayesian model and a connectionist model, in their ability to replicate human judgments of appropriate generalizations. We find that the Bayesian model more accurately mimics the judgments due to its richer learning mechanism that can exploit distributional properties of the input in a manner consistent with human behaviour.

Subtle Implicit Language Facts Emerge from the Functions of Constructions. Adele E. Goldberg. 2016. Frontiers in Psychology 6. doi: 10.3389/fpsyg.2015.02019 [web-link]

Much has been written about the unlikelihood of innate, syntax-specific, universal knowledge of language (Universal Grammar) on the grounds that it is biologically implausible, unresponsive to cross-linguistic facts, theoretically inelegant, and implausible and unnecessary from the perspective of language acquisition. While relevant, much of this discussion fails to address the sorts of facts that generative linguists often take as evidence in favor of the Universal Grammar Hypothesis: subtle, intricate knowledge about language that speakers implicitly know without being taught. This paper revisits a few often-cited such cases and argues that, although the facts are sometimes even more complex and subtle than is generally appreciated, appeals to Universal Grammar fail to explain the phenomena. Instead, such facts are strongly motivated by the functions of the constructions involved. The following specific cases are discussed: (a) the distribution and interpretation of anaphoric one, (b) constraints on long-distance dependencies, (c) subject-auxiliary inversion, and (d) cross-linguistic linking generalizations between semantics and syntax.

Ellipsis by constructions. Adele E. Goldberg and Florent Perek. In Handbook of Ellipsis. Edited by Jeroen van Craenenbroeck & Tanja Temmerman. Oxford University Press.

The existence of elliptical constructions in languages is motivated by the Gricean preference to avoid saying more than is necessary. We suggest interpretation is recovered by a semantic pointer function that is independently motivated and consistent with psycholinguistic evidence. This article reviews evidence that ellipsis in language is best understood as licensed by particular constructions, each with its own form and functional properties. Our account of gapping is quite general in that it includes cases of traditional “argument cluster coordination”; it is at the same time more restrictive than other accounts in including a constraint on register. We compare gapping and other ellipsis constructions in French and English with an emphasis on their differences. Finally, we discuss derivational approaches to ellipsis and conclude that a constructionist approach is more promising than a single, over-arching rule-based approach since it is in a better position to capture distinctions as well as commonalities among ellipsis constructions, within and across languages.

Conventional metaphors in longer passages evoke affective brain response. Francesca M.M. Citron, Jeremie Güsten, Nora Michaelis, Adele E. Goldberg. 2016. NeuroImage. [pdf] [link]

Conventional metaphorical sentences such as She’s a sweet child have been found to elicit greater amygdala activation than matched literal sentences (e.g., She’s a kind child). In the present fMRI study, this finding is strengthened and extended with naturalistic stimuli involving longer passages and a range of conventional metaphors. In particular, a greater number of activation peaks (four) were found in the bilateral amygdala when passages containing conventional metaphors were read than when their matched literal versions were read (a single peak); while the direct contrast between metaphorical and literal passages did not show significant amygdala activation, a parametric analysis revealed that BOLD signal changes in the left amygdala correlated with an increase in metaphoricity ratings across all stories. Moreover, while a measure of complexity was positively correlated with increase in activation of a broad bilateral network mainly involving the temporal lobes, complexity was not predictive of amygdala activity. Thus, the results suggest that amygdala activation is not simply a result of stronger overall activity related to language comprehension, but is more specific to the processing of metaphorical language.

Neural systems involved in processing novel linguistic constructions and their visual referents. Matthew A. Johnson, Nick Turk-Browne, and Adele E. Goldberg. 2015. Language, Cognition, and Neuroscience.

In language, abstract phrasal patterns provide an important source of meaning, but little is known about whether or how such constructions are used to predict upcoming visual scenes. Findings from two fMRI studies indicate that initial exposure to a novel construction allows its semantics to be used for such predictions. Specifically, greater activity in the ventral striatum, a region sensitive to prediction errors, was linked to worse overall comprehension of a novel construction. Moreover, activity in occipital cortex was attenuated when a visual event could be inferred from a learned construction, which may reflect predictive coding of the event. These effects disappeared when predictions were unlikely: that is, when phrases provided no additional information about visual events. These findings support the idea that learners create and evaluate predictions about new instances during comprehension of novel linguistic constructions.

Judgment evidence for statistical preemption: It is relatively better to vanish than to disappear a rabbit, but a lifeguard can equally well backstroke or swim children to shore. Clarice Robenalt & Adele E. Goldberg. 2015. Cognitive Linguistics 26.3: 467–504. [pdf]

How do speakers know when they can use language creatively and when they cannot? Prior research indicates that higher frequency verbs are more resistant to overgeneralization than lower frequency verbs with similar meaning and argument structure constraints. This result has been interpreted as evidence for conservatism via entrenchment, which proposes that people prefer to use verbs in ways they have heard before, with the strength of dispreference for novel uses increasing with overall verb frequency. This paper investigates whether verb frequency is actually always relevant in judging the acceptability of novel sentences or whether it only matters when there is a readily available alternative way to express the intended message with the chosen verb, as is predicted by statistical preemption. Two experiments are reported in which participants rated novel uses of high and low frequency verbs in argument structure constructions in which those verbs do not normally appear. Separate norming studies were used to divide the sentences into those with and without an agreed-upon preferred alternative phrasing which would compete with the novel use for acceptability. Experiment 2 controls for construction type: all target stimuli are instances of the caused-motion construction. In both experiments, we replicate the stronger dispreference for a novel use with a high frequency verb relative to its lower frequency counterpart, but only for those sentences for which there exists a competing alternative phrasing. When there is no consensus about a preferred way to phrase a sentence, verb frequency is not a predictive factor in sentences’ ratings. We interpret this to mean that while speakers prefer familiar formulations to novel ones, they are willing to extend verbs creatively if there is no readily available alternative way to express the intended meaning.

L2 learners do not take competing alternative expressions into account the way L1 learners do. Clarice Robenalt & Adele E. Goldberg. 2015. Language Learning: [pdf]

The present study replicates the findings in Robenalt & Goldberg (2015, Cognitive Linguistics; immediately above) with a group of native speakers and critically extends the paradigm to non-native speakers. Recent findings in second language acquisition suggest that second language (L2) learners are less able to generate online expectations during language processing, which in turn predicts a reduced ability to differentiate between novel sentences that have a competing alternative and those that do not. We test this prediction and confirm that while L2 speakers display evidence of learning from positive exemplars, they show no evidence of taking competing grammatical alternatives into account, except at the highest quartile of speaking proficiency, in which case L2 judgments align with native speakers’.

A-adjectives, statistical preemption, and the evidence: Reply to Yang (2015). Adele E. Goldberg and Jeremy K. Boyd. 2015. Language 91.1: e184–e197. [pdf]

A certain class of English adjectives known as a-adjectives resists appearing attributively as prenominal modifiers (e.g., ??the afraid boy, ??the asleep man). Boyd & Goldberg (2011) had offered experimental evidence suggesting that the dispreference is learnable on the basis of categorization and statistical preemption: repeatedly witnessing predicative formulations in contexts in which the attributive form would otherwise be appropriate. The present paper addresses Yang’s (2015) counterproposal for how a-adjectives are learned, and his instructive critique of statistical preemption. The counterproposal is that children receive evidence that a-adjectives behave like locative particles in occurring with certain adverbs such as far and right. However, in an analysis of the 450 million word COCA corpus, the suggested adverbial evidence is virtually non-existent (e.g., *far alive; *straight afraid). In fact, these adverbs occur much more frequently with typical adjectives (e.g., far greater, straight alphabetical). Furthermore, relating a-adjectives to locative particles does not provide evidence of the restriction, because locative particles themselves can appear as prenominal modifiers (the down payment, the outside world). The critique of statistical preemption is based on a 4.3 million word corpus analysis of child-directed speech that suggests that children cannot amass the requisite evidence before they are three years old. While we clarify which sorts of data are relevant to statistical preemption, we concur that the required data is relatively sparsely represented in the input. In fact, recent evidence suggests that children are not actually cognizant of the restriction until they are roughly ten years old, an indication that input an order of magnitude larger than 4.3 million words may be required. We conclude that a combination of categorization and statistical preemption is consistent with the available evidence of how the restriction on a-adjectives is learned.

One among many: anaphoric one and its relationship to numeral one. Adele E. Goldberg & Laura A. Michaelis. 2016. Cognitive Science. [pdf]

One anaphora (e.g., She has a better one) has been used as a key diagnostic in syntactic analyses of the English noun phrase, and ‘one-replacement’ has also figured prominently in debates about the learnability of language. However, much of this work has been based on faulty premises, as a few perceptive researchers, including Ray Jackendoff, have made clear. Abandoning the view of anaphoric one (a-one) as a form of syntactic replacement allows us to take a fresh look at various uses of the word one. In the present work, we investigate its use as a cardinal number (1-one) in order to better understand its anaphoric use. Like all cardinal numbers, 1-one can only quantify an individuated entity and provides an indefinite reading by default. Owing to unique combinatoric properties, cardinal numbers defy consistent classification as determiners, quantifiers, adjectives or nouns. Once the semantics and distribution of cardinal numbers including 1-one are appreciated, many properties of a-one follow with minimal stipulation. We claim that 1-one and a-one are distinct but very closely related lexemes. When 1-one appears without a noun (e.g., Take one), it is nearly indistinguishable from a-one (e.g., take one)—the only differences being interpretive (1-one foregrounds its cardinality while a-one does not) and prosodic (presence versus absence of primary accent). While we ultimately argue that a family of constructions is required to describe the full range of syntactic contexts in which one appears, the proposed network accounts for properties of a-one by allowing it to share (inherit) most of its syntactic and interpretive constraints from its historical predecessor, 1-one.

Generalizing beyond the input: the functions of the constructions matter. Florent Perek & Adele Goldberg. 2015. Journal of Memory and Language 84: 109–127.

A growing emphasis on statistics in language learning raises the question of whether learning a language consists wholly in extracting statistical regularities from the input. In this paper we explore the hypothesis that the functions of learned constructions can lead learners to use language in ways that go beyond the statistical regularities that have been witnessed. The present work exposes adults to two novel word order constructions that differed in terms of their functions: one construction but not the other was exclusively used with pronoun undergoers. In Experiment 1, participants in a lexicalist condition witnessed three novel verbs used exclusively in one construction and three exclusively in the other construction; a distinct group, the alternating condition, witnessed two verbs occurring in both constructions and two other verbs in each of the constructions exclusively. Production and judgment results demonstrate that participants in the alternating condition accepted all verbs in whichever construction was more appropriate, even though they had seen just two out of six verbs alternating. The lexicalist group was somewhat less productive, but even they displayed a tendency to extend verbs to new uses. Thus participants tended to generalize the constructions for use in appropriate discourse contexts, ignoring evidence of verb-specific behavior, especially when even a minority of verbs were witnessed alternating. A second experiment demonstrated that participants’ behavior was not likely due to an inability to learn which verbs had occurred in which constructions. Our results suggest that construction learning involves an interaction of witnessed usage together with the functions of the constructions involved.

Tuning in to the verb-particle construction in English. Adele E. Goldberg. To appear in Léa Nash and Pollet Samvelian (eds.), Approaches to Complex Predicates.

This work investigates English verb particle combinations (e.g., put on) and argues that item-specific and general information are needed and should be related within a default inheritance hierarchy. When verb particle combinations appear within verb phrases, a tripartite phrasal syntax is defended, whether or not the V and P are adjacent (e.g., She put on the wrong shoes; she put the wrong shoes on). The < V NP P > order is motivated as the default word order by explicitly relating a verb-particle construction to the caused-motion construction (e.g., she put the shoes on her feet). Well-known and independently needed processing considerations related to complement length, information status, and semantics motivate system-wide generalizations that can serve to override the default word order. Lexical verb-particle combinations (e.g., a pickup truck; a showdown) and an idiomatic case, V-off, are also briefly discussed as providing further evidence for the need for both item-specific and more general constructions.

Compositionality. Adele E. Goldberg. 2016. In N. Reimer (ed.) Routledge Handbook of Semantics. 419-430.

How do people glean meaning from language? A Principle of Compositionality is generally understood to entail that the meaning of every expression in a language must be a function of the meaning of its immediate constituents and the syntactic rule used to combine them. This paper explores perspectives that range from acceptance of the principle as a truism, to rejection of the principle as false. Controversy has arisen based on the role of extra-constituent linguistic meaning (idioms; certain cases of paradigmatic morphology; constructional meanings; intonation), and context (e.g., metonymy; the resolution of ambiguity and vagueness).