Joe Pater

University of Massachusetts

Friday, December 4th at 12:30 PM in AP&M 4301. 

Grammatical Agent-Based Modeling of Linguistic Typology

(joint work with Jennifer Culbertson, Coral Hughto, and Robert Staubs)

Abstract: What effect does learning have on the typological predictions of a theory of grammar? One way to answer this question is to examine the output of agent-based models (ABMs), in which learning can shape the distribution over languages that results from agent interaction. Prior research on ABMs and language has tended to assume relatively simple agent-internal representations of language, with the goal of showing how linguistic structure can emerge without being postulated a priori (e.g. Kirby and Hurford 2002, Wedel 2007). In this paper we show that when agents operate with more articulated grammatical representations, typological skews emerge in the output of the models that are not directly encoded in the grammatical system itself. This has deep consequences for grammatical theory construction, which often makes fairly direct inferences from typology to properties of the grammatical system. We argue that abstracting away from learning may lead to missed opportunities in typological explanation, as well as to faulty inferences about the nature of grammar. By adding learning to typological explanation, grammatical ABMs allow for accounts of typological tendencies, such as the tendency toward uniform syntactic headedness (Greenberg 1963, Dryer 1992). In addition, incorporating learning can lead to predicted near-zeros in typology. We show this with the case of unrealistically large stress windows, which can be generated by a weighted constraint system, but which have near-zero frequency in the output of our ABM incorporating the same constraints. The too-large-window prediction is one of the few arguments in the extant literature for Optimality Theory's ranked constraints over weighted ones.
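The kind of dynamic described in the abstract can be sketched in a toy simulation. The code below is a minimal illustration, not the authors' actual model: it assumes hypothetical weighted constraints (a general HEAD-LEFT constraint shared across phrase types, plus one phrase-specific constraint each for "VP" and "PP"), agents whose production probabilities come from a logistic function of summed weights, and perceptron-style learning from other agents' productions. Because the shared general constraint couples the two phrase types, communities drift toward harmonic (uniformly headed) languages more often than chance, even though no grammar in the system categorically rules out disharmonic orders.

```python
import math
import random
from collections import Counter

PHRASES = ["VP", "PP"]  # two phrase types, each realized head-left ("L") or head-right ("R")

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Agent:
    """An agent with weighted constraints: one general HEAD-LEFT weight
    shared by all phrase types, plus one specific weight per phrase type."""
    def __init__(self):
        self.w = {"GEN": 0.0, "VP": 0.0, "PP": 0.0}

    def p_left(self, phrase):
        # Probability of head-left order: logistic function of summed weights
        return sigmoid(self.w["GEN"] + self.w[phrase])

    def produce(self, phrase, rng):
        return "L" if rng.random() < self.p_left(phrase) else "R"

    def learn(self, phrase, observed, rate=0.2):
        # Perceptron-style update: nudge both the general and the
        # phrase-specific weight toward the observed order
        target = 1.0 if observed == "L" else 0.0
        err = target - self.p_left(phrase)
        self.w["GEN"] += rate * err
        self.w[phrase] += rate * err

def run_community(n_agents=5, steps=3000, seed=0):
    """Let agents learn from each other's productions, then classify the
    community's language by the majority order for each phrase type."""
    rng = random.Random(seed)
    agents = [Agent() for _ in range(n_agents)]
    for _ in range(steps):
        speaker, hearer = rng.sample(agents, 2)
        phrase = rng.choice(PHRASES)
        hearer.learn(phrase, speaker.produce(phrase, rng))
    lang = ""
    for ph in PHRASES:
        mean_p = sum(a.p_left(ph) for a in agents) / n_agents
        lang += "L" if mean_p > 0.5 else "R"
    return lang  # e.g. "LL" = harmonic head-left, "LR" = disharmonic

if __name__ == "__main__":
    counts = Counter(run_community(seed=s) for s in range(200))
    harmonic = counts["LL"] + counts["RR"]
    print(counts, "harmonic share:", harmonic / 200)
```

The skew arises purely from the interaction of learning and the shared constraint: because the general weight is updated on every observation regardless of phrase type, drift in one phrase type drags the other in the same direction, so harmonic outcomes accumulate across simulated communities even though every one of the four language types is expressible by some weighting.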