Keynote Speakers' Summaries

Linguistics for Everyone.

Wayne O'Neil, Department of Linguistics & Philosophy, M.I.T.

(The Full Paper is also available.)

Knowledge of language is a, perhaps the, defining mental faculty of the human mind/brain. Yet, while it is certainly the best understood of the human mental faculties, it is the least studied in a general education despite the fact that its study, pursued in the way to be discussed here, can quite quickly turn the apparent mysteries of language into problems with a clear and interesting range of explanations.

Linguistic inquiry of the sort we have in mind also provides an opportunity for students to examine a host of deeply ingrained language attitudes and prejudices that they need to seriously challenge and change if their world is to be a better place than the one they were born into.

Nonmonotonicity in Linguistics: From Transformations to Constraint Ranking in Optimality Theory.

K.P. Mohanan, National University of Singapore.

A central ingredient of the scientific understanding of nature is the search for laws that underlie patterns of observable phenomena. A law (e.g. 'The pressure of a given body of gas is inversely proportional to its volume.' 'The force of attraction between two bodies is inversely proportional to the square of the distance between them.') is a proposition that expresses a regularity. Laws of language structure are formally expressed in theoretical linguistics variously as rules, constraints, principles, templates, filters, conventions, conditions, and so on.

Experience in the formulation of laws in most domains reveals that what appears to be a complex pattern is often the result of the interaction between fairly simple laws. Thus, the laws governing gravitational fields and magnetic fields are by themselves quite simple, but the interaction between the Earth's gravitational field and two or more magnetic fields around a moving pendulum can result in an extremely complex orbit.

An important type of interaction between laws is one that involves conflict resolution, that is, the calculation of the outcome when two laws entail mutually contradictory requirements in a given situation. The resolution of conflicts between laws of language structure has been formally expressed in theoretical linguistics in terms of the mechanisms of transformational rules, rule ordering, underspecification, constraint ranking, repair, structure preservation, the Elsewhere Condition, defaults, and so on.

From the perspective of formal logic, systems of reasoning that permit conflict resolution have the property of nonmonotonicity. The formal mechanisms of conflict resolution, ranging from transformational rules to defaults and constraint ranking, may therefore be viewed as various procedural and declarative implementations of nonmonotonic reasoning. In my talk, I will examine each of these nonmonotonic devices and try to unearth what they can and cannot capture. Rather than using a collection of alternative mechanisms for the same function, it stands to reason that we should select a single mechanism. In order to avoid the proliferation of nuts-and-bolts solutions and unnecessary debates on mechanics, it is essential that we have a clear understanding of the formal and conceptual nature of the nonmonotonic devices available to us.
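To make the idea of conflict resolution by constraint ranking concrete, the following is a minimal sketch in the spirit of Optimality Theory: ranked violable constraints are compared lexicographically, so a higher-ranked constraint's verdict overrides any number of lower-ranked violations. The constraints (NoCoda, Max-IO) and the toy candidates are standard textbook illustrations, not material from the talk itself.

```python
def evaluate(candidates, ranked_constraints):
    """Pick the candidate whose violation profile is best under the
    ranking: compare violation counts constraint by constraint, from
    highest-ranked to lowest (lexicographic order on the tuple)."""
    def profile(candidate):
        return tuple(c(candidate) for c in ranked_constraints)
    return min(candidates, key=profile)

# Two toy constraints that conflict. A candidate is a pair
# (output form with "." syllable breaks, input form).
def no_coda(cand):
    # One violation per syllable that does not end in a vowel.
    word, _input = cand
    return sum(1 for syll in word.split(".")
               if not syll.endswith(("a", "e", "i", "o", "u")))

def max_io(cand):
    # One violation per input segment deleted in the output.
    word, input_form = cand
    return max(0, len(input_form) - len(word.replace(".", "")))

candidates = [("pat", "pat"), ("pa", "pat")]  # keep the coda vs. delete /t/

# Ranking A: NoCoda >> Max-IO -- deletion wins.
print(evaluate(candidates, [no_coda, max_io]))  # ('pa', 'pat')
# Ranking B: Max-IO >> NoCoda -- the faithful form wins.
print(evaluate(candidates, [max_io, no_coda]))  # ('pat', 'pat')
```

Reranking the same two constraints flips the winner, which is the nonmonotonic effect at issue: no constraint is inviolable, and the outcome is settled by which law dominates.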

Categorizing Morphemes: Content Morphemes and Two Types of Functional Elements.

Carol Myers-Scotton, Linguistics Program, University of South Carolina.

This paper argues that the two accepted ways of distinguishing lexical categories (the open/closed class distinction and the thematic/functional distinction) are inadequate to account for the facts of language. Neither explains observed surface forms in Second Language Acquisition, language contact phenomena (e.g., codeswitching), and Broca's aphasia. Moreover, these distinctions do not account for how morphemes are entered in the mental lexicon and accessed in production. This paper demonstrates that an "election-based classification" of morphemes as either content morphemes or one of two types of functional elements (=system morphemes) results in a predictive model of the mental lexicon and language structure. Bock and Levelt (1994) argue that lemmas supporting content morphemes are directly elected at the conceptual level, while functional elements are concurrently indirectly elected. This paper endorses a modified view: some system morphemes are activated at the conceptual level ("indirectly-elected system morphemes"); however, while the slots for a second type of system morpheme ("structurally-assigned system morphemes") are also projected then, the morphemes themselves are not activated until a later stage (Jake and Myers-Scotton 1997, Myers-Scotton and Jake 1997). This view of language competence and production explains many phenomena, e.g., acquisition order in Second Language Acquisition, differential language loss in aphasia, and structures in bilingual language (codeswitching, for example). For example, in SLA, 3rd person -s, past -ed, progressive -ing, and past participle -ed are all considered closed class items and functional elements; yet, there are statistically significant differences in their acquisition patterns which only the approach taken here explains. Learners acquire progressive -ing and past participle forms before 3rd person -s and past -ed.

The paper reports on a number of studies of quantitative data. It demonstrates how analyzing sources of external evidence, where linguistic structures are psycholinguistically "magnified", reveals the differential roles of morphemes and, therefore, the nature of lexical structure.