Languages with two i’s

Doing some research on underspecification and its use as an exceptionality-generating device for our work on Logical Phonology, I have encountered a common pattern: there are quite a few languages, of wide genetic and geographic distribution, whose descriptions suggest their phonological inventories contain two underlying segments ordinarily realized as [i(ː)] but showing different phonological behaviors. The following is a brief catalog followed by some discussion.

Barrow Inupiaq

I have already discussed this well-known case, treated by Dresher (2009:§7.2.1), who in turn draws the data from Kaplan (1981:§3.22). In a number of Eskimo-Aleut dialects, a four-phoneme vowel system *i, *u, *ə, *a has been simplified to an ordinary three-vowel system by the merger of *ə into *i. In Barrow Inupiaq, however, there is a “strong i” (< *i) which triggers palatalization of a following coronal consonant whereas “weak i” (< *ə) does not.

(1) Barrow Inupiaq (Kaplan 1981:§3.22, his 27-29):

a. iglu ‘house’, iglulu ‘and a house’, iglunik ‘houses’
b. ini ‘place’, inilu ‘and a place’, ininik ‘places’
c. iki ‘wound’, ikiʎu ‘and a wound’, ikiɲik ‘wounds’

Presumably, the stem-final i in (1b) is weak and the one in (1c) is strong. One possible way to derive this (slightly different from what I said in my earlier post) is to treat palatalization as triggered by [−Back, +High] strong i, to leave weak i underspecified for Back, and to use the following late redundancy rule to fill in the missing specification on weak i.

(2) [+High] ⊔ {−Back}

Given the definition of unification, this will non-vacuously apply to weak i, vacuously apply to strong i, and unification with [+High, +Back] /u/ will fail; /a/ does not meet the structural description because it is [−High].
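To make the unification semantics concrete, here is a minimal Python sketch of rule (2). Feature bundles are modeled as dictionaries in which an absent feature means underspecification; the particular feature values assigned to each segment below are my own illustrative assumptions, not taken from Kaplan or Dresher.

```python
# Sketch of feature unification as used in Logical Phonology.
# A feature bundle is a dict mapping feature names to "+" or "-";
# a missing key means the segment is underspecified for that feature.
# Segment specifications here are illustrative assumptions.

def meets_sd(segment, sd):
    """A segment meets the structural description iff every
    specification in the SD is present with the same value."""
    return all(segment.get(f) == v for f, v in sd.items())

def unify(segment, update):
    """Unification (⊔): returns an updated copy unless the segment
    already bears a conflicting value for some feature, in which
    case unification fails and the segment is left unchanged."""
    for f, v in update.items():
        if segment.get(f, v) != v:   # conflict: unification fails
            return segment
    return {**segment, **update}

def rule_2(segment):
    """(2) [+High] ⊔ {-Back}: fill in [-Back] on [+High] segments."""
    if meets_sd(segment, {"High": "+"}):
        return unify(segment, {"Back": "-"})
    return segment

weak_i   = {"High": "+", "Low": "-"}                 # underspecified for Back
strong_i = {"High": "+", "Low": "-", "Back": "-"}
u        = {"High": "+", "Low": "-", "Back": "+"}
a        = {"High": "-", "Low": "+", "Back": "+"}

assert rule_2(weak_i)["Back"] == "-"   # non-vacuous application
assert rule_2(strong_i) == strong_i    # vacuous application
assert rule_2(u) == u                  # unification fails; /u/ unchanged
assert rule_2(a) == a                  # SD not met: /a/ is [-High]
```

The four assertions correspond one-to-one to the four outcomes described in the text: non-vacuous application, vacuous application, unification failure, and failure to meet the structural description.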

Czech

Anderson & Browne (1973; henceforth AB) discuss several vocalic alternations in two registers of Czech. I will focus on the literary language (“LL” for AB; henceforth just Czech) and put aside the additional complications associated with their analysis of the common spoken register, which makes some possibly problematic simplifying assumptions for the sake of argument.

The surface front vowels of Czech are [i, iː, ɛ, ɛː].1 However, specific instances of these vowels either do or do not trigger palatalization of the preceding consonant.2 For example, the masculine animate nom.pl. -i [-i] is palatalizing, but the feminine nom.pl. -y [-i] and the gen.pl. -ych [-ix] are not:

(3) sestřin [sɛstr̝in] ‘sister’s’, sestřini [sɛstr̝iɲi] masc.anim. nom.pl., 
sestřiny [sɛstr̝ini] fem. nom.pl., sestřinych [sɛstr̝inix] gen.pl.

The same is true of [ɛ]-initial suffixes. For example, the [-ɛ] prep.sg. suffix is palatalizing, but the prep.pl. -ech [-ɛx] is not (note that the -o suffix in the citation form is the neut. nom.sg.):

(4) okno [okno] ‘window’, okně [okɲɛ] prep.sg., oknech [oknɛx] prep.pl.

Czech [i, iː] are reflexes of both Old Czech i(ː) and ɨ(ː), the latter two merging into the former two respectively. This distinction is preserved in the writing system: [i, iː] from earlier ɨ(ː) are written <y, ý>, respectively. AB adopt the sensible proposal that this distinction is also preserved phonologically. That is, they suggest non-palatalizing <y, ý> are underlyingly [+High, −Low, −Round, +Back] and palatalizing <i, í> are [+High, −Low, −Round, −Back]. While they do not sketch out a full analysis, presumably <y, ý> are fronted after palatalization has applied. Alternatively (and more in tune with the approach of Gorman and Reiss 2024), one might instead propose that <y, ý> are [+High, −Low, −Round] and underspecified for [Back]. Palatalization will be triggered by a following [−Back] vowel, and then the following unification rule will (non-vacuously) front <y, ý>:

(5) [−Low, −Round] ⊔ {−Back}

As written, this will also handle the parallel behavior of non-palatalizing <e>, giving it the same featural representation as palatalizing <ě> [ɛ]. Under either account, though, Czech has two i’s (in long and short variants) and two e’s too.

Hungarian

Charles Reiss (p.c.) draws my attention to the existence of roughly 60 Hungarian stems (Siptár & Törkenczy 2000:§3.2.2; henceforth ST) which are apparent exceptions to harmony in that they have a front vowel but select back-vowel allomorphs. For instance, “regular” víz ‘water’ has the harmonic dative víznek, ablative víztől, and so on. But there are some exceptions, which ST call “antiharmonic”.

(6) Antiharmonic stems (after ST:68, their 33):
a. híd ‘bridge’, hídnak dat.sg. (*hídnek), hídtól abl.sg. (*hídtől)
b. szid ‘scold’, szidhat ‘may scold’, szidó ‘scolding’
c. héj ‘crust’, héjnak dat.sg., héjtól abl.sg.

According to ST, most antiharmonic stems are in i or í as in (6ab), and just a few are in é as in (6c). While these are traditionally understood as lexical (morpheme-level) exceptions, one possible account is to treat antiharmonic í, i, and é as having a different underlying specification than their harmonic counterparts.

There are a number of possible ways this might work out. According to ST, í, i, and é are not so much “harmonic” as “neutral”. They claim the front-harmonic suffix allomorphs (like -nek and -től) are the default allomorphs. Indeed, stems consisting of a non-neutral vowel like á [aː], a [ɒ], o [o], or ó [oː] followed by í, i, or é select the back allomorphs, and do so without exception, suggesting í, i, and é are simply transparent to harmony.

(7) Back-neutral stems (after ST: loc. cit., their 34b):
a. papír ‘paper’, papírnak dat.sg., papírtól abl.sg.
b. dózis ‘dose’, dózisnak dat.sg., dózistól abl.sg.
c. kávé ‘coffee’, kávénak dat.sg., kávétól abl.sg.

If we adopt ST’s perspective, then one possibility is that harmony-neutral í, i, and é are specified [−Low, −Round] and antiharmonic vowels are [−Low, +Back, −Round]—the [−Round] specification keeping them distinct from u [u], ú [uː], and ó [oː]—and become [−Back] later in the derivation. The following context-free rules, applied in order after harmony has been computed, ought to suffice.3

(8) [−Low, −Round] ∖ {+Back}
(9) [−Low, −Round] ⊔ {−Back}

(8) gives harmony-neutral and antiharmonic front vowels the same representation, applying non-vacuously only to the latter, and (9) ensures that both are realized as [−Back] on the surface. The [−Round] condition is essential to exempt the front round vowels ü [y], ű [yː], ö [ø], and ő [øː], and the [−Low] condition exempts the low vowels, which are back-harmonic.
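To make the delete-then-unify treatment concrete, here is a minimal Python sketch of the two rules just given. Feature bundles are dictionaries, an absent key means underspecification, and the segment specifications are illustrative assumptions of mine rather than ST's.

```python
# Sketch of the delete-then-unify treatment of antiharmonic vowels.
# A feature bundle is a dict; an absent key means underspecified.
# Segment specifications below are illustrative assumptions.

def delete_rule(seg, sd, spec):
    """Deletion (∖): remove the listed feature values from any
    segment meeting the structural description."""
    if all(seg.get(f) == v for f, v in sd.items()):
        return {f: v for f, v in seg.items() if spec.get(f) != v}
    return seg

def unify_rule(seg, sd, update):
    """Unification (⊔): fill in values on SD-meeting segments,
    failing (no change) on conflict."""
    if all(seg.get(f) == v for f, v in sd.items()):
        if all(seg.get(f, v) == v for f, v in update.items()):
            return {**seg, **update}
    return seg

SD = {"Low": "-", "Round": "-"}
antiharmonic_i = {"High": "+", "Low": "-", "Round": "-", "Back": "+"}
harmonic_i     = {"High": "+", "Low": "-", "Round": "-", "Back": "-"}
u              = {"High": "+", "Low": "-", "Round": "+", "Back": "+"}

def derive(seg):
    seg = delete_rule(seg, SD, {"Back": "+"})   # strip [+Back]
    return unify_rule(seg, SD, {"Back": "-"})   # fill in [-Back]

assert derive(antiharmonic_i)["Back"] == "-"  # feature-changing overall
assert derive(harmonic_i) == harmonic_i       # vacuous
assert derive(u)["Back"] == "+"               # [-Round] condition exempts /u/
```

Note that the deletion-plus-unification composition is feature-changing for antiharmonic vowels but vacuous for harmonic ones, exactly the division of labor described in endnote 3.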

Kashaya

Buckley (1994) describes four unusual rules in Kashaya, given below in extensional (i.e., non-featural) notation.

(9) i -> a / m _
(10) i -> u / d _
(11) V -> a / q _
(12) V -> o / qw _

While (9-10) appear to be quite regular processes, he notes that 3 of the 21 i-initial suffixes (the inchoative -ibic and the reflexives -iyic’ and -ic’) fail to undergo them. Similarly, most i’s undergo (11-12), but the three suffixes just mentioned do not. With (11) the result is […ki…] instead of *[…qa…]; with (12) it is […qo…] instead of *[…qwo…].

Buckley proposes that Kashaya has two i’s—mutable /i/ and inalterable /î/—which are surface-identical except with respect to (9-12). He proposes that /î/ is [+High] whereas /i/ is underlyingly underspecified for this feature. This analysis is easily translated into Logical Phonology. For instance, (9) can be restated as follows.4

(13) [+Vocalic] ⊔ {−High, +Low} / [+Labial, +Nasal, …] _

(13) applies non-vacuously to only one segment: underspecified /i/. Both High and Low must be specified since Kashaya also has the mid vowels /e, o/. A context-free redundancy rule can then be used to “merge” /i, î/, non-vacuously applying to any underspecified /i/ remaining after (13) and the Logical Phonology analogues of (10-12).

(14) [+Vocalic] ⊔ {+High, −Low}
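A minimal Python sketch of how (13) and (14) distinguish the two i's, assuming illustrative feature bundles (dictionaries in which an absent key means underspecified) and reducing the context of (13) to a boolean flag:

```python
# Sketch of Buckley's two i's in Logical Phonology terms: mutable /i/
# is underspecified for High and Low, inalterable /î/ is prespecified.
# Feature bundles and the boolean context flag are my own simplifications.

def unify_rule(seg, sd, update, context_ok=True):
    """SD- and context-checked unification; fails (no change) on conflict."""
    if not context_ok:
        return seg
    if all(seg.get(f) == v for f, v in sd.items()):
        if all(seg.get(f, v) == v for f, v in update.items()):
            return {**seg, **update}
    return seg

mutable_i     = {"Vocalic": "+"}                           # underspecified
inalterable_i = {"Vocalic": "+", "High": "+", "Low": "-"}  # prespecified

def rule_13(seg, after_m):
    """(13): [+Vocalic] ⊔ {-High, +Low} in the post-labial-nasal context."""
    return unify_rule(seg, {"Vocalic": "+"}, {"High": "-", "Low": "+"}, after_m)

def rule_14(seg):
    """(14): context-free redundancy rule merging remaining /i/ with /î/."""
    return unify_rule(seg, {"Vocalic": "+"}, {"High": "+", "Low": "-"})

# After /m/, mutable /i/ is lowered to an a-like vowel; /î/ resists
# because unification with {-High} fails on its prespecified [+High].
assert rule_13(mutable_i, True) == {"Vocalic": "+", "High": "-", "Low": "+"}
assert rule_13(inalterable_i, True) == inalterable_i
# Elsewhere, the redundancy rule (14) merges the two i's.
assert rule_14(rule_13(mutable_i, False)) == inalterable_i
```

The last assertion illustrates the "merge" described in the text: any /i/ that survives (13) underspecified ends up identical to /î/.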

Spanish

The final case concerns the so-called raising verbs of the Spanish 3rd conjugation. This is a well-known case and you can read our take on it in Gorman & Reiss 2024 (§4). Briefly, we argue that the “raising” verbs have an underlying vowel underspecified for High which is realized as [i] except when it is followed by a [−Back, +High] vowel in the following syllable, in which case it surfaces as [e]. Verbs with the underspecified i-like vowel, like pedir-pido ‘ask for (s.t.)’, are thus distinct from verb stems with non-alternating [i] (e.g., vivir-vivo ‘live’) or [e] (sumergir-sumerjo ‘submerge’).

Discussion

Summarizing our analyses:

  • Barrow Inupiaq has two i’s, one “catalytic” and one “quiescent” with respect to palatalization.
  • Czech has two i’s (ignoring phonemic length; and two e’s, also ignoring phonemic length), one catalytic and one quiescent with respect to palatalization.
  • Hungarian has two i’s (ignoring phonemic length; and two long e’s), one catalytic and one quiescent with respect to vowel harmony.
  • Kashaya has two i’s, one catalytic and inalterable with respect to several processes, and one quiescent and mutable with respect to those same processes.
  • Spanish has two i’s, one mutable and one inalterable with respect to i-dissimilation.

The approach to underspecification developed by Sharon Inkelas and colleagues, whose slogan is “prespecification as inalterability”, is well-equipped to handle Kashaya and Spanish, where i’s are underspecified to generate mutability and prespecified to generate inalterability. However, it has nothing to say about Barrow Inupiaq, Czech, or Hungarian, a point which Inkelas & Cho (1993:556, fn. 26) concede. One of the innovations of Logical Phonology is that it generalizes their treatment of targets to triggers, adding a new slogan—“prespecification as catalysis”—and allowing all five cases to receive a more uniform, purely phonological treatment.

Is there something about i, and perhaps a on some occasions, which might account for its apparent tendency either to mutate or to resist mutation, or either to trigger or to fail to trigger mutation? Let us call these tendencies duality. First, it should be said that this generalization is based on a handful of reasonably well-characterized languages, and it is not clear that it is a meaningful tendency. But, granting its significance for the sake of argument, one might be tempted to derive the duality of i from some phonetic—acoustic-auditory or articulatory—property of high front vowels. One can imagine many just-so stories. For instance, /i/ very commonly serves as the default and/or epenthetic vowel, and I have been told that epenthetic segments are shorter on average than non-epenthetic segments. Perhaps its default or epenthetic status, or an associated shorter temporal duration, accounts for its duality, though it is not clear how. Or perhaps duality derives from something about its phonetic cues. Logical Phonology, however, is a strictly substance-free approach, and as such these explanations, however interesting, must be extra-grammatical, of the sort studied by Juliette Blevins and John Ohala among others.

Endnotes

  1. I have taken the liberty of adapting AB’s transcriptions into IPA. I also follow standard conventions in transcribing the mid front vowels as [ɛ(ː)] rather than their [e(ː)], though nothing depends on this.
  2. I put aside two details here. First, there appears to be a separate rule of velar palatalization which is not conditioned on the palatalizing/non-palatalizing contrast under discussion. Secondly, palatalization triggered by palatalizing e is realized as [j] insertion when the preceding consonant is a plain consonant (e.g., /v/) lacking a palatal allophone.
  3. In Logical Phonology, feature-changing processes are always expressed as a deletion rule followed by a unification rule, so the apparent redundancy in (7-8) is intentional. This separation allows us to dispense with the distinction between feature-filling and feature-changing rules, and has other desirable properties discussed elsewhere.
  4. In this rule I am using an ellipsis for the specific features of /m/ since the exact specification of this segment is unimportant here.

References

Anderson, S. R., and Browne, W. 1973. On keeping exchange rules in Czech. Papers in Linguistics 6: 445-482.
Buckley, Eugene. 1994. Prespecification of default features: the two /i/’s of Kashaya. In Proceedings of the 24th Annual Meeting of NELS, pages 17-30.
Dresher, B. E. 2009. The Contrastive Hierarchy in Phonology. Cambridge University Press.
Gorman, K., and Reiss, C. 2024. Metaphony in Substance Free Logical Phonology. Submitted. URL: https://lingbuzz.net/lingbuzz/008634.
Inkelas, S., and Cho, Y.-M. 1993. Inalterability as prespecification. Language 69: 529-574.
Kaplan, L. D. 1981. Phonological Issues in North Alaskan Inupiaq. Alaska Native Language Center.
Siptár, P., and Törkenczy, M. 2000. The Phonology of Hungarian. Oxford University Press.

Professional organizations in linguistics

I am a member of the Linguistic Society of America (LSA) and the Association for Computational Linguistics (ACL), US-based professional organizations for linguists and computational linguists, respectively. (More precisely, I am usually a member. I think my memberships both lapsed during the pandemic and I renewed once I started going to their respective conferences again.)

I attend LSA meetings when they’re conveniently located (next year’s is in Philly and we’re doing a workshop on Logical Phonology), and roughly one ACL-hosted meeting a year as well. As a (relatively) senior scholar I don’t find the former that useful (the scholarship is hit-or-miss and the LSA is dominated by a pandemonium of anti-generativists who are best just ignored), but the networking can be good. The *CL meetings tend to have more relevant science (or at least they did before prompt engineering…) but they’re expensive and rarely held in the Acela corridor.

While the LSA and the ACL are called professional organizations, their real purview is mostly to host conferences. The LSA does some other stuff of course: they run Language, the institutes, and occasionally engage in lobbying, etc. But they do not have much to say about the lives of workers in these fields. The LSA doesn’t tell you about the benefits of unionizing your workplace. The ACL doesn’t give you ethics tips about what to do if your boss wants you to spy on protestors.  They don’t really help you get jobs in these fields either. They could; they just don’t.

There is an interesting contrast here with another professional organization I was once a member of: the Institute of Electrical and Electronics Engineers (IEEE, pronounced “aye Tripoli”). Obviously, I am not an electrical engineer, but electrical engineering was historically the home of speech technology research and their ASRU and SLT conferences are quite good in that subfield. During the year or so I was an IEEE member, I received their monthly magazine. Roughly half of it is in fact just stories of general interest to electrical engineers; one that stuck with me argued that the laws of physics preclude the existence of the “directed energy weapons” claimed to cause Havana Syndrome. But the other half was specifically about the professional life of electrical engineers, including stuff about interviewing, the labor market outlook, and working conditions.

Imagine if Language had a quarterly professional column or if the ACL Anthology had a blog-post series…

Hiring season

It’s hiring season and your dean has approved your linguistics department for a new tenure line. Naturally, you’re looking to hire an exciting young “hyphenate” type who can, among other things, strengthen your computational linguistics offerings, help students transition into industry roles and perhaps even incorporate generative AI into more mundane parts of your curriculum (sigh). There are two problems I see with this. First, most people applying for these positions don’t actually have relevant industry experience, so while they can certainly teach your students to code, they don’t know much about industry practices. Secondly, an awful lot of them would probably prefer to be a full-time software engineer, all things considered, and are going to take leave—if not quit outright—if the opportunity ever becomes available. (“Many such cases.”) The only way to avoid this scenario, as I see it, is to find people who have already been software engineers and don’t want to be them anymore, and fortunately, there are several of us.

The dark triad professoriate

[I once again need to state that I am not responding to any person or recent event. But remember the Law of the Subtweet: if you see yourself in some negative description but are not explicitly named, you can just keep that to yourself.]

There is a long debate about the effects of birth order on stable personality traits. A recent article in PNAS1 claims the effects are near null once proper controls are in place; the commentary it’s paired with suggests the whole thing is a zombie theory. Anyways, one of the claims I remember hearing was that older siblings were more likely to exhibit subclinical “Dark Triad” (DT) traits: Machiavellianism, narcissism, and psychopathy. Alas, this probably isn’t true, but it is easy to tell a story about why it might be adaptive. Time for some game theory. In a zero-sum scenario, if you’re the most mature (and biggest) of your siblings, you probably have more to gain from non-cooperative behaviors, and DT traits ought to select for said behaviors. A concrete (if contrived) example: you can either hog or share the toy, and the eldest is by far more likely to get away with hogging.

I wonder whether the scarcity of faculty positions—even if overstated (and it is)—might also be adaptive for dark triad traits. I know plenty of evil Boomer professors, but not many that are actually DT, and if I had to guess, these traits (particularly the narcissism) are much more common in younger (Gen X and Millennial) cohorts. Then again, this could be age-grading, since anti-social behaviors peak in adolescence and decline afterwards.

Endnotes

  1. This is actually a “direct submission”, not one of those mostly-phony “Prearranged Editor” pieces. So it might be legit.

Our vocation

If you’re a linguist: well, why?

One thing that stands out about the life of the professional linguist is what Chomsky calls the responsibility of intellectuals, to “speak the truth and to expose lies”, in this case uncomfortable truths about language and its role in society. Certainly this responsibility—and privilege, as Chomsky also points out—is an inspiration for many linguists. But other motives abound. I for one am more drawn to learning about (an admittedly narrow corner of) human nature than I am to speaking truth to power, and most likely would have ended up in some other area of social science had I not discovered the field. And there’s nothing wrong with a linguist who is most of all drawn to little logic puzzles, so long as these puzzles are ultimately grounded in those questions about human nature. (I do reject, categorically, those who say that linguists ought to be doing nothing more than “Word Sudoku” or “Wordle with more steps”. Maybe there are people who work solely in those modes, and if so I wish them a very happy alt-ac career transition.)

I think the truths about human nature uncovered by the epistemology-obsessed generativists—including those of the armchair variety—have something to say about the proper organization of society. But one is more likely to get such messages from sociolinguists. Sociolinguists correctly point out that we have unexamined, corrosive ideologies about language, languages, and their speakers that are mostly contrary to the liberal values most of us profess, and they are certainly well-positioned to speak these truths. That said, I do not agree with the often-implicit assumption that sociolinguistics is somehow a more noble vocation than other topics in the field. The “discourse” on this is often fought as a proxy war over hiring: e.g., one question I’ve heard before is “Why doesn’t MIT’s linguistics faculty include a sociolinguist?” First off, it sort of does: it includes one of the world’s foremost creolists, who has written extensively about the role of creole studies in neocolonialism and white supremacy. Whether or not a creolist is a sociolinguist is probably more a matter of self-identity than one of observable fact, but there’s no question that creole studies has a lot to give to—but also a lot to answer for on—the problem of linguistic equality. Should the well-rounded linguist have studied sociolinguistics? Absolutely. But there are probably many other areas, topics, or even theories you think any well-rounded linguist ought to have studied but which are not required or widely taught, and these rarely provoke such discourse.

Optimality Theory on exceptionality

[This post is part of a series on theories of lexical exceptionality.]

I now take a large jump in exceptionality theory from the late ’70s to the mid-aughts. (I am skipping over a characteristically ’90s approach, which I’ll cover in my final post.) I will focus on a particular approach—not the only one, but arguably the most robust one—to exceptionality in Optimality Theory (OT). This proposal is as old as OT itself, but is developed most clearly in Pater (2006), and Pater & Coetzee (2005) propose how it might be learned. I will also briefly discuss the application of the morpheme-specific approach to the yer patterns characteristic of Slavic languages by Gouskova and Rysling, and Rubach’s critique of this approach.

I will have very little to say about the cophonology approach to exceptionality that has also been sporadically entertained in OT. Cophonology holds that morphemes may have arbitrarily different constraint rankings. Pater (henceforth P) is quite critical of this approach throughout his 2006 paper: among other criticisms, he regards it as completely unconstrained. I agree: it makes few novel predictions, and I would challenge cophonologists (if any exist in 2024) to consider how cophonology might be constrained so as to derive interesting predictions about exceptionality.

Indexed constraints

Even the earliest work in Optimality Theory supposed that some constraints might be specific to particular grammatical categories or morphemes. This of course is a loosening of the idea that Con, the universal constraint family, is language-universal and finite, but it seems to be a necessary assumption. P claims that this device is powerful enough to handle all known instances of exceptionality in phonology. The basic idea is extremely simple: for every constraint X there may also exist indexed constraints of the form X(i) whose violations are only recorded when the violation occurs in the context of some morpheme i.1 There are then two general schemas that produce interesting results.

(1) M(i) >> F >> M
(2) F(i) >> M >> F

Here M stands for markedness and F for faithfulness. As will be seen below, (1) has a close connection to the notions of mutability and catalysis introduced in my earlier post; (2) in turn has a close connection with quiescence and inalterability.

One of P’s goals is to demonstrate that this approach can be applied to Piro syncope. His proposal is not quite as detailed as one might wish, but it is still worth discussing and trying to fill in the gaps. For P, the general syncope pattern arises from the ranking Align-Suf-C >> Max; in prose, it is permissible to delete a segment if doing so brings the suffix into contact with a consonant. This also naturally derives the non-derived environment condition, since the constraint specifically mentions suffixhood. P derives the avoidance of tautomorphemic clusters, previously expressed with the VC_CV environment, with the markedness constraint *CCC. This gives us *CCC >> Align-Suf-C >> Max thus far. This should suffice for derivations whose roots are all mutable and catalytic.

For P, inalterable roots are distinguished from mutable ones by an undominated, indexed clone of Max which I’ll call Max(inalt), giving us a partial ranking like so.

(3) Max(inalt) >> Align-Suf-C >> Max

This is of course an instance of schema (2). Note that since the ranking without the indexing is just Align-Suf-C >> Max, it seemingly treats mutability as the default and inalterability as exceptional, a point I’ll return to shortly.

Quiescent roots in P’s analysis are distinguished from catalytic ones by a lexically specific clone of Align-Suf-C; here the lexically indexed one targets the catalytic suffixes, so we’ll write it Align-Suf-C(cat), giving us the following partial ranking.

(4) Align-Suf-C(cat) >> Max >> Align-Suf-C

This is an instance of schema (1). It is interesting to note that the Align constraint bridges the distinction between target and trigger, since the markedness is a property of the boundary itself. Note also that it seems to treat quiescence as the default and catalysis as exceptional.

Putting this together we obtain the full ranking below.

(5) *CCC, Max(inalt) >> Align-Suf-C(cat) >> Max >> Align-Suf-C

P, unfortunately, does not take the time to compare his analysis to Kisseberth’s (1970) proposal, or to contrast it with Zonneveld’s (1978) critiques, which I discussed in detail in the earlier post. I do observe one potential improvement over Kisseberth. Recall that Kisseberth had trouble with the example /w-čokoruha-ha-nu-lu/ [wčokoruhahanru] ‘let’s harpoon it’, because /-ha/ is quiescent and having a quiescent suffix in the left environment is predicted, counterfactually, to block deletion in /-nu/. As far as I can tell this is not a problem for P; the following suffix /-lu/ is on the Align-Suf-C(cat) lexical list and /-nu/ is not on the Max(inalt) list, and that is all that matters. Presumably, P gets this effect because the joint operation of the two flavors of Align-Suf-C and *CCC properly localizes the catalysis/quiescence component of the exceptionality. However, P’s analysis does not seem to generate the right-to-left application; it has no reason to favor the attested /n-xipa-lu-ne/ [nxipalne] ‘my sweet potato’ over *[nxiplune]. This reflects a general difficulty OT has in accounting for directional application.

As I mentioned above, P’s analysis of Piro treats mutability and quiescence as productive and inalterability and catalysis as exceptional. Indeed, it predicts mutability and quiescence in the absence of any indexing, and one might hypothesize that Piro speakers would treat a new suffix of the appropriate shape as mutable and quiescent. I know of no reason to suppose this is correct; for Matteson (1965), these are arbitrary and there is no obvious default, whereas my impression is that Kisseberth views mutability (like P) and catalysis (unlike P) as the default. This question of productivity is one that I’ll return to below as I consider how indexing might be learned.

Learning indexed constraints

Pater and Coetzee (2005, henceforth P&C) propose that indexed constraint rankings can be learned using a variant of the Biased Constraint Demotion (BCD) algorithm developed earlier by Prince and Tesar (2004). Most of the details of that algorithm are not strictly relevant here; I will focus on the ones that are. BCD supposes that learners are able to accumulate UR/SR pairs and then use the current state of their constraint hierarchy to record them in a data structure called a mark-data pair. These give, for each constraint violation, whether that violation prefers the actual SR or a non-optimal candidate. From a collection of these pairs it is possible to rank constraints via iterative demotion.2 The presence of lexical exceptionality produces a case where it is not possible for vanilla BCD to advance the demotion because a conflict exists: some morphemes favor one ranking whereas others favor another. P&C propose that in this scenario, indexed constraints are introduced to resolve the conflict.

P&C are less than formal in specifying how this cloning process works, so let us consider how it might function. Their example, a toy, concerns syllable shape. They suppose a language in which /CVC/ is marked (via NoCoda) but in which there are a few words of this shape which surface faithfully (via Max). They suppose that this results in a ranking paradox which cannot be resolved with the existing constraints. As stated, I have to disagree: their toy provides no motivation for NoCoda >> Max.3 Let us suppose, though, for the sake of argument that there is some positive evidence after all for that ranking. Perhaps we have the following.

(6) Toy grammar (after P&C):
a. /kap/ -> [ka]
b. /gub/ -> [gu]
c. /net/ -> [net]
d. /mat/ -> [mat]

Let us also suppose that there is some positive evidence that /kap, gub/ are the correct URs so they are not changed to faithful URs via Lexicon Optimization. Then, (6ab) favor NoCoda >> Max but (6cd) favor Max >> NoCoda. P&C suppose this is resolved by cloning (i.e., generating an indexed variant of) Max, producing a variant for each faithfully-surfacing /CVC/ morpheme. If these morphemes are /net/ and /mat/, then we obtain the following partial ranking after BCD.

(7) Max(net), Max(mat) >> NoCoda >> Max

This is another instance of schema (2); there are just multiple indexed constraints in the highest stratum. Indeed, P&C imagine various mechanisms by which Max(net) and Max(mat) might be collapsed or conflated at a later stage of learning.
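For concreteness, here is a toy Python sketch of evaluation under the ranking in (7), treating constraints as violation-counting functions and an indexed clone as a wrapper that only counts violations for its indexed morpheme. The candidate set and constraint definitions are simplified assumptions of mine, not P&C's formalism.

```python
# Toy OT evaluation with indexed constraints, after P&C's toy example.
# A constraint is a function (morpheme, candidate) -> violation count;
# evaluation picks the candidate with the lexicographically best
# violation profile under the ranking Max(net), Max(mat) >> NoCoda >> Max.

def no_coda(morpheme, cand):
    # One violation for a final consonant (toy: CVC vs CV candidates only).
    return 1 if cand[-1] not in "aeiou" else 0

def max_f(morpheme, cand):
    # One violation per input segment lacking an output correspondent.
    return len(morpheme) - len(cand)

def indexed(constraint, index):
    # Indexed clone: violations are only recorded for the indexed morpheme.
    return lambda m, c: constraint(m, c) if m == index else 0

RANKING = [indexed(max_f, "net"), indexed(max_f, "mat"), no_coda, max_f]

def evaluate(morpheme):
    candidates = [morpheme, morpheme[:-1]]   # faithful vs coda-deleted
    return min(candidates, key=lambda c: [con(morpheme, c) for con in RANKING])

assert evaluate("kap") == "ka"   # NoCoda >> Max forces deletion
assert evaluate("gub") == "gu"
assert evaluate("net") == "net"  # Max(net) protects the coda
assert evaluate("mat") == "mat"  # Max(mat) protects the coda
```

Comparing violation profiles as Python lists mirrors strict domination: a single violation of a higher-ranked constraint outweighs any number of violations below it.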

It is crucial to P&C’s proposal that the child actually observes both of the exceptional morphemes in (6cd) surfacing faithfully; however, it is not necessary to observe (6ab) themselves, just some morphemes in which, as in (6ab), a coda consonant is deleted, so as to trigger cloning. The critical sample for (7), then, is either (6acd) or (6bcd). It is not necessary to see both (6a) and (6b), but it is necessary to see both of (6cd). Thus, there is some very real sense in which this analysis treats coda deletion as the productive default and coda retention as exceptional behavior, much as P’s analysis of Piro treated mutability and quiescence as productive. However, it seems that P&C could instead have adopted schema (1) and proposed that what is cloned is NoCoda, obtaining the following ranking.

(8) NoCoda(kap), NoCoda(gub) >> Max >> NoCoda

Then, for this analysis, the crucial sample is either (6abc) or (6abd), and there is a similar sense in which coda retention is now the default behavior.

P&C give no reason to prefer (7) over (8). Reading between the lines, I suspect they imagine that the relative frequency (i.e., number of morpheme types) of forms which either retain or lose their codas is the crucial issue, and perhaps they would appeal to an informal “majority-rules” principle. That is, if forms like (6ab) are more frequent than those like (6cd), they would presumably prefer (7), and would prefer (8) if the opposite were true. However, I think P&C should have taken up this question and explained what is cloned when. Indeed, there is an alternative possibility: perhaps cloning produces all of the following constraints in addition to Max and NoCoda.

(9) Max(kap), Max(gub), Max(net), Max(mat), NoCoda(kap), NoCoda(gub), NoCoda(net), NoCoda(mat)

While I am not sure, I think BCD would be able to proceed and would either converge on (7) or (8), depending on how it resolves apparent “ties”.
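The ranking conflict that triggers cloning, and its resolution toward (7), can be sketched as follows. This is my own toy reconstruction of the conflict-detection step, not P&C's actual algorithm: each UR/SR pair contributes one ranking argument, and the clones are generated for the morphemes whose arguments lose out.

```python
# Toy sketch of the ranking conflict in P&C's example. For each UR/SR
# pair, the constraint preferring the winner must dominate the
# constraint preferring the loser. My own reconstruction, not P&C's.

data = {"kap": "ka", "gub": "gu", "net": "net", "mat": "mat"}

def requirement(ur, sr):
    """Ranking argument from one winner/loser pair: (dominator, dominated)."""
    if sr == ur[:-1]:            # coda deleted: NoCoda must outrank Max
        return ("NoCoda", "Max")
    return ("Max", "NoCoda")     # faithful CVC: Max must outrank NoCoda

reqs = {requirement(ur, sr) for ur, sr in data.items()}
# Both rankings are demanded at once: vanilla BCD cannot proceed.
assert ("NoCoda", "Max") in reqs and ("Max", "NoCoda") in reqs

# Resolution: clone Max for each morpheme demanding Max >> NoCoda,
# yielding the ranking Max(net), Max(mat) >> NoCoda >> Max, i.e. (7).
clones = sorted(f"Max({ur})" for ur, sr in data.items()
                if requirement(ur, sr) == ("Max", "NoCoda"))
assert clones == ["Max(mat)", "Max(net)"]
```

Swapping the condition in the final comprehension to `("NoCoda", "Max")` would instead clone NoCoda for /kap/ and /gub/, giving (8); nothing in the conflict itself decides between the two, which is exactly the indeterminacy discussed above.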

Another related issue, which may also lead to the proliferation of indexed constraints, is that P&C have little to say about how constraint cloning works in complex words. Perhaps the cloning module is able to localize the violation to particular morphemes. For instance, it seems plausible that one could inspect a Max violation, like the ones produced by Piro syncope, to determine which morpheme is unfaithful and thus mutable. However, if we wish to preserve P’s treatment of mutability as the default (on which inalterable morphemes have a high-ranked Max clone), we instead need to do something more complex: we need to determine that a certain morpheme does not violate Max (good so far), but also that, under a counterfactual ranking of this constraint and its “antagonist” Align-Suf-C, it would have done so; this may be something which can be read off of mark-data pairs, but I am not sure. Similarly, to preserve P’s treatment of quiescence as the default, we need to determine that a certain suffix has an Align-Suf-C violation (again, good so far), but also that, under a counterfactual ranking of this constraint and its antagonist, it would not have done so.

While I am unsure if this counterfactual reasoning part of the equation can be done in general, I can think of at least one case where the localization reasoning cannot be done: epenthesis at morpheme boundaries, as in the [-əd] allomorph of the English regular past. Here there is no sense in which the Dep violation can be localized to a particular morpheme. Indeed, Dep violations are defined by the absence of correspondence. This is perhaps an unfortunate example for P&C’s approach. English has a number of “semiweak” past tense forms (e.g., from Myers 1987: bit, bled, hid, met, sped, led, read, fed, lit, slid) which are characterized by a final dental consonant and shortening of the long nucleus of the present tense form. Given related pairs like keep-kept, one might suppose that these bear a regular /-d/ suffix, but fail to trigger epenthesis (thus *[baɪtəd], etc.). To make this work, we assume the following.

(10) Properties of semiweak pasts:
a. Verbs with semiweak pasts are exceptionally indexed to a high-ranking Dep constraint which dominates relevant syllable structure markedness constraints.
b. Verbs with semiweak pasts are exceptionally indexed to high-ranking markedness constraint(s) triggering “Shortening” (in the sense of Myers 1987).
c. A general (i.e., non-indexed) markedness constraint against hetero-voiced obstruent clusters dominates antagonistic voice faithfulness constraints.

The issue is this: how do children localize the failure of epenthesis in (10a) to the root and not the suffix, given that the counterfactual epenthetic segment is not an exponent of either, occurring rather at the boundary between the two? Should one reject the sketchy analysis given in (10), there are surely many other cases where correspondence alone is insufficient; for example, consider vowels which coalesce in hiatus.

The yers

I have again already gone on quite long, but before I stop I should briefly discuss the famous Slavic yers as they relate to this theory.

In a very interesting paper, Gouskova (2012) presents an analysis of the yers in modern Russian. In Russian, certain instances of the vowels e and o alternate with zero in certain contexts. These alternating vowels are termed yers in traditional Slavic grammar. A tradition going back to early work by Lightner (1965) treats yers in Russian and other Slavic languages as underlyingly distinct from non-alternating e and o, either featurally or, in later work, prosodically. For example, лев [lʲev] ‘lion’ has a genitive singular (gen.sg.) льва [lʲva] and мох [mox] ‘moss’ has a gen.sg. [mxa].

Gouskova (henceforth G) wishes to argue that yer patterns are better analyzed using indexed constraints, thus treating morphemes with yer alternations as exceptional rather than treating the yer segments as underspecified. In terms of the constraint indexing technology, G’s analysis is straightforward. Alternating vowels are underlyingly present in all cases, and their deletion is triggered by a high-ranked constraint *Mid (which disfavors mid vowels, naturally) which is indexed to target exactly those morphemes which contain yers. Additional phonotactic constraints relating to consonant sequences are used to prevent deletion that produces word-final consonant clusters. Roughly, then, the analysis is:

(11) *CC]σ >> *Mid(yer morphemes) >> Max-V >> *Mid

As G writes (99-100, fn. 18): “In Russian, deletion is the exception rather than the rule: most morphemes do not have deletion, and neither do loanwords…”
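The effect of strict domination in the ranking in (11) can be sketched with a toy evaluator of my own (this is an illustration, not G’s actual analysis): candidates are hand-supplied, transcriptions are rough romanizations, and the non-yer control word soka is hypothetical. Each constraint is a violation-counting function, and lexicographic comparison of the violation tuples implements the ranking.

```python
VOWELS = set("aeiou")
MID = set("eo")

def evaluate(ur, candidates, yer_indexed):
    """Pick the candidate with the lexicographically least violation profile
    under *CC]sigma >> *Mid(yer morphemes) >> Max-V >> *Mid."""
    def final_cc(c):  # *CC]sigma, crudely: word-final consonant cluster
        return int(len(c) >= 2 and c[-1] not in VOWELS and c[-2] not in VOWELS)
    def star_mid(c):  # *Mid: one violation per mid vowel
        return sum(x in MID for x in c)
    def star_mid_yer(c):  # indexed clone: active only for yer-indexed morphemes
        return star_mid(c) if yer_indexed else 0
    def max_v(c):  # Max-V: vowels of the input missing from the candidate
        return sum(x in VOWELS for x in ur) - sum(x in VOWELS for x in c)
    ranking = (final_cc, star_mid_yer, max_v, star_mid)
    return min(candidates, key=lambda c: tuple(f(c) for f in ranking))
```

For yer-indexed /mox/ ‘moss’, the gen.sg. input /mox+a/ yields deletion (mxa), while the bare nom.sg. keeps its vowel because *CC]σ dominates the indexed *Mid; a non-indexed word keeps its mid vowel via Max-V.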

It should be noted that G’s analysis departs from the traditional (“Lightnerian”) analysis in ways not directly related to the question of localizing exceptionality (i.e., in the morpheme vs. the segment). For one, (11) seems to frame retention of a mid vowel as a default. In contrast, the traditional analysis does not seem to have any opinion on the matter. In that analysis, whether or not a mid vowel is alternating is a property of its underlying form, and should thus be arbitrary in the Saussurean sense. This is not to say that we expect to find yers in arbitrary contexts. There are historical reasons why yers are mostly found in the final syllable—this is one of the few places where the historical sound change called Havlík’s Law, operating more or less blindly, could introduce synchronic yer/zero alternations in the first place (in many other contexts the yers were simply lost), and in other positions it is impossible to ascertain whether or not a mid vowel is a yer. Whether an alternative version of the sound change could have produced an alternative-universe Russian where yers target the first syllable is an unknowable counterfactual, given that we live in our universe, with our universe’s version of Havlík’s Law. Secondly, the traditional analysis (see Bailyn & Nevins 2008 for a recent exemplar) usually conditions the retention of yers on the presence of a yer (which may or may not be itself retained) in the following syllable. In contrast, G does not seem to posit yers for this purpose, nor does she condition their retention on the presence of nearby yers. In the traditional analysis, these conditioning yers are motivated by the behavior of yers in prefixes and suffixes in derivational morphology, and much of this hinges on apparent cyclicity. G provides an appendix in which she attempts to handle several of these issues in her theory, but it remains to be seen whether this has been successful in dismissing all the concerns one might raise.

G provides a few arguments as to why the exceptional morpheme analysis is superior to the traditional analysis. G wishes to establish that mid vowels are in fact marked in Russian, so that yer deletion can take something of a “free ride” on this constraint. As such, she claims that yer deletion is related to the reduction of mid vowels in unstressed syllables. But how do we know that these facts are connected? And, if they are in fact connected, is it possible that there is an extra-grammatical explanation? For instance, there may be a “channel bias” in production and/or perception that disfavors faithful realization of mid vowels (and thus imposes a systematic bias in favor of reduction and deletion) compared to the more extreme phonemic vowels (in her analysis, /a, i, u/) which caused the actuation of both changes. Phenomenologically speaking, it is true that there are two ways in which certain Russian mid vowels are unfaithful, but this is just one of an infinite set of true statements about Russian phonology, and there is something “just so” about this one.

Before I conclude, let us now turn briefly to Polish. Like Russian, this language has mid vowels which alternate with zero in certain contexts. (Unlike Russian, for whatever reason, the vast majority of alternating vowels are e; there are just three morphemes which have an alternating o.)

Rubach (2013, 2016) explicitly critiques constraint indexation using data from Polish. Rubach argues that G’s analysis cannot be generalized straightforwardly to Polish. He draws attention to stems that contain multiple mid vowels, only one of which is a yer (e.g., sfeter/sfetri ‘sweater’), and concludes that it is not necessarily possible to determine which (or both) should undergo deletion in an “exceptional” morpheme. The only mechanism with which one might handle this is a rather complex series of markedness constraints on consonant sequences. Unfortunately, Polish is quite permissive of complex consonant clusters and this mechanism cannot always be relied upon to deliver the correct answer. He also draws attention to the behavior of derivational morphology such as double diminutives. In contrast, Rysling (2016) attempts to generalize G’s indexed constraint analysis of yers to Polish. However, her analysis differs from G’s analysis of Russian in that she derives the yers from epenthesis to avoid word-final consonant clusters. Furthermore, for Rysling, epenthesis in the relevant phonotactic contexts (to a first approximation, certain C_C#) is the default, and failure to epenthesize is exceptional.5 Sadly, there is little interaction between the Rubach and Rysling papers (the latter briefly discusses the former’s 2013 paper), so I am not prepared to say whether Rysling’s radical revision addresses Rubach’s concerns with constraint indexation.

Endnotes

  1. P and colleagues refer to these constraints as “lexically specific”, but in fact it seems the relevant structures are all morphemes, and never involve polymorphemic words or lexemes.
  2. As far as I know, though, there is no proof of convergence, under any circumstances, for BCD.
  3. Perhaps they are deriving this from the assumption that the initial state is M >> F, but without alternation evidence, BCD would rerank this as Max >> NoCoda and cloning would not be triggered.
  4. A subsequent rule of obstruent voice assimilation, which is needed independently, would give us [kɛpt] from /kip-d/, and so on.
  5. Rysling seems to derive this proposal from an analysis of lexical statistics: she counts how many Polish nouns have yer alternations in the context …C_C# and compares this to non-alternating …CeC# and …CC#. It isn’t clear to me how the proposal follows from the statistics, though: non-epenthesis and epenthesis in …C_C# are about equally common in Polish, and their relative frequencies are not much different from what she finds in Russian.

References

Bailyn, J. F. and Nevins, A. 2008. Russian genitive plurals are impostors. In A. Bachrach and A. Nevins (ed.), Inflectional Identity, pages 237-270. Oxford University Press.
Gouskova, Maria. 2012. Unexceptional segments. Natural Language & Linguistic Theory 30: 79-133.
Kenstowicz, M. 1970. Lithuanian third person future. In J. R. Sadock and A. L. Vanek (ed.), Studies Presented to Robert B. Lees by His Students, pages 95-108. Linguistic Research.
Lightner, T. 1965. Segmental phonology of Modern Standard Russian. Doctoral dissertation, Massachusetts Institute of Technology.
Matteson, E. 1965. The Piro (Arawakan) Language. University of California Press.
Myers, S. 1987. Vowel shortening in English. Natural Language & Linguistic Theory 5(4): 485-518.
Pater, J. and Coetzee, A. W. 2005. Lexically specific constraints: gradience, learnability, and perception. In Proceedings of the Korea International Conference on Phonology, pages 85-119.
Pater, J. 2006. The locus of exceptionality: morpheme-specific phonology as constraint indexation. In University of Massachusetts Occasional Papers 32: Papers in Optimality Theory, pages 1-36.
Pater, J. 2009. Morpheme-specific phonology: constraint indexation and inconsistency resolution. In S. Parker (ed.), Phonological Argumentation: Essays on Evidence and Motivation, pages 123-154. Equinox.
Prince, A. and Tesar, B. 2004. Learning phonotactic distributions. In Kager, R., Pater, J. and Zonneveld, W. (ed.), Constraints in Phonological Acquisition, pages 245-291. Cambridge University Press.
Rubach, J. 2013. Exceptional segments in Polish. Natural Language & Linguistic Theory 31: 1139-1162.
Rysling, A. 2016. Polish yers revisited. Catalan Journal of Linguistics 15:121-143.
Zonneveld, W. 1978. A Formal Theory of Exceptions in Generative Phonology. Peter de Ridder.

Kisseberth & Zonneveld on exceptionality

[This post is part of a series on theories of lexical exceptionality.]

In a paper entitled simply “The treatment of exceptions”, Kisseberth (1970) proposes an interesting revision to the theory of exceptionality. Many readers may be familiar with the summary of this work given by Kenstowicz & Kisseberth 1977:§2.3 (henceforth K&K). Others may know it from the critique by Zonneveld (1978: ch. 3) or Zonneveld’s (1979) review of K&K’s book. I will discuss all of these in this post.

Kisseberth (1970)

A quick sidebar: Kisseberth’s paper is a fascinating scholarly artifact in that it probably could not be published in its current form today. (To be fair, it was published in an otherwise-obscure journal, Papers in Linguistics.) For one, all the data is drawn from Matteson’s (1965) grammar of Piro;1 the only other referenced work is SPE. Kisseberth (henceforth K) gives no page, section, or example numbers for the forms he cites. I have tried to track down some of the examples in Matteson’s book, and it is extremely difficult to find them. K gives no derivations, only a few URs, and the entire study hinges on a single rule. But it’s provocative stuff all the same.

K observes that Piro has a rule which syncopates certain stem-final vowels. He gives the following formulation:

(1) Vowel Drop:

V -> ∅ / VC __ + CV


For example, [kama] ‘to make, form’ has a nominalization [kamlu] ‘handicraft’ with nominalizing suffix /-lu/, and [xipalu] ‘sweet potato’ has a possessed form /n-xipa-lu-ne/ [nxipalne] ‘my sweet potato’.2 One might think that (1) is intended to be applied simultaneously, as this is the convention for rule application in SPE, but this would predict *[nxiplne], with a medial triconsonantal cluster. Left-to-right application gives *[nxiplune]; the only way to get the observed [nxipalne] is via right-to-left (RTL) application, which I’ll assume henceforth. As far as I know, the directionality issue has not been noticed in prior work.
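The directionality point can be made concrete with a toy implementation of my own (not anything from K), under the assumption that the boundary “+” is transparent to the flanking VC and CV contexts but that the target vowel must immediately precede a boundary:

```python
VOWELS = set("aeiouɨ")

def applies_at(s, i):
    """Check the SD of Vowel Drop (V -> 0 / VC __ + CV) at index i of s.
    '+' marks morpheme boundaries; it is transparent for the VC and CV
    contexts, but the target vowel must be immediately followed by one."""
    if s[i] not in VOWELS:
        return False
    if i + 1 >= len(s) or s[i + 1] != "+":
        return False
    left = [c for c in s[:i] if c != "+"]
    right = [c for c in s[i + 2:] if c != "+"]
    return (len(left) >= 2 and left[-2] in VOWELS and left[-1] not in VOWELS
            and len(right) >= 2 and right[0] not in VOWELS and right[1] in VOWELS)

def vowel_drop(s, mode="rtl"):
    """Apply Vowel Drop simultaneously, left-to-right, or right-to-left;
    the directional modes re-check the SD against the modified string."""
    if mode == "simultaneous":
        drop = {i for i in range(len(s)) if applies_at(s, i)}
        return "".join(c for i, c in enumerate(s) if i not in drop).replace("+", "")
    i = len(s) - 1 if mode == "rtl" else 0
    while 0 <= i < len(s):
        if applies_at(s, i):
            s = s[:i] + s[i + 1:]  # delete the target vowel
            if mode == "rtl":
                i -= 1
        else:
            i += -1 if mode == "rtl" else 1
    return s.replace("+", "")
```

Running this on /n-xipa-lu-ne/ reproduces the three-way contrast: only `vowel_drop("n+xipa+lu+ne", "rtl")` returns the attested `'nxipalne'`, while `"ltr"` gives `'nxiplune'` and `"simultaneous"` gives `'nxiplne'`.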

Of course, there are exceptions of several types. (I am drawing additional data from the unpublished paper by CUNY graduate student Héctor González, henceforth G. I will not make any attempt to make González’s transcriptions or glosses comparable to those used by K, but doing so should be straightforward.)

One type is exemplified by /nama/ ‘mouth of’, which does not undergo Vowel Drop, as in /hi-nama-ya/ [hinamaya] ‘3sgmpssr-mouth.of-Obl.’ (G 5a); under RTL application we would expect *[hinmaya]. This is handled easily in the SPE exceptionality theory I reviewed a few weeks ago by marking /nama/ as [-Vowel Drop].

However, other apparent instances of exceptionality are not so easily handled. Consider two forms involving the verb root /nika/ ‘eat’. In /n-nika-nanɨ-m-ta/ [hnikananɨmta] ‘1sg-eat-Extns-Nondur-Vcl’ (G 5b) both vowels of the root satisfy (1) but do not undergo syncope. One might be tempted to mark this root as [-Vowel Drop], but it does undergo deletion in other derivations, such as in /nika-ya-pi/ [nikyapi] ‘eat-Appl-Instr.Nom’ (G 4d). Rather, it seems that the following /-nanɨ/ fails to trigger deletion. This is not easily handled in the SPE approach. K gives a number of similar examples involving the verbal theme suffixes /-ta/ and /-wa/, which also do not trigger syncope. If morphemes vary in whether or not they undergo and whether or not they trigger Vowel Drop, one can imagine that these properties might cross-classify:

  • Mutable, catalytic The nominalizing suffix /-lu/, discussed above, is both mutable (i.e., undergoes syncope) and catalytic (triggers syncope) in /n-xipa-lu-ne/ [nxipalne].
  • Inalterable, catalytic I have not found any relevant examples in Piro; Kenstowicz & Kisseberth 1977 (118f.) present a hastily-described example from Slovak.
  • Mutable, quiescent /meyi-wa-lu/ [meyiwlu] ‘celebration’ shows that the intransitive verb theme suffix /-wa/ is mutable but quiescent (does not trigger syncope; *[meywalu]).
  • Inalterable, quiescent /yimaka-le-ta-ni-wa-yi/ shows that the imperfective suffix /-wa/ (not to be confused with the homophonous intransitive /-wa/) is inalterable; /r-hina-wa/ [rɨnawa] ‘3-come-Impfv’ (G 6c) shows that it is quiescent (*[rɨnwa]).

According to G, there is also one additional category that does not fit into the above taxonomy: the elative suffix /-pa/ triggers deletion of the penultimate (rather than preceding) vowel, as in /r-hitaka-pa-nɨ-lo/ [rɨtkapanro] ‘3-put-Evl-Antic-3sgf’ (G 7a). Furthermore, /-pa/ appears to lose its catalytic property when it undergoes syncope, as in /cinanɨ-pa-yi/ [cinanɨpyi] ‘full-Elv-2sg’ (G 7c). Given the rather unexpected set of behaviors here, apparently confined to a single suffix, I wonder if this is the full story.

Having reviewed this data, I don’t have an abundance of confidence in it, particularly given K’s hasty presentation. However, K has identified something not obviously anticipated by the SPE theory. K’s proposal is a simple extension of the SPE theory; in addition to rule features for the target, we also need rule features for the context. For instance, the inalterable, quiescent imperfective marker /-wa/,4 which neither undergoes nor triggers Vowel Drop, would be underlyingly [-rule Vowel Drop, -env. Vowel Drop]. Then, the rule interpretative procedure applies a rule R when its structural description is met, when the target is [+rule R], and when all morphemes in the context are [+env. R].
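K’s interpretative procedure can be stated in a few lines. The lexicon below is a hypothetical toy fragment of my own; per K’s proposal, both features default to “+”, so only exceptional morphemes need lexical marking.

```python
# Hypothetical toy lexical entries: a morpheme may be marked [-rule VD]
# (inalterable) and/or [-env. VD] (quiescent); unmarked values default to '+'.
LEXICON = {
    "nama": {"rule VD": False},                   # inalterable, catalytic
    "nanɨ": {"env VD": False},                    # mutable, quiescent
    "wa":   {"rule VD": False, "env VD": False},  # inalterable, quiescent (Impfv)
}

def may_apply(target, context, rule="VD"):
    """Kisseberth's procedure: rule R applies (SD permitting) iff the target
    morpheme is [+rule R] and every context morpheme is [+env. R]."""
    def entry(m):
        return LEXICON.get(m, {})
    return (entry(target).get(f"rule {rule}", True)
            and all(entry(m).get(f"env {rule}", True) for m in context))
```

So syncope of /nika/’s final vowel goes through before catalytic /-ya/ but is blocked before quiescent /-nanɨ/, and inalterable /nama/ never undergoes it, matching the taxonomy above.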

Zonneveld (1978, 1979)

I have already gone on pretty long, but I should briefly discuss what subsequent writers have had to say about this proposal. Kenstowicz & Kisseberth (1977, henceforth K&K), perhaps unsurprisingly, endorse the proposal, and provide some very hasty examples of how one might use this new mechanism. Zonneveld (henceforth Z), in turn, is quite critical of K’s theory. These criticisms are laid out in chapter 3 of Zonneveld 1978 (a published version of his doctoral dissertation), which reviews quite a bit of contemporary work dealing with this issue. The 1978 book chapter (about 120 typewritten pages in all) is a really good review; it is well organized and written, and full of useful quotations from the sources it reviews, and while it is somewhat dense it is hard to imagine how it could be made less so. Z reprises the criticisms of K’s theory briefly, and nearly verbatim, in his uncommonly-detailed review of K&K’s book (Zonneveld 1979). Z has several major criticisms of rule environment theory.

First, he draws attention to an example where the conventions proposed by K will fail; I will spell this out in a bit more detail than Z does. The key example is /w-čokoruha-ha-nu-lu/ [wčokoruhahanru] ‘let’s harpoon it’. The anticipatory /-nu/ is mutable (but quiescent) and it is in the phonological context for syncope. To its left is the ‘sinister hortatory’ /-ha/, and this is known to be quiescent because it does not trigger deletion of the final vowel in /čokoruha/; cf. /čokoruha-kaka/ [čokoruhkaka] ‘to cause to harpoon’, which shows that the substring /…čokoruha-ha…/ does not undergo deletion because /-ha/ is quiescent rather than because /čokoruha/ is inalterable. To its right is the catalytic /-lu/. By K’s conventions, syncope should not apply to the /u/ in the anticipatory morpheme because /-ha/, in the left context, is [-env. Vowel Drop], but in fact it does. Z anticipates that one might want to introduce separate left and right context environment features: maybe /-ha/ is [+left env. Vowel Drop, -right env. Vowel Drop]. The following additional issues suggest the very idea is on the wrong track, though.

Secondly, Z shows that rule environment features cause additional issues if one adopts the SPE conventions. The /-ta/ in /yona-ta-nawa/ [yonatnawa] ‘to paint oneself’ is presumably quiescent because it fails to trigger syncope in /yona/.5 Thus we would expect it to be lexically [-env. Vowel Drop], and for this specification to percolate to the segments /t/ and /a/. (I referred to this as Convention 2 in my previous post, and K adopts this convention.) However, it is a problem for this specification to be present on /t/, since that /t/ is itself in the left context for Vowel Drop, and this would counterfactually block its application to the second /a/! This is schematized below.

(2) Structural description matching for /yona-ta-nawa/

   VCVCV
yonatanawa

As a related point, Z points out that there are many cases where, under K’s proposal, it is arbitrary whether one uses rule or environmental exception features. For instance, in the famous example obesity, the root-final s is part of the structural context, so the root could be marked [-rule Trisyllabic Shortening], which would percolate to the focus e, or it could be marked [-env. Trisyllabic Shortening], which would percolate to the right-context s, or both; all three options would derive non-application. This is also schematized below.

(3) Structural description matching for obesity:

  VC-VCV
obes-ity

Z continues to argue that a theory that distinguishes between leftward and rightward contextual exceptionality also will not go through. Sadly, he does not provide a full analysis of the Piro facts in his preferred theory.

Z has much more to say about the (then-)contemporary literature on rule exceptionality. For example, he discusses an idea, originally proposed by Harms (1968:119f.) and also exemplified by Kenstowicz (1970), that there are exceptions such that a rule applies to morphemes that do not meet its (phonologically defined) structural description. While he does seem to accept this, possible examples of such rules are quite thin on the ground, and the very idea seems to reflect the mania for minimizing rule descriptions and counting features that—and this is not just my opinion—polluted early generative phonology. If one rejects this frame, it is obvious that the desired effect can be simulated with two rules, applied in any order. The first will be a phonologically general one (with or without negative exceptions); the second will be the same change but targeting certain morphemes using whatever theory of exceptionality one prefers. Indeed, most examples of rules applying where their structural description is not met are already disjunctive, and I doubt whether such rules are really a single rule in the first place.

The ultimate theory Z settles on is one quite similar to that proposed by SPE. First, readjustment rules introduce rule features like [+R], and these handle simple exceptions of the obesity type. Z proposes further that such readjustment rules must be context-free, which clearly rules out using this mechanism for phonologically defined classes of negative exceptions; cf. (4-5) in my previous post. Secondly, Z proposes that so-called morphological features like Lightner’s [±Russian] will be used for deriving what we might now call “stratal” effects: morphemes that are exceptions to multiple rules. For instance, if we have three rules A, B, and C that all [-Russian] morphemes are exceptions to, then context-free redundancy rules will introduce the following rule features.

(4)
[-Russian] -> {-A}
[-Russian] -> {-B}
[-Russian] -> {-C}

Z replays several arguments from Lightner about why morphological features of this sort should be distinguished from rule features; I won’t repeat them here. Finally, Z derives minor rules via readjustment rules triggered by so-called “alphabet” features. For instance, let us again consider umlauting English plurals like goose-geese. Z supposes, adding some detail to a sketchier portion of the SPE proposal, that morphemes targeted by umlaut are marked [+G] (where G is some arbitrary feature). There are two ways one could imagine doing this.

First, the underlying form, /guːs/ perhaps, could be underlyingly [+G]. Then, let us assume that umlauting is simply fronting in the context of a Plural morphosyntactic feature, and that subsequent phonological adjustments (like the diphthongization in mouse-mice) are handled by later rules. Then it is possible to write this as follows:

(5) Umlaut (variant 1): [+Back, +G] -> {-Back} / __ [+Plural]

This rule is phonologically “context-free”, but its application is conditioned by the presence of the alphabet feature specification in the focus and the morphosyntactic feature in the context. I will take up the question of whether such rules are always phonologically context-free in a (much) later post.

I suspect that the analysis in (5) is the one Z has in mind, and it also seems to be the orthodoxy in Distributed Morphology (henceforth DM); see, e.g., Embick & Marantz 2008 and particularly their (4) for a conceptually similar analysis of the English past tense. Applying their approach strictly would lead us to miss the generalization (if it is in fact a linguistically meaningful generalization) that umlauting plurals all have a null plural suffix. Umlauting plurals have an underlying feature [+G] (there is no “list” per se; the roots are just marked), but their rules of exponence also need to “list” these umlauting morphemes as exceptionally selecting the null plural rather than the regular /-z/. It seems to me this is not necessary, because the rules of exponence for the plural could instead be sensitive to the presence or absence of [+G]. This would greatly reduce the amount of “listing” necessary. (I do not have an analysis of—and thus put aside—the other class of zero plurals in English, mass nouns like corn.)

(6) Rules of exponence for English noun plural (variant 1):

a. [+Plural] <=> ∅    / __ [+G]
b.           <=> -ɹɘn / __ {√CHILD, …}
c.           <=> -ɘn  / __ {√OX, …}
d.           <=> -z   / __
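The disjunctive ordering in (6) is just first-match-wins, which can be sketched as follows (a toy illustration; the set of [+G] roots here is a hypothetical fragment containing only the roots mentioned in this post):

```python
# Hypothetical toy lexicon: umlauting roots are marked [+G].
G_MARKED = {"goose", "foot", "mouse"}

def plural_exponent(root):
    """Disjunctively ordered rules of exponence for [+Plural], as in (6):
    the first matching case wins, and the last case is the elsewhere item."""
    if root in G_MARKED:   # (6a): null exponent for [+G] roots
        return ""
    if root == "child":    # (6b): suppletive -ren
        return "ɹɘn"
    if root == "ox":       # (6c): suppletive -en
        return "ɘn"
    return "z"             # (6d): elsewhere
```

Because (6a) checks the diacritic rather than a list of roots, adding a new umlauting noun requires only marking it [+G]; umlaut itself and the null exponent then follow without further listing.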

Secondly and more elaborately, one could imagine that [+G] is inserted by—indeed, is perhaps the expression of—plurality for umlauting morphemes. In piece-based realizational theories like DM, affixes are said to expone (and thus delete) syntactic uninterpretable features. One possibility (which brings this closer to amorphous theories without completely discarding the idea of morphs) is to treat insertion of [+G] as an exponent of plurality.

(7) Rules of exponence for English noun plural (variant 2):

a. [+Plural] <=> {+G} / __ {√GOOSE, √FOOT, √MOUSE, …}
b.           <=> -ɹɘn / __ {√CHILD}
c.           <=> -ɘn  / __ {√OX}
d.           <=> -z   / __

(7a) and (7b-d) implicate different types of computations—the former inserts an alphabet feature, the latter inserts vocabulary items—but I am supposing here that they can be put into competition. Under this alternative analysis, umlaut no longer requires a morphosyntactic context:

(8) Umlaut (variant 2): [+Back, +G] -> {-Back}

Beyond precedent, I do not see any reason to prefer analysis (5-6) over (7-8). Either can clearly derive what Lakoff called minor rules, though they differ in how exceptionality information is stored/propagated, and thus may have interesting consequences for how we relate the major/minor class distinction to theories of productivity. I have written enough for now, however, and I’ll have to return to that question and others another day.

Endnotes

  1. I too will refer to this language as Piro, as do Matteson and Kisseberth. It should not be confused with the unrelated language known as Piro Pueblo. Some subsequent work on this phenomenon refers to the language as Yine (and says it “was previously known as Piro”), though I also found another source that says that Yine is simply a major variety of Piro. I have been unable to figure out whether there’s a preferred endonym.
  2. I am not prepared to rule out the possibility that /xipa/ is itself an exception (“inalterable”), but all evidence is consistent with RTL application.
  3. In his endnote 2, K says the rule is even narrower than stated above, since it does not apply to monosyllabic roots. However, he might have failed to note that this condition is implicit in his rule, if we interpret (1) strictly as holding that the left context should be tautomorphemic. Piro requires syllables to be consonant-initial, so the minimal bisyllabic root is CV.CV. Combining this observation with (1), we see that the shortest root which can undergo vowel deletion is also bisyllabic, since concatenating the left context and target gives us a bisyllabic VCV substring. In fact, things are more complicated, because monosyllabic suffixes do undergo syncope; many examples are provided above. Clearly, the deleting vowel need not be tautomorphemic with the preceding vowel, contrary to what a strict reading of the “+” in (1) would seem to imply. According to González, syncope imposes no constraints on the morphological structure of its context except that it only applies in derived environments—CVCVCV trisyllables like /kanawa/ ‘canoe’ surface faithfully as [kanawa], not *[kanwa]—and is subject to the lexical exceptionality discussed here.
  4. K glosses this as ‘still, yet’.
  5. As was the case with /xipa/ in endnote 2, we’d like to confirm that /yona/ is mutable rather than inalterable, but one does not simply walk into Matteson 1965.

References

Embick, D. and Marantz, A. 2008. Architecture and blocking. Linguistic Inquiry 39(1): 1-53.
González, H. 2023. An evolutionary account of vowel syncope in Yine. Ms., CUNY Graduate Center.
Harms, R. T. 1968. Introduction to Phonological Theory. Prentice-Hall.
Kenstowicz, M. 1970. Lithuanian third person future. In J. R. Sadock and A. L. Vanek (ed.), Studies Presented to Robert B. Lees by His Students, pages 95-108. Linguistic Research.
Kenstowicz, M. and Kisseberth, C. W. 1977. Topics in Phonological Theory. Academic Press.
Kisseberth, C. W. 1970. The treatment of exceptions. Papers in Linguistics 2: 44-58.
Matteson, E. 1965. The Piro (Arawakan) Language. University of California Press.
Zonneveld, W. 1978. A Formal Theory of Exceptions in Generative Phonology. Peter de Ridder.
Zonneveld, W. 1979. On the failure of hasty phonology: A review of Michael Kenstowicz and Charles Kisseberth, Topics in Phonological Theory. Lingua 47: 209-255.

Pied piping and style

I find pied-piping in English a bit stilted, even if it is sometimes the prescribed option. Consider the following contrast:

(1) I’m not someone to fuck with.
(2) I’m not someone with whom to fuck.

In (1) the preposition with is stranded; in (2) it raises along with the wh-element. What are your impressions of a speaker who says (2)? For me, they sound a bit like a nerd, or perhaps a cartoonish villain. I thought about this the other day because I was watching Alien Resurrection (1997)—it’s okay but not one of my favorite entries in the Weyland-Yutani cinematic universe—and one of the first bits of characterization we get for mercenary “Ron Johner”, played by badass Ron Perlman, is the following bit of dialogue (here taken directly from Joss Whedon’s screenplay):

This would work if Johner was a sort of evil genius, or if it was some kind of callback to something earlier, but I think this is probably just unanalyzed language pedantry ruining the vibe a little.

Generativism and anti-linguistics

I strongly identify with the generativist program. I recognize and accept that there are other ways to study language; some of these (e.g., any reasonably careful documentary work) contribute to generativist discourse and many of those that don’t are still prosocial. I for one would love to see the humanist aspect of documentation get more recognition. (Why don’t humanities programs hire linguists engaged in documentation and translation efforts?) But I’m most interested in the scientific aspects of language and think that generativism basically encompasses the big questions in this area, and some of the questions it doesn’t encompass just aren’t very important.

I don’t think it’s really ideal to brand generativism as Chomskyanism, which is the term anti-generativists tend to use. Certainly Chomsky is the plurality contributor to the program, but I think it gives undue credit to a single individual when there are so many others worth recognizing. I suspect the reason anti-generativists prefer it is that they tend to see generativism as a cult of personality, and perhaps want to trade on the repute of Chomsky’s (admittedly, extremely idiosyncratic but conceptually unrelated) political commitments. In evolutionary biology, it is common to refer to the modern theory of natural selection as the neo-Darwinian synthesis or modern synthesis. This makes sense because in 2024 there are no “strict Darwinists”, since subsequent work has integrated his monumental contributions with Mendelian and molecular genetics. Similarly, linguistics has no “strict Chomskyans”, even though we linguists eagerly await our Mendel and our Crick & Watson.

The thing that sticks with me about the anti-generativist contingent is how disunited and disorganized they are. Anti-generativists are mostly a sincere lot (generativists too), but their attitudes are greatly shaped by negative polarization, and as such they make strange bedfellows. On the anti-generativist internet, you’ll see Adorno-disciple social constructivists talking at cross-purposes with construction grammarians, self-identified leftist/radical sociolinguists palling around with neocon consent-manufacturing journalists, experimental psycholinguists who reserve all their respect for exactly one out-of-practice fieldworker, tensorbros who don’t read books, and a few really mad, really old Boomers who never managed to build a movement around their heresy. By all accounts these people ought to hate each other. (And maybe, deep down, they do.)

In the worst case these conversations veer away from constructive critique into a kind of anti-linguistics which devalues any form of language analysis that isn’t legible either as social activism or as white-coat-wearing lab science. I for one can’t take your opinion about the science of language seriously if you can’t do the “armchair linguistics” that forms the descriptive-empirical base of the field. There are anti-generativists who clear this low bar, but not many. You don’t have to be a genius to do linguistics, but you do sort of have to be a linguist.

In my opinion, generativism has never been hegemonic beyond the level of individual departments, and claims otherwise are simply scurrilous. (Even MIT is a hotbed of anti-generativist reaction, after all.) But I think it would be a shame for college students to get a liberal arts education without learning about these very interesting ideas about human nature (in addition to standard consciousness-raising about prescriptivism and language ideology, which is important too).

A thought about academic jobs

I try not to pontificate about the academic job market. I recognize that I am incredibly fortunate to have the job I have. I recognize that it is hard to get such a job, that in some sense it comes down to luck, that there are more PhDs than faculty jobs, and finally that my job is not my friend. That said…

A colleague of mine had a PhD advisee who was offered a more or less ideal tenure-track job, at an excellent state school specializing in the advisee’s subarea, in a very pleasant town. The student, believe it or not, turned it down, and is now starting more or less from scratch on the alt-ac path. I genuinely don’t understand this. Earning a PhD in your field is the one always-necessary condition for getting a faculty job, even if the skills transfer to other pursuits. The demands a graduate program makes of you are, to a great degree, the demands of getting a faculty job. There are of course extra steps—that qualifying paper has to be sent off to a journal, and so on—but in terms of effort they are nothing compared to the work needed to earn your degree. If you are doing well in your PhD program and enjoying your studies, why not, for as long as you are able, apply for faculty positions? If you are not meeting your program’s expectations, your pessimism about the academic job market is beside the point; if you are meeting or exceeding those expectations, you really might want to consider it.