Defectivity in Russian; part 2: nouns

[This is part of a series of defectivity case studies.]

Pertsova (2005, henceforth P) describes defectivity in the genitive plural (henceforth, gen.pl.) of Russian nouns. This defectivity is not quite as extensive as in the verbs: Zaliznyak’s (1977) renowned morphological dictionary, Pertsova’s primary source, labels roughly forty noun lexemes as Р.мн. затрудн. ‘gen.pl. difficult’ (P gives this as ‘awkward’), and a smaller number are labeled Р.мн. нет ‘gen.pl. does not exist’. Pre-theoretically, the Russian gen.pl. form has three complexities. First, there are three (surface) gen.pl. suffixes: -ej, -ov, and a null -∅ (though the latter is often taken to be the surface form of a back yer; e.g., Bailyn & Nevins 2008). Second, stress may fall either on the stem or on the desinence, depending on a number of factors. Third, the fleeting vowels found in many Slavic languages sometimes surface in the final syllable of the stem of null gen.pl. forms; e.g., kiška ‘gut’ has the gen.pl. kišok.

Let us consider the ‘do not exist’ cases first. A few examples are dno ‘bottom’, mgla ‘haze’, and mzda, an archaic word meaning ‘bribe’. All three are built from stems consisting solely of consonants—the -o and -a are nom.sg. affixes—and according to P, we would expect them to form a null gen.pl.1 Were they to do so, the gen.pl. form would be a purely consonantal sequence (*dn, *mgl, *mzd). While Russian allows quite rich onsets, it does not permit vowel-less prosodic words. So, presumably, these ‘do not exist’ cases are in some sense derived, but are unpronounceable.

The cases labeled by Zaliznyak as ‘difficult’ are perhaps more challenging. These include fata ‘veil’, yula ‘spinning top’, and suma ‘bag, pouch’. Many of these (e.g., čalma ‘turban’, taxta ‘ottoman’, mulla ‘mullah’) are borrowings from Tatar or other Turkic languages. All forty of the ‘difficult’ nouns are feminines in -a, and all of them have desinential stress (i.e., stress on the case/number suffix) throughout. According to P, these ‘difficult’ nouns are all expected to have a null gen.pl. This means that stem stress is the only option (because, duh, there’s no way to stress a null desinence), yet speakers are reluctant to introduce a new, stem-stressed allomorph of a stem that does not otherwise exhibit stem stress, because of a principle known as lexical conservatism.

I follow this basic logic, but I am not sure how to make the lexical conservatism filter into something mechanistic and appropriately parameterized, such that it can actually predict the data we observe. P points out that there are other nouns which have exactly the same properties (null gen.pl., desinential stress) but resolve this conflict by retracting stress onto the final syllable of the stem. For example, borodá ‘beard’ has a null, stem-stressed, but unobjectionable gen.pl. boród, as does the previously mentioned kišká/kišók.2 After considering and rejecting an appeal to frequency, P suggests that there is something special about monosyllabic stems that makes them more resistant to stem allomorphy. The problems do not end here, because, as P gamely points out, the gen.pl. is not the only case/number combination which forces stress retraction; it also happens in the null nom.sg. of masculines like stol ‘table’, but I am not aware of any Russian nouns which are defective in the nom.sg.

While I find P’s description clear, I do not think her analysis works. It correctly identifies two interesting explananda—the fact that gaps are localized to monosyllabic stems, and to the genitive plural—but gives little explanation for these facts. And the invocation of lexical conservatism, never that well-defined in the first place, does little to help; it could just as well be that the conflict between desinential stress and the selection of a null gen.pl. itself produces ineffability. Lexical conservatism actually prevents us from uniting the ‘does not exist’ and the ‘difficult’ cases, which after all both involve monosyllabic stems expected to exhibit desinential stress and a null gen.pl.3

Sims (2015) notes that these defective nouns might be an interesting case for studying the interaction between defectivity and syncretism, a topic first discussed by Stump (2010). In Russian there is syncretism between gen.pl. and accusative plural (acc.pl.) forms for all animate nouns. Since twelve of the defective nouns are animate, either there is an exception to this syncretism (if the acc.pl. is acceptable) or the defectivity itself is carried over by the syncretism and these nouns are defective in the acc.pl. too. No one seems to have gathered the relevant judgments about the well-formedness of these animates in their gen.pl. vs. acc.pl. forms yet.

Postscript

As is also the case for Russian verbs, defectivity in Russian nouns has arguably risen to the level of conscious awareness among Russophones. In his short story Kocherga, the Soviet humorist Mikhail Zoshchenko tells of workers who wish to order five more fire pokers for their drafty office. The nom.sg. kocherga ‘fire poker’ is unobjectionable, but no one they talk to is sure what the gen.pl., the case required for five or more of an object in a quantified noun phrase like ‘five fire pokers’, ought to be, and both they and their interlocutors use clever circumlocutions to dodge the question.

Endnotes

  1. I note that Wiktionary gives the gen.pl. of dno as don’ev, with a back stem yer surfacing (also bearing stress) and the -ev gen.pl. suffix. If this is correct, it is a problem for P’s claim that stems of this class are expected to have a null gen.pl., unless there is some other reason for dno to behave differently from mgla or mzda.
  2. Here I am using acute accents to indicate stress, as is common practice in (anglophone) Russian linguistics.
  3. I am loath to assign grammatical distinctions to informal gradations of unacceptability, so I see no incentive to distinguish these cases.

References

Bailyn, J. F. and Nevins, A. 2008. Russian genitive plurals are impostors. In A. Bachrach and A. Nevins (eds.), Inflectional Identity, pages 237-270. Oxford University Press.
Pertsova, K. 2005. How lexical conservatism can lead to paradigm gaps. UCLA Working Papers in Linguistics 11: 13-30.
Sims, A. 2015. Inflectional Defectiveness. Cambridge University Press.
Stump, G. 2010. Interactions between defectiveness and syncretism. In M. Baerman, G. G. Corbett, and D. Brown (eds.), Defective Paradigms: Missing Forms and What They Tell Us, pages 181-210. Oxford University Press.
Zaliznyak, A. A. 1977. Grammatičeskij slovar’ russkogo jazyka. Moskva.

Defectivity in Russian; part 1: verbs

[This is part of a series of defectivity case studies.]

The earliest discussion of defectivity within the generativist tradition can be found in an early paper by Halle (1973:7f.).

…one finds various kinds of defective paradigms in the inflection. For instance, in Russian there are about 100 verbs (all, incidentally, belonging to the so-called “second conjugation”) which lack first person singular forms of the nonpast tense. Russian grammar books frequently note that such forms as (8) “do not exist” or “are not used”, or “are avoided.”

(8)
*lažu ‘I climb’
*pobežu (or *pobeždu) ‘I conquer’
*deržu ‘I talk rudely’
*muču ‘I stir up’
*erunžu ‘I behave foolishly’

Subsequent work slightly lowers Halle’s estimate of 100 verbs. Combining evidence from Russian morphological dictionaries, Sims (2006) provides a list of 70 defective verbs, and Pertsova (2016) further refines Sims’ list to 63. But by any account, defectivity affects many more verb types in Russian than it does in, for example, English.

All of the defective verbs end in a dental consonant—s, z, t, or d—and belong to the second conjugation (in which verbs form infinitives in -et’ or -it’), and they are defective only in the 1sg. non-past form, which is marked with the suffix -u and a mutation of the stem-final dental.

Baerman (2008) provides a detailed history of the mutations of t and d. The modern mutations, to č and ž respectively, represent the expected Russian reflexes of Common Slavic *tʲ and *dʲ. Christianization, beginning at the close of the first millennium, brought about a period of substantial contact with southern Slavic speakers, and their liturgical language, Old Church Slavonic (OCS), contributed novel reflexes of *tʲ and *dʲ, namely šč [ɕː] and žd [ʐd]. The OCS reflexes were found in, among other contexts, the 1sg. non-past—where they competed with the native mutations—and the past passive participle, where they were largely entrenched. Ultimately, šč persisted in the 1sg. non-past of some lexemes, but žd was driven out sometime in the early 20th century (ibid., 85). However, the latter persists in past passive participles (e.g., rodit’ ‘to give birth’ has the past passive participle roždënnyj).1 Outside of that context, the OCS žd mutation is rarely found in contemporary written Russian. However, Gorman & Yang (2019; henceforth G&Y) cite some weak evidence that the OCS mutation retains some synchronic purchase in the minds of Russian speakers. First, Sims (2006) administers a cloze task in which Russian speakers are asked to produce the 1sg. non-past of a defective verb shown in the infinitive (e.g., ubedit’ ‘to convince’), and several participants select the OCS-like ubeždu, which is proscribed. Secondly, Slioussar and Kholodilova (2013), Pertsova (2016), and Spektor (2021) catalog what happens when verbs borrowed from English end in a dental consonant. For instance, from English friend come zafrendit’ ‘to add s.o. to one’s friend list on social media’ and rasfrendit’ ‘to unfriend s.o. on social media’, and among the many attested options they find instances of the OCS-like zafrenždu in addition to the expected zafrenžu. To add to the confusion, there is some hesitation on the part of Russian speakers to apply either of the expected 1sg. non-past mutations, and some speakers produce the unexpected, unmutated zafrendu.2 There is no precedent for this among native Russian verb lexemes.

The mutations of s and z, to š and ž respectively, have no competitors inherited from contact with OCS, and they occur across the board. However, that’s not quite the whole story: English borrowings in the Slioussar and Kholodilova corpus often fail to alternate. For instance, for fiksit’ ‘to fix s.t.’, they record both the expected fikšu and the unexpected, unmutated fiksu.

G&Y develop an account of the Russian verbal gaps which assumes that each of these four dental consonants does have a synchronically active competitor, and that there is simply no default. They couch this in terms of Yang’s Tolerance Principle, but even someone who rejects that particular method of deciding what is and is not productive might still agree with the basic insight—as indicated by the English dental-stem loanwords—that the dental mutations are no longer productive, and that this lack of productivity, along with sparse data during acquisition, results in defectivity.
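
For readers unfamiliar with the Tolerance Principle, the arithmetic behind this reasoning is easy to state. The sketch below is mine, and the counts are invented purely for illustration (they are not G&Y’s):

    from math import log

    def tolerates(n: int, e: int) -> bool:
        """Yang's Tolerance Principle: a rule applying to n items is
        productive iff its exceptions e satisfy e <= n / ln(n)."""
        return e <= n / log(n)

    # Invented counts for illustration: if 100 dental-final stems split
    # 60/40 between two competing mutations, neither variant can treat
    # the other's members as exceptions, so neither can be the default.
    print(tolerates(100, 40))  # False: the threshold is 100/ln(100), about 21.7
    print(tolerates(100, 60))  # False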

Other accounts of this phenomenon can be found in ch. 7 of Sims 2015 and in Pertsova 2016. These two studies contain many interesting suggestions for future work. However, with respect, I must say I am not sure how to operationalize their suggestions as part of a mechanistic account of these observations.

Postscript

The aforementioned defectivity is the subject of occasional humor among Russian speakers. For instance, as discussed by Sims (2015:5), a Russian translation of one of Milne’s Winnie the Pooh stories has the anthropomorphic bear puzzling over the 1sg. of pobedit’ ‘to be victorious’. This suggests that Russian verbal defectivity has risen to the level of consciousness, and may reflect sociolinguistic “change from above”.

Endnotes

  1. This form is cited in G&Y (2019:186); I have taken the liberty of fixing an inconsistency in the transliteration: there рождённый was transliterated as roždënny (note the missing final glide).
  2. Russian has many indeclinable nouns, nouns which do not bear the ordinary case-number suffixes (Wade 2020:§36-40). For instance, radio ‘id.’ and VIČ ‘HIV’ can be used in any of the six cases and two numbers, but never bear any case-number suffixes. Crucially, though, indeclinables, unlike the aforementioned verbs, are either phonotactically-odd loanwords or acronyms, and as far as I can tell there is nothing phonotactically odd about zafrendit’ or its stem. And one should certainly not equate indeclinability with defectivity.

References

Baerman, M. 2008. Historical observations on defectiveness: The first singular non-past. Russian Linguistics 32: 81-97.
Gorman, K. and Yang, C. 2019. When nobody wins. In F. Rainer, F. Gardani, H. C. Luschützky, and W. U. Dressler (eds.), Competition in Inflection and Word Formation, pages 169-193. Springer.
Halle, M. 1973. Prolegomena to a theory of word formation. Linguistic Inquiry 4: 3-16.
Pertsova, K. 2016. Transderivational relations and paradigm gaps in Russian verbs. Glossa 1: 13.
Sims, A. 2006. Minding the gap: Inflectional defectiveness in a paradigmatic theory. Doctoral dissertation, Ohio State University.
Sims, A. 2015. Inflectional Defectiveness. Cambridge University Press.
Slioussar, N. and Kholodilova, M. 2013. Paradigm leveling in non-standard Russian. In Proceedings of the 20th Meeting of Formal Approaches to Slavic Linguistics, pages 243-258.
Spektor, Y. 2021. Detection and morphological analysis of novel Russian loanwords. Master’s thesis, Graduate Center, City University of New York.
Wade, T. 2020. A Comprehensive Russian Grammar. Wiley Blackwell, 4th edition.

On “significance levels”

R (I think it was R) introduced a practice in which multiple asterisk characters are used to indicate different significance levels for tests. [Correction: Bill Idsardi points out some prior art that probably predates the R convention. I have no idea what S or S-Plus did, nor what R was like before 2006 or so. But certainly R has helped popularize it.] For instance, in R statistical summaries, * denotes a p-value such that .01 < p < .05, ** denotes a p-value such that .001 < p < .01, and *** denotes a p-value < .001. This type of reporting can increasingly be found in papers too, but there are good reasons not to copy R’s bad behavior.

In null hypothesis testing, the mere size of the p-value itself has no meaning. All that matters is whether p is greater than or less than the α-level. Depending on space, we may report the exact value of p for a test (often rounded to two digits, with “< .01” used for abbreviatory purposes, since you don’t want to round down here), but we need not. And it simply does not matter how small p is when it’s less than the α-level. There is no notion of “more significant” or “less significant”.

R also uses the period character ‘.’ to indicate a p-value between .05 and .1. Of course, I have never read a single study using an α-level greater than .05 (I suppose this would simply make the probability of Type I error too high), so I’m not sure what the point is.

My suggestion here is simple. If you want, use ‘*’ to indicate a significant (p < α) result, and then in the caption write something like “*: p < .05” (assuming that your α-level is .05). Do not use additional asterisks.
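
Should you want to automate this convention, the logic is a one-liner. Here is a minimal sketch (the function name is mine):

    def significance_marker(p: float, alpha: float = 0.05) -> str:
        """Binary marking: '*' iff p < alpha, nothing otherwise. There are
        deliberately no gradations: once p is below the alpha-level, its
        magnitude carries no further meaning."""
        return "*" if p < alpha else ""

    assert significance_marker(0.03) == "*"
    assert significance_marker(0.0001) == "*"  # not '***'
    assert significance_marker(0.08) == ""     # not '.'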

Major projects at the Computational Linguistics lab

[The following is geared towards our incoming students. I’m just using the blog as an easy publishing mechanism.]

The following are some major projects ongoing in the GC Computational Linguistics Lab.

Many phonologists believe that phonotactic knowledge is independent of knowledge of phonological alternations. In my dissertation I evaluated computational models of autonomous phonotactic knowledge as predictors of speakers’ judgments of wordlikeness, and I found that these fail to consistently outperform simple baselines. In part, these models fail because they predict gradience that is poorly correlated with human judgments. However, these conclusions were tentative because of the poor quality of the available data, which was collected with little attention paid to experimental design or choice of stimuli. With funding from the National Science Foundation, and in collaboration with professors Karthik Durvasula at Michigan State University and Jimin Kahng at the University of Mississippi, we are building an open-source “megastudy” of human wordlikeness judgments and performing computational modeling of the resulting data.

Speech recognizers and synthesizers are, essentially, engines for recognizing or synthesizing sequences of phonemes. Therefore, it is necessary to transform text into phoneme sequences. Such transformations are challenging insofar as they require linguistic expertise—and language-specific knowledge—and are not always amenable to generic machine learning techniques. We are engaged in several projects involving these mappings. The lab maintains WikiPron (Lee et al. 2020), software and databases for building multilingual pronunciation dictionaries, and has organized two SIGMORPHON shared tasks on multilingual grapheme-to-phoneme conversion (Gorman et al. 2020, Ashby et al. 2021). And with funding from the CUNY Professional Staff Congress, PhD student Amal Aissaoui is engaged in building diacritization engines for Arabic and Latin, engines which supply missing pronunciation information for these scripts.

Morphological generation systems use machine learning to predict the inflected forms of words. In 2019 I led a team of researchers in an error analysis of the top two systems in the CoNLL-SIGMORPHON 2017 shared task on morphological generation (Gorman et al. 2019). We found that the top models struggled with inflectional patterns which are sensitive to lexeme-inherent morphosyntactic features like gender, animacy, and aspect, which are not provided in the task data. For instance, the top models often inflect Russian perfective verbs as if they were imperfective, or Polish inanimate nouns as if they were animate. Finally, we found that models struggle with abstract morphophonological patterns which cannot be inferred from the citation form alone. For instance, the top models struggle to predict whether or not a Spanish verb will undergo diphthongization under stress (e.g., negar/niego ‘to deny/I deny’ vs. pegar/pego ‘to stick/I stick’). In collaboration with professor Katharina Kann and PhD student Adam Wiemerslage at the University of Colorado, Boulder, we are developing an open-source “challenge set” for morphological generation, a set that targets complex inflectional patterns in a diverse sample of 10-20 languages. This challenge set will act as a benchmark for neural network models of inflection, and will allow us to further study inherent features and abstract morphophonological patterns. In designing these challenge sets we have targeted a wide variety of morphological processes, including reduplication and templatic formation in addition to affixation and stem change. MA students Kristysha Chan, Mariana Graterol, and M. Elizabeth Garza, and PhD student Selin Alkan have all contributed to the development of this challenge set thus far.

Inflectional defectivity is the poorly-understood dark twin of productivity. With funding from the CUNY Professional Staff Congress, Emily Charde (MA 2020) is engaged in a computational study of defectivity in Greek nouns and Russian verbs.

Defectivity in Kinande

[This is part of a series of defectivity case studies.]

I have already written a bit about reduplication in Kinande; it too is an example of inflectional defectivity, and here I’ll focus on that fact.

In this language, most verbs participate in a form of reduplication with the semantics of roughly ‘to hurriedly V’ or ‘to repetitively V’. Mutaka & Hyman (1990; henceforth MH) argue that the reduplicant is a bisyllabic prefix. For instance, the reduplicated form of e-ri-gend-a ‘to leave’ is e-ri-gend-a-gend-a ‘to leave hurriedly’, where the first gend-a is the reduplicant. (In MH’s terms, e- is the “augment”, -ri the “prefix”, and -a is the “final vowel” morpheme.)

Certain verbal suffixes, known to Bantuists as extensions, may also be found in the reduplicant when the reduplicant would otherwise be less than bisyllabic. For instance, the passive suffix, underlyingly /-u-/, surfaces as [w] and is copied by reduplication. Thus for the verb root hum ‘beat’, the passive e-ri-hum-w-a reduplicates as e-ri-hum-w-a-hum-w-a. More interestingly, there are also “unproductive” (MH’s term) extensions.1 Verbs bearing these extensions rarely have a compositional semantic relationship with their unextended form (if an unextended verb stem exists at all). For instance, whereas luh-uk-a ‘take a rest’ may be semantically related to luh-a ‘be tired’, there is no unextended *bát-a to go with bát-uk-a ‘move’.

Interesting things happen when we try to reduplicate unproductively extended monosyllabic verb roots. For some such verbs, the extension is not reduplicated; e.g., e-rí-bang-uk-a ‘to jump about’ has the reduplicated form e-rí-bang-a-bang-uk-a. This is the same behavior found with “productive” extensions. For others, the extension is reduplicated, producing a trisyllabic—instead of the normal bisyllabic—reduplicant; e.g., e-ri-hurut-a ‘to snore’ has the reduplicated form e-ri-hur-ut-a-hur-ut-a. Finally, there are some stems—all monosyllabic verb roots with unproductive extensions—which do not undergo reduplication at all; e.g., e-rí-bug-ul-a ‘to find’ does not reduplicate, and neither *e-rí-bug-a-bug-ul-a nor *e-rí-bug-ul-a-bug-ul-a exists.

While one could imagine that there are certain semantic restrictions on reduplication, as in Chaha, MH make no mention of such restrictions in Kinande. If possible, we should rule this out as a possible explanation for the aforementioned defectivity.

Endnotes

  1. I will segment these with hyphens though it may make sense to regard some unproductive extensions as part of morphologically simplex stems.

References

Mutaka, N. and Hyman, L. M. 1990. Syllables and morpheme integrity in Kinande reduplication. Phonology 7: 73-119.

Defectivity in Polish

[This is part of a series of defectivity case studies.]

Gorman & Yang (2019), following up on a tip from Margaret Borowczyk (p.c.), discuss inflectional gaps in Polish declension. In this language, the masculine genitive singular (gen.sg.) is marked either with -a or -u. The two gen.sg. suffixes have similar type frequencies, and neither appears to be more default-like than the other. For instance, both allomorphs are used with loanwords. Because of this, it is generally agreed that the gen.sg. allomorphy is purely arbitrary and must be learned by rote, a process that continues into adulthood (e.g., Dąbrowska 2001, 2005).

Kottum (1981:182) reports that his informants have no gen.sg. for masculine-gender toponyms like Dublin ‘id.’ (e.g., *Dublina/*Dublinu), Göteborg ‘Gothenburg’, and Tarnobrzeg ‘id.’, and Gorman & Yang (2019:184) report that their informants have no gen.sg. for words like drut ‘wire’ (e.g., *druta/*drutu, though the latter is prescribed), rower ‘bicycle’, balon ‘balloon’, karabin ‘rifle’, autobus ‘bus’, and lotos ‘lotus flower’.

References

Dąbrowska, E. 2001. Learning a morphological system without a default: The Polish genitive. Journal of Child Language 28: 545-574.
Dąbrowska, E. 2005. Productivity and beyond: mastering the Polish genitive inflection. Journal of Child Language 32: 191-205.
Gorman, K. and Yang, C. 2019. When nobody wins. In F. Rainer, F. Gardani, H. C. Luschützky, and W. U. Dressler (eds.), Competition in Inflection and Word Formation, pages 169-193. Springer.
Kottum, S. S. 1981. The genitive singular form of masculine nouns in Polish. Scando-Slavica 27: 179-186.

Defectivity in Chaha

[This is part of a series of defectivity case studies.]

Rose (2000) describes a circumscribed form of defectivity in Chaha, a Semitic language spoken in Ethiopia. Throughout Ethio-Semitic, many verbs have a frequentative formed using a quadriliteral verbal template. Since few verb roots are quadriconsonantal—most are triconsonantal, some are biconsonantal—a sort of reduplication and/or spreading is used to fill in the template. In Tigrinya, for instance (p. 318), the frequentative template is of the form CɘCaCɘC. The frequentative of the triconsonantal verb root √/grf/ ‘collect’ is thus [gɘrarɘf], with the root /r/ repeated, and for a biconsonantal verb root like √/ħt/ ‘ask’, the frequentative is [ħatatɘt], with three root /t/s.
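
To make the template-filling logic concrete, here is a minimal sketch (mine, not Rose’s formalism). It fills the template’s four C-slots by spreading the root’s second consonant; it deliberately ignores independent adjustments to vowel quality, such as the initial [a] of [ħatatɘt], presumably an effect of the adjacent guttural:

    def tigrinya_frequentative(root: list[str]) -> str:
        """Fills the CɘCaCɘC template by repeating the second root
        consonant until all four C-slots are occupied."""
        cs = list(root)
        while len(cs) < 4:
            cs.insert(2, cs[1])  # spread C2 into the empty medial slot
        c1, c2, c3, c4 = cs
        return f"{c1}ɘ{c2}a{c3}ɘ{c4}"

    print(tigrinya_frequentative(["g", "r", "f"]))  # gɘrarɘf
    print(tigrinya_frequentative(["ħ", "t"]))       # ħɘtatɘt; surface [ħatatɘt]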

Rose contrasts this state of affairs with Chaha. In this language, the frequentative template CɨCɘCɘC cannot be satisfied by a biconsonantal root like √/tʼm/ ‘bend’ or √/Rd/ ‘burn’, and all such verbs lack a frequentative.1 The expected *[tʼɨmɘmɘm] and *[nɨdɘdɘd] are ill-formed, as are all other alternatives. Furthermore, no frequentatives of any sort can be formed with quadriconsonantal roots.

Rose notes that there are often semantic reasons for a verb to lack a frequentative (e.g., stative and resultative verbs are generally not compatible with it), but this does not seem applicable here.

Endnotes

  1. As Rose explains: “R represents a coronal sonorant which may be realized as [n] or [r] depending on context…” (p. 317).

References

Rose, S. 2000. Multiple correspondence in reduplication. In Proceedings of the 23rd Annual Meeting of the Berkeley Linguistics Society, pages 315-326.

Defectivity in English

[This is part of a small but growing series of defectivity case studies.]

English lexical verbs can have up to five distinct forms, and I am aware of just a few English verbs which are defective. (The following are all my personal judgments.)

  1. I can use begone as an imperative, though it has the form of a past participle (cf. gone and forgone). Is BEGO even a verb lexeme anymore?
  2. Fodor (1972), following Lakoff (1970 [1965]), notes that BEWARE has a limited distribution and never bears explicit inflection. For me, it can occur only as a positive imperative (e.g., beware the dog!), with or without emphatic do. I agree with Fodor that it is also bad under negation, but perhaps for unrelated reasons: e.g., *don’t beware… 
  3. FORGO lacks a simple past: forgo, forgoes, and forgoing are fine, as is the past participle forgone, but *forwent is bad as the preterite/simple past, and *forgoed is perhaps a bit worse.
  4. METHINK can only be used in the 3sg. present active indicative form methinks, and doesn’t allow for an explicit subject.
  5. STRIDE lacks a past participle (e.g., Hill 1976:668, Pinker 1999:136f., Pullum and Wilson 1977:770): *stridden is bad.  The simple past strode cannot be reused here, and I cannot use the regular *strided (under the relevant sense).

References

Fodor, J. D. 1972. Beware. Linguistic Inquiry 3: 528-534.
Hill, A. A. 1976. [Obituary:] Albert Henry Marckwardt. Language 52: 667-681.
Lakoff, G. 1970. Irregularity in Syntax. Holt, Rinehart and Winston.
Pinker, S. 1999. Words and Rules: The Ingredients of Language. Basic Books.
Pullum, G. K. and Wilson, D. 1977. Autonomous syntax and the analysis of auxiliaries. Language 53:741-788.

Deriving the major rule/minor rule distinction

If feature-filling is implemented by unification (e.g., Bale et al. 2014), then the ability to target lexemes underspecified for a rule feature ought to enable us to derive the traditional distinction (e.g., Lakoff 1970) between major rules (those for which non-application is exceptional) and minor rules (those for which application is exceptional). On this view, the distinction is purely descriptive: it reduces to which unmarked rule feature a later feature-filling rule inserts upon lexical insertion.

Let us suppose we have a rule R, and that every formative is unified with {+R} upon lexical insertion. Then, unification will fail only with formatives specified [−R], and these formatives will exhibit exceptional non-application. This describes the parade example of exceptions to a major rule: the failure of trisyllabic shortening in obesity (assuming obese is [−trisyllabic shortening]; see Chomsky & Halle 1968: §4.2.2).

Let us suppose instead that every formative is unified with {−R} upon lexical insertion. Then, unification will fail only with those formatives specified [+R], and these formatives will exhibit exceptional application, assuming they otherwise satisfy the phonological description of rule R. This describes minor rules.
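
Here is a minimal computational sketch of this idea (the encoding is mine, not Bale et al.’s). Lexical entries may specify [+R], [−R], or nothing at all; lexical insertion attempts to unify each entry with the unmarked value, and when unification fails, the lexical specification stands:

    def unify(lexical: dict, default: dict) -> dict | None:
        """Feature-filling unification: fills in unspecified features,
        but fails (returns None) if the specifications conflict."""
        merged = dict(default)
        for feature, value in lexical.items():
            if feature in merged and merged[feature] != value:
                return None  # conflict: the lexical specification stands
            merged[feature] = value
        return merged

    # Major rule R: the unmarked value inserted is {+R}.
    assert unify({}, {"R": "+"}) == {"R": "+"}    # ordinary formative: R applies
    assert unify({"R": "-"}, {"R": "+"}) is None  # e.g., obese: exceptional non-application

    # Minor rule R: the unmarked value inserted is {-R}.
    assert unify({}, {"R": "-"}) == {"R": "-"}    # ordinary formative: R does not apply
    assert unify({"R": "+"}, {"R": "-"}) is None  # exceptional application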

This (admittedly quite sketchy at present) idea seems to address Zonneveld’s (1978:160f.) concern that Lakoff and his contemporaries did not posit any way to encode whether a rule was major or minor, except “transderivationally”, via inspection of successful derivations. It also places the major/minor distinction—correctly, I think—within the scope of a theory of productivity. More on this later.

References

Bale, A., Papillon, M., and Reiss, C. 2014. Targeting underspecified segments: a formal analysis of feature-changing and feature-filling rules. Lingua 148: 240-253.
Chomsky, N. and Halle, M. 1968. The Sound Pattern of English. Harper & Row.
Lakoff, G. 1970. Irregularity in Syntax. Holt, Rinehart and Winston.
Zonneveld, W. 1978. A Formal Theory of Exceptions in Generative Phonology. Peter de Ridder.

Linguistics’ contribution to speech & language processing

How does linguistics contribute to speech & language processing? While there exist some “linguist eliminationists”, who wish to process speech audio or text “from scratch” without intermediate linguistic representations, it is generally recognized that linguistic representations are the end goal of many processing “tasks”. Of course, some tasks involve poorly-defined, or ill-posed, end-state representations—the detection of hate speech and of named entities, neither of which is particularly well-defined, linguistically or otherwise, comes to mind—but such tasks are driven by the apparent business value to be extracted rather than by any serious goal of understanding speech or text.

The standard example for this kind of argument is syntax. It might be the case that syntactic representations are not as useful for textual understanding as was anticipated, and useful features for downstream machine learning can apparently be induced using far simpler approaches, like the masked language modeling task used for pre-training in many neural models. But it’s not as if a terrorist cell of rogue linguists locked NLP researchers in their offices until they developed the field of natural language parsing. NLP researchers decided, of their own volition, to spend the last thirty years building models which could recover natural language syntax, and they ultimately got pretty good at it, probably to the point where, I suspect, the remaining unresolved ambiguities mostly hinge on world knowledge that is rarely if ever made explicit.

Let us consider another example, less widely discussed: the phoneme. The phoneme was discovered in the late 19th century by Baudouin de Courtenay and Kruszewski; it has been around a very long time. In the century and a half since it emerged from the Polish academy, Poland itself has been a congress, a kingdom, a military dictatorship, and a republic (three times), and has been annexed by the Russian empire, the German Reich, and the Soviet Union. The phoneme is probably here to stay. It is, by any reasonable account, one of the most successful abstractions in the history of science.

It is no surprise, then, that the phoneme plays a major role in speech technologies. Not only did the first speech recognizers and synthesizers make explicit use of phonemic representations (as well as notions like allophones); so did the next five decades’ worth of recognizers and synthesizers. Conventional recognizers and synthesizers require large pronunciation lexicons mapping between orthographic and phonemic forms, and, as they get closer to speech, convert these “context-independent” phonemic sequences into “context-dependent” representations which can account for allophony and local coarticulation, exactly as any linguist would expect. It is only in the last few years that it has even become possible to build a reasonably effective recognizer or synthesizer which doesn’t have an explicit phonemic level of representation. Such models instead use clever tricks and enormous amounts of data to induce implicit phonemic representations. We have every reason to suspect these implicit representations are quite similar to the explicit ones linguists would posit. For one, these implicit representations are keyed to orthographic characters, and as I wrote a month ago, “the linguistic analysis underlying a writing system may be quite naïve but may also encode sophisticated phonemic and/or morphemic insights.” If anything, that’s too weak: in most writing systems I’m aware of, the writing system is either a precise phonemic analysis (possibly omitting a few details of low functional load, or using digraphs to get around limitations of the alphabet of choice) or a precise morphophonemic analysis (ditto). For Sapir (1925 et seq.) this was key evidence for the existence of phonemes! So whether or not implicit “phonemes” are better than explicit ones, speech technologists have converged on the same rational, mentalistic notions discovered by Polish linguists a century and a half ago.

So it is surprising to me that even those schooled in the art of speech processing view the contribution of linguistics to the field in a somewhat negative light. For instance, Paul Taylor, the founder of the TTS firm Phonetic Arts, published a Cambridge University Press textbook on TTS methods in 2009, and while it’s by now quite out of date, there’s no more-recent work of comparable breadth. Taylor spends the first five hundred (!) pages or so talking about linguistic phenomena like phonemes, allophones, prosodic phrases, and pitch accents—at the time, the state of the art in synthesis made use of explicit phonological representations—so it is genuinely a shock to me that Taylor chose to close the book with a chapter (Taylor 2009: ch. 18) about the irrelevance of linguistics. Here are a few choice quotes, with my commentary.

It is widely acknowledged that researchers in the field of speech technology and linguistics do not in general work together. (p. 533)

It may be “acknowledged”, but I don’t think it has ever been true. The number of linguists and linguistically-trained engineers working on FAANG speech products every day is huge. (Modern corporate “AI” is to a great degree just other people, mostly contractors in the Global South.) Taylor continues:

The first stated reason for this gap is the “aeroplanes don’t flap their wings” argument. The implication of this statement is that, even if we had a complete knowledge of how human language worked, it would not help us greatly because we are trying to develop these processes in machines, which have a fundamentally different architecture. (p. 533)

I do not expect that linguistics will provide deep insights about how to build TTS systems, but it clearly identified the relevant representational units for building such systems many decades ahead of time, just as mechanics provided the basis for mechanical engineering. This was true of Kempelen’s speaking machine (which predates phonemic theory, and so had to discover something like it) and Dudley’s Voder, as well as of speech synthesizers in the digital age. So I guess I kind of think that speech synthesizers do flap their wings: parametric, unit-selection, hybrid, and neural synthesizers are all big fat phoneme-realization machines. As is standard practice in the physical sciences, the simple elementary particles of phonological theory—phonemes, and perhaps features—were discovered quite early on, but the study of their ontology has taken up the intervening decades. And unlike in the physical sciences, we cognitive scientists must some day also understand their epistemology (what Chomsky calls “Plato’s problem”) and, ultimately, their evolutionary history (“Darwin’s problem”). Taylor, as an engineer, need not worry himself about these further studies, but I think he is being wildly uncharitable about the nature of what he’s studying, and about the business value of having a well-defined hypothesis space of representations for his team to engineer around in.

Taylor’s argument wouldn’t be complete without a caricature of the generative enterprise:

The most-famous camp of all is the Chomskian [sic] camp, started of course by Noam Chomsky, which advocates a very particular approach. Here data are not used in any explicit sense, quantitative experiments are not performed and little stress is put on explicit description of the theories advocated. (p. 534)

This is nonsense. Linguistic examples are data, in some cases better data than results from corpora or behavioral studies, as the work of Sprouse and colleagues has shown. No era of generativism was actively hostile to behavioral results; as early as the ’60s, generativist-aligned psycholinguists were experimentally testing the derivational theory of complexity and studying morphological decomposition in the lab. And I simply have never found that generativist theorizing lacks for formal explicitness; in phonology, for instance, the major alternatives to generativist thinking are exemplar theory—which isn’t even explicit enough to be wrong—and a sort of neo-connectionism—which ought not to work at all given extensive proof-theoretic studies of formal learnability and the formal properties of stochastic gradient descent and backpropagation. Taylor goes on to suggest that the “curse of dimensionality” and issues of generalizability prevent the application of linguistic theory. Once again, though, the things we’re trying to represent are linguistic notions: machine learning using “features” or “phonemes”, explicit or implicit, is still linguistics.

Taylor concludes with some future predictions about how he hopes TTS research will evolve. His first is that textual analysis techniques from NLP will become increasingly important. Here the future has been kind to him: they are, but as the work of Sproat and colleagues has shown, we remain quite dependent on linguistic expertise—of a rather different and less abstract sort than the notion of the phoneme—to develop these systems.

References

Sapir, E. 1925. Sound patterns in language. Language 1:37-51.
Taylor, P. 2009. Text-to-Speech Synthesis. Cambridge University Press.