The alternation phonotactic hypothesis

The hypothesis

In a recent handout, I discuss the following hypothesis, implicit in my dissertation (Gorman 2013):

(1) Alternation Phonotactic Hypothesis: Let A, B, C, and D be (possibly-null) string sets. Then, if a grammar G contains a surface-true rule of alternation A → B / C __ D, nonce words containing the subsequence CAD are ill-formed for speakers of G.

Before I continue, note that this definition is “phenomenological” in the sense that it refers to two notions—alternations and surface-trueness—which are not generally considered to be encoded directly in the grammar. Regarding the notion of alternations, it is not difficult to formalize whether or not a rule is alternating.

(2) Let a rule be defined by possibly-null string sets A, B, C, and D as in (1). Then if any elements of B are phonemes, then the rule is a rule of alternation.

(3) [ditto] If no elements of B are phonemes, then the rule is a rule of (pure) allophony.

But from the argument against bi-uniqueness in The Sound Pattern of Russian (Halle 1959), it follows that we should reject a grammar-internal distinction between rules of alternation and rules of allophony, and subsequent theory provides no way to encode this distinction in the grammar. Similarly, it is not hard to define what it means for a rule to be surface-true.

(4) [ditto] If no instances of CAD are generated by the grammar G, then the rule is surface-true.

But there does not seem to be much reason for that notion to be encoded in the grammar, and the theory does not provide any way to encode it.1 Note further that I am deliberately stating in (1) that a constraint against CAD has been “projected” from the alternation, rather than treating such constraints as autonomous entities of the theory, as is done in Optimality Theory (OT) and friends. Finally, I have phrased this in terms of grammaticality (“are ill-formed”) rather than acceptability.

Why might the Alternation Phonotactic Hypothesis (henceforth, APH) be true? First, I take it as obvious that alternations are more entrenched facts about grammars than pure allophony. For instance, in English, stop aspiration could be governed by a rule of allophony, but it is also plausible that English speakers simply represent aspirated stops as such in their lexical entries, since there are no aspiration alternations. This point was made separately by Dell (1973) and Stampe (1973), and motivates the notion of lexicon optimization in OT. In contrast, rules of alternation (or something like them) are actually necessary to obtain the proper surface forms. An English speaker who does not have a rule of obstruent voice assimilation will simply not produce the right allomorphs of various affixes. By contrast, the same speaker need not encode a process of nasalization—which in English is clearly allophonic (see, e.g., Kager 1999: 31f.)—to obtain the correct outputs. Given that alternations are entrenched in the relevant sense, it is not impossible to imagine that speakers might “project” constraints out of alternation generalizations in the manner described above. Such constraints could be used during online processing, assuming a strong isomorphism between the grammatical representations used during production and perception.2 Second, since not all alternations are surface-true, it seems reasonable to limit this process of projection to those which are. Were one to project non-surface-true constraints in this fashion, the speaker would find themselves in the awkward position of judging actual words to be ill-formed.3,4
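A toy sketch of this projection process might run as follows. Everything here is invented for illustration: rules are reduced to their structural descriptions (C, A, D), strings are plain character sequences, and well-formedness is mere substring matching.

```python
# Toy illustration of constraint projection under the APH (all data invented).
# A rule A -> B / C __ D is represented by the triple (c, a, d); only the
# structural description CAD matters for projecting the constraint *CAD.

def is_surface_true(c: str, a: str, d: str, lexicon: set[str]) -> bool:
    """A rule is surface-true if no surface form contains CAD (cf. (4))."""
    cad = c + a + d
    return not any(cad in word for word in lexicon)

def project_constraints(rules, lexicon):
    """Project a *CAD constraint from each surface-true rule of alternation."""
    return {c + a + d for (c, a, d) in rules if is_surface_true(c, a, d, lexicon)}

def is_well_formed(nonce: str, constraints: set[str]) -> bool:
    """Under the APH, a nonce word containing any projected CAD is ill-formed."""
    return not any(cad in nonce for cad in constraints)

# Invented example: a voicing rule s -> z / __ d, so CAD is "sd".
lexicon = {"zda", "azda", "tasa"}           # no surface "sd": rule is surface-true
constraints = project_constraints([("", "s", "d")], lexicon)
print(is_well_formed("asda", constraints))  # False: *sd
print(is_well_formed("azda", constraints))  # True
```

The point of the sketch is simply that, given the definitions in (2)–(4), the projection step is mechanical once one fixes a representation for rules and lexica.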

The APH contrasts interestingly with the following:

(5) Lexicostatistic Phonotactic Hypothesis: Let A, C, and D be (possibly-null) string sets. Then, if CAD is statistically underrepresented (in a sense to be determined) in the lexicon L of a grammar G, nonce words containing the subsequence CAD are ill-formed for speakers of G.

According to the LSPH (as we’ll call it), phonotactic knowledge is projected not from alternations but from statistical analysis of the lexicon. The LSPH is at least implicit in the robust cottage industry which uses statistical and/or computational modeling of the lexicon to infer the existence of phonotactic generalizations. It is notable that virtually none of this work discusses anything like the APH. Finally, one should note that the APH and the LSPH do not exhaust the set of possibilities. For instance, Berent et al. (2007) and Daland et al. (2011) test for effects of the Sonority Sequencing Principle, a putative linguistic universal, on wordlikeness judgments. And some have denied the very existence of phonotactic constraints.
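One common way to make “statistically underrepresented” precise (though the LSPH as stated deliberately leaves this open) is an observed/expected ratio computed over the lexicon. A minimal sketch for two-symbol sequences, with an invented toy lexicon:

```python
# Toy observed/expected (O/E) score over a lexicon (all data invented).
from collections import Counter

def observed_expected(seq: str, lexicon: list[str]) -> float:
    """O/E for a two-symbol sequence: the observed bigram count divided by
    the count expected if the two symbols combined at chance rates."""
    assert len(seq) == 2
    bigrams = Counter(b for w in lexicon for b in zip(w, w[1:]))
    unigrams = Counter(ch for w in lexicon for ch in w)
    p_first = unigrams[seq[0]] / sum(unigrams.values())
    p_second = unigrams[seq[1]] / sum(unigrams.values())
    expected = p_first * p_second * sum(bigrams.values())
    return bigrams[(seq[0], seq[1])] / expected if expected else float("nan")

lexicon = ["tata", "kata", "taka"]
print(observed_expected("ta", lexicon))  # > 1: overrepresented
print(observed_expected("tk", lexicon))  # 0.0: unattested
```

Under the LSPH, a sequence with a sufficiently low O/E ratio (and in particular an unattested one) would render nonce words containing it ill-formed, regardless of whether any alternation implicates it.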

Gorman (2013) reviews some prior results that argue in favor of the APH; I describe these below.

Consider the putative English phonotactic constraint *V̄ʃ#, a constraint against word-final sequences of tense vowels followed by [ʃ], proposed by Iverson & Salmons (2005). Exceptions to this generalization tend to be markedly foreign (e.g., cartouche), to be proper names (e.g., LaRouche), or to convey an “affective, onomatopoeic quality” (e.g., sheesh, woosh). As Gorman (2013: 43f.) notes, this constraint is statistically robust, but Hayes & White (2013) report that it has no measurable effect on English speakers’ wordlikeness judgments. In contrast, three English alternation rules (nasal place assimilation, obstruent voice assimilation, and degemination) have a substantial impact on wordlikeness judgments (Gorman 2013, ch. 4).

A second, more elaborate example comes from Turkish. Lees (1966a,b) proposes three phonotactic constraints in this language: backness harmony, roundness harmony, and labial attraction. All three of these constraints have exceptions, but Gorman (2013: 57-60) shows that they are statistically robust generalizations. Thus, under the LSPH, speakers ought to be sensitive to all three; under the APH, in contrast, speakers should be sensitive only to those constraints projected from surface-true alternations.

Endnotes

  1. I note that the CONTROL module proposed by Orgun & Sprouse (1999) might be a mechanism by which this information could be encoded.
  2. Some evidence that phonotactic knowledge is deployed in production comes from the study of Finnish and Turkish, both of which have robust vowel harmony. Suomi et al. (1997) and Vroomen et al. (1998) find that disharmony seemingly acts as a cue for word boundaries in Finnish, and Kabak et al. (2010) find something similar for Turkish, but not in French, which lacks harmony.
  3. Durvasula & Kahng (2019) find that speakers do not necessarily judge a nonce word to be ill-formed just because it fails to follow certain subtle allophonic generalizations, which suggests that the distinction between allophony and alternation may be important here.
  4. I note that it has sometimes been proposed that actual words of G may in fact be gradiently marked or otherwise degraded with respect to the grammar G if they violate phonotactic constraints projected from G (e.g., Coetzee 2008). However, the null hypothesis, it seems to me, is that all actual words are also possible words, and so it does not make sense to speak of actual words as marked or ill-formed, gradiently or otherwise.

References

Berent, I., Steriade, D., Lennertz, T., and Vaknin, V. 2007. What we know about what we have never heard: evidence from perceptual illusions. Cognition 104: 591-630.
Coetzee, A. W. 2008. Grammaticality and ungrammaticality in phonology. Language 84(2): 218-257. [I critique this briefly in Gorman 2013, p. 4f.]
Daland, R., Hayes, B., White, J., Garellek, M., Davis, A., and Norrmann, I. 2011. Explaining sonority projection effects. Phonology 28: 197-234.
Dell, F. 1973. Les règles et les sons. Hermann.
Durvasula, K. and Kahng, J. 2019. Phonological acceptability is not isomorphic with phonological grammaticality of stimulus. Talk presented at the Annual Meeting on Phonology.
Gorman, K. 2013. Generative phonotactics. Doctoral dissertation, University of Pennsylvania.
Halle, M. 1959. The Sound Pattern of Russian. Mouton.
Hayes, B. and White, J. 2013. Phonological naturalness and phonotactic learning. Linguistic Inquiry 44: 45-75.
Iverson, G. K. and Salmons, J. C. 2005. Filling the gap: English tense vowel plus final /š/. Journal of English Linguistics 33: 1-15.
Kabak, B., Maniwa, K., and Kazanina, N. 2010. Listeners use vowel harmony and word-final stress to spot nonsense words: a study of Turkish and French. Laboratory Phonology 1: 207-224.
Kager, R. 1999. Optimality Theory. Cambridge University Press.
Lees, R. B. 1966a. On the interpretation of a Turkish vowel alternation. Anthropological Linguistics 8: 32-39.
Lees, R. B. 1966b. Turkish harmony and the description of assimilation. Türk Dili Araştırmaları Yıllığı Belleten 1966: 279-297.
Orgun, C. O. and Sprouse, R. 1999. From MPARSE to CONTROL: deriving ungrammaticality. Phonology 16: 191-224.
Stampe, D. 1973. A Dissertation on Natural Phonology. Garland. [I don’t have this in front of me but if I remember correctly, Stampe argues non-surface true phonological rules are essentially second-class citizens.]
Suomi, K. McQueen, J. M., and Cutler, A. 1997. Vowel harmony and speech segmentation in Finnish. Journal of Memory and Language 36: 422-444.
Vroomen, J., Tuomainen, J. and de Gelder, B. 1998. The roles of word stress and vowel harmony in speech segmentation. Journal of Memory and Language 38: 133-149.

Logistic regression as the bare minimum. Or, Against naïve Bayes

When I teach introductory machine learning, I begin with (categorical) naïve Bayes classifiers. These are arguably the simplest possible supervised machine learning model, and they can be explained quickly to anyone who understands probability and the method of maximum likelihood estimation. I then pivot and introduce logistic regression and its various forms. Ng and Jordan (2002) provide a nice discussion of how the two relate, and I encourage students to read their study.

Logistic regression is a more powerful technique than naïve Bayes. First, it is “easier” in some sense (Breiman 2001) to estimate the conditional distribution, as one does in logistic regression, than to model the joint distribution, as one does in naïve Bayes. Second, logistic regression can be learned using standard (online) stochastic gradient descent methods. Finally, it naturally supports the conventional regularization strategies needed to avoid overfitting. For these reasons, in 2022, I consider regularized logistic regression the bare minimum supervised learning method, the least sophisticated method that is possibly good enough. The pedagogical problem I then face is trying to convince students not to use naïve Bayes: it is obsolete, in the sense that it is virtually always inferior to regularized logistic regression, and tools like scikit-learn (Pedregosa et al. 2011) make it almost trivial to swap one machine learning method for the other.
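To illustrate just how trivial the swap is, here is a minimal sketch using scikit-learn’s shared fit/score interface. The dataset is synthetic, and Gaussian naïve Bayes stands in for the categorical variant since the toy features here are continuous:

```python
# Swapping naive Bayes for regularized logistic regression in scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# A synthetic binary classification problem, purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Because both estimators share the same interface, the "swap" is one line.
for model in (GaussianNB(), LogisticRegression(C=1.0)):  # C: inverse L2 strength
    accuracy = model.fit(X_train, y_train).score(X_test, y_test)
    print(f"{type(model).__name__}: {accuracy:.4f}")
```

On real data the gap between the two will of course vary, but the cost of trying the regularized model is, as the loop shows, essentially nil.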

References

Breiman, Leo. 2001. Statistical modeling: the two cultures. Statistical Science 16:199-231.
Ng, Andrew Y., and Michael I. Jordan. 2002. On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. In Proceedings of NeurIPS, pages 841-848.
Pedregosa, Fabian, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, …, and Édouard Duchesnay. 2011. Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12:2825-2830.

On “alternative” grammar formalisms

A common suggestion to graduate students in linguistics (computational or otherwise) is to study “alternative” grammar formalisms [not my term-KBG]. The implication is that the student is only familiar with formal grammars inspired by the supposedly-hegemonic generativist tradition—though it is not clear whether we’re talking about the GB-lite of the Penn Treebank, the minimalist grammars (MGs) of Ed Stabler, or perhaps something else—and that the set of “alternatives” includes lexical-functional grammars (LFGs), tree-adjoining grammars (TAGs), combinatory categorial grammars (CCGs), head-driven phrase structure grammar (HPSG), or one of the various forms of construction grammar. I would never say that students should study less rather than more, but I am not convinced that this diversity of formalisms is key to training well-rounded students. TAGs and CCGs are known to be strongly equivalent (Schiffer & Maletti 2021), and the major unification-based grammar systems (which include CCG, HPSG, and formalized versions of construction grammar) are equivalent to MGs. I speculate that we should perhaps be emphasizing similarities rather than differences, insofar as those differences are not reflected in relative generative capacity.

Another useful way to determine the relative utility of alternative formalisms is to look at their actual use in wide-coverage computational grammars, since, as Chomsky (1981: 6) says, it is possible to put systems to the test “only to the extent that we have grammatical descriptions that are reasonably compelling in some domain…”. Put another way, grammar frameworks both hegemonic and alternative can be assessed for coverage (which can be extensive in some languages and domains) and general utility, rather than for the often-spicy rhetoric of their proponents.

Finally, it is at least possible that some alternative frameworks are simply losers of a multi-agent coordination game and at least some consolidation is desirable.

References

Chomsky, N. 1981. Lectures on Government and Binding. Foris.
Schiffer, L. K. and Maletti, A. 2021. Strong equivalence of TAG and CCG. Transactions of the Association for Computational Linguistics 9: 707-720.

Academic reviewing in NLP

It is obvious to me that NLP researchers are, on average, submitting manuscripts far earlier and more often than they ought to. The average manuscript I review is typo-laden, full of figures and tables that are far too small to read or that intrude on the margins, and equipped with an unusable bibliography that the authors have clearly never inspected. Sometimes I receive manuscripts whose actual titles are transparently ungrammatical.

There are several reasons this is bad, but most of all it is a waste of reviewer time, since the reviewers have to point out (in triplicate or worse) minor issues that would have been flagged by proof-readers, advisors, or colleagues, were they involved before submission. Then, once these issues are corrected, the reviewers are again asked to read the paper and confirm they have been addressed. This is work the authors could have done, but which instead is pushed onto committees of unpaid volunteers.

The second issue is that the reviewer pool lacks relevant experience. I am regularly tasked with “meta-reviewing”, or critically summarizing the reviews. This is necessary in part because many, perhaps a majority, of the reviewers simply do not know how to review an academic paper, having not received instruction on this topic from their advisors or mentors, and their comments need to be recast in language that can be quickly understood by conference program committees.

[Moving from general to specific.]

I have recently been asked to review an uncommonly large collection of papers on the topic of prompt engineering. Several years ago, it became apparent that neural network language models, trained on enormous amounts of text data, could often provide locally coherent (though rarely globally coherent) responses to prompts or queries. The parade example of this type of model is GPT-2. For instance, if the prompt was:

Malfoy hadn’t noticed anything.

the model might continue:

“In that case,” said Harry, after thinking it over, “I suggest you return to the library.”

I assume this is because there’s fan fiction in the corpus, but I don’t really know. Now it goes without saying that at no point will Facebook, say, launch a product in which a gigantic neural network is allowed to regurgitate Harry Potter fan fiction (!) at its users. However, researchers persist, for some reason (perhaps novelty), in trying to “engineer” clever prompts that produce subjectively “good” responses, rather than attempting to understand how any of this works. (It is not an overstatement to say that we have little idea why neural networks, and the methods we use to train them in particular, work at all.) What am I to do when asked to meta-review papers like this? I try to remain collegial, but I’m not sure this kind of work ought to exist at all. I consider GPT-2 a billionaire plaything, and a rather wasteful one at that, and it is hard for me to see how this line of work might make the world a better place.

Is linguistics “unusually vituperative”?

The picture of linguistics one can get from books like The Linguistics Wars (Harris 1993) and press coverage of l’affaire du Pirahã suggests it is a quite nasty sort of field, full of hate and invective. Is linguistics really, as an engineer colleague would have it, “unusually vituperative”?

In my opinion it is not, for I object to the modifier unusually. Indeed, while such stories rarely make the nightly news, the sciences have never been without a hefty dose of vituperation. For instance, anthropologist Napoleon Chagnon was accused, slanderously and at book length, of causing a measles epidemic among indigenous peoples of the Amazon. And entomologist E. O. Wilson had a pitcher of water poured on his head at a lecture because, according to a lone audience member, his research on ants implied support for eugenics. And even the gentlemanly Darwin was not above keeping an ill-tempered bulldog.

References

Harris, R. A. 1993. The Linguistics Wars: Chomsky, Lakoff, and the Battle over Deep Structure. Oxford University Press. [I don’t recommend this book: Harris, instead of explaining the issues at stake, focuses on “horse race” coverage, quoting extensively from interviews with America’s grumpiest octogenarians.]

The 24th century Universal Translator is unsupervised and requires minimal resources

The Star Trek: Deep Space Nine episode “Sanctuary” pretty clearly establishes that by the 24th century, the Star Trek universe’s Universal Translator works in an unsupervised fashion and requires only a (what we in the real 21st century would consider) minimal monolingual corpus and a few hours of processing to translate Skrreean, a language new to Starfleet and friends. Free paper idea: how do the Universal Translator’s capabilities (in the 22nd through the 24th centuries, from Enterprise to the original series to the 24th-century shows) map onto known terms of art in machine translation in our universe?

On being scooped

Some of my colleagues have over the years expressed concern that their ongoing projects are in danger of being “scooped”, and that, as a result, they need to work rapidly to disseminate the projects in question. This concern is particularly prominent in the fast-moving (and unusually cargo-cultish) natural language processing community, though I have occasionally heard similar concerns in the company of theoretical linguists. Assuming this is not merely hysteria caused by material conditions like casualization and pandemic-related isolation, there is a simple solution: work on something else, something you yourself deem to be less obvious. If you’re in danger of being scooped, it suggests that you’re taking obvious next steps—that you’re engaging in what Kuhn calls normal science—and that you lack a competitive advantage (such as rare-on-the-ground expertise, special knowledge, or proprietary or unreleased data) that would help you in particular advance the state of knowledge. If you find yourself in this predicament, you should consider allowing somebody else to carry the football across the goal line. Or don’t, but then you might just get scooped after all.

How to write linguistic examples

There is a standard, well-designed way in which linguists write examples, and failure to use it in a paper about language is a strong shibboleth suggesting unfamiliarity with linguistics as a field. In brief, it is as follows:

  • When an example (affix, word, phrase, or sentence) appears in the body (i.e., the middle of a sentence):
    • if written in Roman, it should be italicized.
    • if written in a non-Roman but alphabetic script like Cyrillic, italicization is optional. (Cyrillic italics are, like the Russian cursive hand they’re based on, famously hard for Western amateurs like myself to read.)
    • if written in a non-alphabetic script, it can just be written as is, though you’re welcome to experiment.
    • Examples should never be underlined, bolded, or placed in single or double quotes, regardless of the script used.
  • When an example is set off from the body (i.e., as a numbered example or in a table), it need not be italicized.
  • Any non-English example should be immediately followed with a gloss.
    • A gloss should always be single-quoted.
    • Don’t intersperse words like “meaning”, as in “…kitab meaning ‘book’…”, just write “…kitab ‘book’…”
  • If using morph-by-morph or word-by-word glossing, follow the Leipzig glossing conventions.

How to write numbers

A lot of students (and increasingly, given how young the field of NLP is, established researchers too) don’t know how to write numbers in papers. Here are a few basic principles (some of these are loosely based on the APA guidelines):

  • Use the same number of decimals every time and don’t omit trailing zeros after the decimal. Thus “.50” or “.5000” and not “.5”.
  • Round to a small number of decimals: 2, 4, or 6 are all standard choices.
  • Omit leading zeros before the decimal if possible values of whatever quantity are always within [0, 1], thus you might say you got “.9823” accuracy.
  • (For LaTeX users) put the minus sign in math mode, too, or it’ll appear as a hyphen (ASCII char 45), which is quite a bit shorter and just looks wrong.
  • Use commas to separate the hundreds and thousands place (etc.) in large integers, and try not to use too many large exact integers; rounding is fine once they get large.
  • Expressions like “3k”, “1.3m” and “2b” are too informal; just write “3,000”, “1.3 million”, and “2 billion”.
  • Many evaluation metrics can either be written as (pseudo-)probabilities or percentages. Pick one or the other format and stick with it.
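Several of the conventions above can be applied mechanically. A small Python illustration, with invented numbers (the format specifiers do all the work):

```python
# Illustrative number formatting following the conventions above.
accuracy = 0.98234
n_tokens = 1234567

print(f"{accuracy:.4f}".lstrip("0"))  # ".9823": fixed decimals, no leading zero
print(f"{n_tokens:,}")                # "1,234,567": comma-separated integer
print(f"{100 * accuracy:.2f}%")       # "98.23%": the percentage alternative
```

Baking the formatting into code like this also guarantees the same number of decimals every time, which is hard to maintain by hand.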

A few other points about tables with numbers (looking at you LaTeX users):

  • Right-align numbers in tables.
  • Don’t put two numbers (like mean and standard deviation or a range) in a single cell; the alignment will be all wrong. Just use more cells and tweak the intercolumnar spacing. 
  • Don’t make the text of your tables smaller than the body text, which makes the table hard to read. Just redesign the table instead.
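For LaTeX users, a minimal tabular sketch pulling these points together (right-aligned number columns, separate cells for mean and standard deviation, math-mode minus signs); the numbers themselves are invented:

```latex
% Numbers right-aligned; mean and standard deviation in separate cells;
% minus signs in math mode so they render as minus signs, not hyphens.
\begin{tabular}{l r r}
\hline
Model & Mean      & SD       \\
\hline
A     & $-0.5012$ & $0.0210$ \\
B     & $0.4987$  & $0.0198$ \\
\hline
\end{tabular}
```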