On conspiracies

Kisseberth (1970) introduces the notion of conspiracies, cases in which a series of phonological rules in a single language “conspire” to create similar output configurations. Supposedly, Haj Ross chose the term “conspiracy”, and it is perhaps not an accident that the term he chose immediately reminds one of conspiracy theory, which has a strong negative connotation implying that the existence of the conspiracy cannot be proven. Kisseberth’s discovery of conspiracies motivated the rise of Optimality Theory (OT) two decades later—Prince & Smolensky (1993:1) refer to conspiracies as a “conceptual crisis” at the heart of phonological theory, and Zuraw (2003) explicitly links Kisseberth’s data to OT—but curiously, it seemingly had little effect on the phonological theorizing of its own era. (A positivist might say that the theoretical technology needed to encode conspiratorial thinking simply did not exist at the time; a cynic might say that contemporaries did not take Kisseberth’s conspiratorial thinking seriously until it became easy to do so.) I discern two major objections to the logic of conspiracies, the evolutionary argument and the prosodic argument, which I’ll briefly review.

The evolutionary argument

What I am calling the evolutionary argument was first made by Kiparsky (1973:75f.) and is presented as an argument against OT by Hale & Reiss (2008:14). Roughly, if a series of rules leads to the same set of output configurations, those rules must be surface-true, or they would not contribute to the putative conspiracy. Since surface-true rules are assumed to be easy to learn, especially relative to opaque rules, which are assumed to be difficult to learn, and since failure to learn rules contributes to language change, grammars will naturally accumulate functionally related surface-true rules. I think we should question the assumption (au courant in 1973) that opacity is the sole determinant of how difficult a rule is to acquire, but otherwise I find this basic logic sound.

The prosodic argument

At the time Kisseberth was writing, standard phonological theory included few prosodic primitives; even the notion of the syllable was considered dubious. Subsequent revisions of the theory have introduced rich hierarchies of prosodic primitives. In particular, a subsequent generation of phonologists hypothesized that speakers “build” or “parse” sequences of segments into onsets and rimes, syllables, and feet, with repairs such as stray erasure (i.e., deletion) of unsyllabified segmental material, or epenthesis, used to resolve conflicts (McCarthy 1979, Steriade 1982, Itô 1986). It seems to me that this approach accounts for most of the facts of Yowlumne (formerly Yawelmani) reviewed by Kisseberth in his study:

  1. there are no word-initial CC clusters
  2. there are no word-final CC clusters
  3. derived CCCs are resolved either by deletion or i-epenthesis
  4. there are no CCC clusters in underlying form

The relevant observation linking all these facts is simply that Yowlumne does not permit branching onsets or codas; more specifically, Yowlumne’s syllable-parsing algorithm does not build branching onsets or codas. This immediately accounts for facts #1-2. Assuming the logic of McCarthy and contemporaries, #3 is also unsurprising: these clusters simply cannot be realized faithfully, and the fact that there are multiple resolutions for the *CCC pathology is beside the point. And finally, adopting the logic that Prince & Smolensky (1993:54) were later to call Stampean occultation, the absence of underlying CCC clusters follows from their inability to surface, since the generalizations in question are all surface-true. (Here we are treading close to Kiparsky’s thoughts on the matter too.) Crucially, the analysis given above does not reify any surface constraints; the facts all follow from the feed-forward derivational structure of the prosodically informed phonological theory current a decade before Prince & Smolensky.
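To make the structure-building intuition concrete, here is a toy sketch of such a feed-forward parser. It is not a serious analysis of Yowlumne: it greedily builds CV(C) syllables, never branching onsets or codas, and rescues stranded consonants with i-epenthesis (stray erasure would be the other available repair); the segment inventory is an illustrative ASCII stand-in.

```python
VOWELS = set("aeiou")

def syllabify(segments: str) -> list[str]:
    """Greedily parses CV(C) syllables, never building branching
    onsets or codas; stranded consonants are repaired by i-epenthesis."""
    syllables = []
    i, n = 0, len(segments)
    while i < n:
        syllable = ""
        # At most one consonant in the onset: no branching onsets.
        if segments[i] not in VOWELS:
            syllable += segments[i]
            i += 1
        # Nucleus: a vowel, if one is available...
        if i < n and segments[i] in VOWELS:
            syllable += segments[i]
            i += 1
        # ...otherwise, rescue the stranded consonant by epenthesis.
        else:
            syllable += "i"
        # At most one coda consonant, and only if the next consonant
        # could not instead serve as the following syllable's onset.
        if (i < n and segments[i] not in VOWELS
                and (i + 1 == n or segments[i + 1] not in VOWELS)):
            syllable += segments[i]
            i += 1
        syllables.append(syllable)
    return syllables

print(syllabify("taksta"))  # ['tak', 'si', 'ta']: derived CCC repaired
print(syllabify("pakt"))    # ['pak', 'ti']: no word-final CC surfaces
```

Note that no *CCC constraint is stated anywhere; the gap falls out of how the parser is built.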

Conclusion

While Prince & Smolensky are right to say that OT provides a principled solution to Kisseberth’s conspiracies, researchers in the ’70s and ’80s treated conspiracies as epiphenomena of acquisition (Kiparsky) or of prosodic structure-building (McCarthy and contemporaries). Perhaps, then, OT does not deserve credit for solving an unsolved problem in this regard. Of course, it remains to be seen whether the many implicit conjectures in these two objections can be sustained.

References

Hale, M. and Reiss, C. 2008. The Phonological Enterprise. Oxford University Press.
Itô, J. 1986. Syllable theory in prosodic phonology. Doctoral dissertation, University of Massachusetts, Amherst. Published by Garland Publishers, 1988.
Kiparsky, P. 1973. Phonological representations. In O. Fujimura (ed.), Three Dimensions of Linguistic Theory, pages 1-135. TEC Corporation.
Kisseberth, C. W. 1970. On the functional unity of phonological rules. Linguistic Inquiry 1(3): 291-306.
McCarthy, J. 1979. Formal problems in Semitic phonology and morphology. Doctoral dissertation, MIT. Published by Garland Publishers, 1985.
Prince, A., and Smolensky, P. 1993. Optimality Theory: constraint interaction in generative grammar. Rutgers Center for Cognitive Science Technical Report TR-2.
Steriade, D. 1982. Greek prosodies and the nature of syllabification. Doctoral dissertation, MIT.
Zuraw, K. 2003. Optimality Theory in linguistics. In M. Arbib (ed.), Handbook of Brain Theory and Neural Networks, pages 819-822. 2nd edition. MIT Press.

On the Germanic *tl gap

One “parochial” constraint in Germanic is the absence of branching onsets consisting of a coronal stop followed by /l/. Thus /pl, bl, kl, gl/ are all common in Germanic, but *tl and *dl are not. It is difficult to understand what might give rise to this phonotactic gap.

Blevins & Grawunder (2009), henceforth B&G, note that in portions of Saxony and points south, *kl has in fact shifted to [tl] and *gl to [dl]. This sound change has been noted in passing by several linguists, going back to at least the 19th century. This change has the hallmarks of a change from below: it does not appear to be subject to social evaluation and is not subject to “correction” in careful speech styles. B&G also note that many varieties of English have undergone this change; according to Wright, it could be found in parts of east Yorkshire. Similarly, no social stigma seems to have attached to this pronunciation, and B&G suggest it may have even made its way into American English. B&G argue that since it has occurred at least twice, KL > TL is a natural sound change in the relevant sense.

Of particular interest to me is B&G’s claim that one structural factor supporting *KL > TL is the absence of TL in Germanic before this change: in all known instances of *KL > TL, the preceding stage of the language lacked (contrastive) TL. Many linguists have argued that TL is universally marked and that its absence in Germanic is a structural gap in the relevant sense, but this claim does not seem to be borne out by quantitative typology across a wide range of language families.

Other phonotactic gaps, even statistically robust ones, are filled with similar ease. I submit that evidence of this sort suggests that phonologists habitually overestimate the “structural” nature of phonotactic gaps.

References

Blevins, J. and Grawunder, S. 2009. *KL > TL sound change in Germanic and elsewhere: descriptions, explanations, and implications. Linguistic Typology 13: 267-303.

The curious case of -pilled

A correspondent asks whether –pilled is a libfix. I note grillpilled (when you stop caring about politics and focus on cooking meat outdoors) and catpilled (when you get toxoplasmosis). While writing this, I was wondering whether anyone has declared themselves tennispilled; yes, someone has.

The etymology of -pilled seems clear enough. The phrase taking the {blue, red} pill, from that scene in The Matrix (1999), gave rise to the idiomatic compounds blue pill and red pill. These then underwent zero derivation, giving us bluepilled and (especially) redpilled. The most common syntactic function of these two words seems to be as a sort of perfective adjective, possibly with an agentive by-phrase (e.g., “I was redpilled by Donald Trump Jr.’s IG”), but I also recognize a construction in which the agent has been promoted to subject position and the object is the beneficiary (e.g., “Donald Trump Jr.’s IG redpilled me”).

The thing, though, is that –pilled derives from two idiomatic compounds and still has the form of an English past participle. There is no clear evidence of recutting, just a new reading for the zero-derived pill plus the past participle marker –ed. It is thus much like other not-exactly-libfixes such as –core (< hardcore) and –gate (< Watergate), in my estimation.

On expanding acronyms

Student writers are often taught that acronyms should also be given in expanded form on first use. While this is a good rule of thumb in my opinion, there is an exception for any acronym whose expansion the author believes to be misleading about its referent, particularly when the acronym in question seems to have been coined after the fact and purely for the creator’s amusement.

“Many such cases.”

An author-date citation may be preferable to spelling out the silly acronym.

The role of phonotactics in language change

How does phonotactic knowledge influence the path taken by language change? As is often the case, the null hypothesis seems to be simply that it doesn’t. Perhaps speakers have projected a phonotactic constraint C into the grammar of Old English, but that doesn’t necessarily mean that Middle English will conform to C, or even that Middle English won’t freely borrow words that flagrantly violate C.

One case comes from the history of English. As is well known, modern English /ʃ/ descends from Old English sk; modern instances of word-initial sk are mostly borrowed from Dutch (e.g., skipper) or Norse (e.g., ski); sky was borrowed from an Old Norse word meaning ‘cloud’ (which tells you a lot about the weather in the Danelaw). Furthermore, Old English forbade super-heavy rimes consisting of a long vowel plus a consonant cluster. Because the one major source for /ʃ/ was sk, and because a word-final long vowel followed by sk was unheard of, V̄ʃ# was rare in Middle English, and word-final sequences of tense vowels followed by [ʃ] are still rare in Modern English (Iverson & Salmons 2005). Of course there are exceptions, but according to Iverson & Salmons, they tend to:

  • be markedly foreign (e.g., cartouche),
  • be proper names (e.g., LaRouche),
  • or convey an “affective, onomatopoeic quality” (e.g., sheesh, woosh).

However, it is reasonably clear that all of these were added during the Middle or Modern English period. Clearly, this constraint, which is still statistically robust (Gorman 2014a:85), did not prevent speakers from borrowing and coining exceptions to it. Still, it is hard to rule out any historical effect of the constraint: perhaps there would be more Modern English V̄ʃ# words otherwise.

Another case of interest comes from Latin. As is well known, Old Latin went through a near-exceptionless “Neogrammarian” sound change, a “primary split” or “conditioned merger” of intervocalic s with r; a toy sketch of the rule is given after the loanword lists below. (The terminus ante quem, i.e., the latest possible date, for the actuation of this change is the 4th c. BCE.) This change had the effect of temporarily eliminating all traces of intervocalic s in late Old Latin (Gorman 2014b). From this fact, one might posit that speakers of this era of Latin might project a *VsV constraint, and that this would prevent subsequent sound changes from reintroducing intervocalic s. But this is clearly not the case: in the 1st c. BCE, degemination of ss after diphthongs and long monophthongs reintroduced intervocalic s (e.g., caussa > classical causa ‘cause’). It is also clear that loanwords with intervocalic s were freely borrowed, and with the exception of the very early Greek borrowing tūs-tūris ‘incense’, none of them were adapted in any way to conform to a putative *VsV constraint:

(1) Greek loanwords: ambrosia ‘id.’, *asōtus ‘libertine’ (acc.sg. asōtum), basis ‘pedestal’, basilica ‘public hall’, casia ‘cinnamon’ (cf. cassia), cerasus ‘cherry’, gausapa ‘woolen cloth’, lasanum ‘cooking utensil’, nausea ‘id.’, pausa ‘pause’, philosophus ‘philosopher’, poēsis ‘poetry’, sarīsa ‘lance’, seselis ‘seseli’
(2) Celtic loanwords: gaesī ‘javelins’, omāsum ‘tripe’
(3) Germanic loanwords: glaesum ‘amber’, bisōntes ‘wild oxen’
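As promised, here is a toy sketch of rhotacism as a string rewrite; the vowel inventory and the reconstructed pre-forms are illustrative only.

```python
import re

# Intervocalic s rewrites to r; all other instances of s survive.
VOWELS = "aeiouāēīōū"

def rhotacize(form: str) -> str:
    return re.sub(f"(?<=[{VOWELS}])s(?=[{VOWELS}])", "r", form)

print(rhotacize("flōsis"))  # 'flōris' (cf. flōs 'flower', gen.sg. flōris)
print(rhotacize("caussa"))  # 'caussa': the geminate is not intervocalic
```

Note that the geminate escapes the rule, which is exactly why later degemination could reintroduce intervocalic s.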

References

Gorman, K. 2014a. A program for phonotactic theory. In Proceedings of the 47th Annual Meeting of the Chicago Linguistic Society, pages 79-93.
Gorman, K. 2014b. Exceptions to rhotacism. In Proceedings of the 48th Annual Meeting of the Chicago Linguistic Society, pages 279-293.
Iverson, G. K. and Salmons, J. C. 2005. Filling the gap: English tense vowel plus final /š/. Journal of English Linguistics 33: 1-15.

Allophones and pure allophones

I assume you know what an allophone is. But what this blog post supposes […beat…] is that you could be more careful about how you talk about them.

Let us suppose the following:

  • the phonemic inventory of some grammar G contains t and d
  • it does not contain s or z
  • yet instances of s or z are found on the surface

Thus we might say that /t, d/ are phonemes and [s, z] are allophones (perhaps of /t, d/: maybe in G, derived coronal stop clusters undergo assibilation).

Let us suppose that you’re writing the introduction to a phonological analysis of G, and in Table 1—it’s usually Table 1—you list the phonemes you posit, sorted by place and manner. Perhaps you will place s and z in italics or brackets, and the caption will indicate that these are segments which are allophones.

I find this imprecise. It suggests that all instances of surface t or d are phonemic (or perhaps more precisely, and more vacuously, are faithful allophones),1 which need not be the case. Perhaps G has a rule of perseveratory obstruent cluster voice assimilation, so that one can derive surface [pt] from /…p-d…/, surface [gd] from /…g-t…/, and so on. The confusion here seems to be that we are implicitly treating the sets of allophones and phonemes as disjoint, when in fact the former is a superset of the latter. What we seem to actually mean when we say that [s, z] are allophones is rather that they are pure allophones: allophones which are not also phonemes.
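In other words, and at the risk of belaboring a set-theoretic triviality (the inventories below are hypothetical ones for the running example):

```python
# Hypothetical inventories for the grammar G sketched above.
phonemes = {"p", "t", "k", "b", "d", "g", "a", "i", "u"}
surface = phonemes | {"s", "z"}  # every segment found on the surface
# Pure allophones are the surface segments that are not also phonemes.
pure_allophones = surface - phonemes
print(pure_allophones)  # {'s', 'z'}
```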

Another possible way to clarify the hypothetical Table 1 is to state exactly which phonemes s and z are allophones of. For instance, if they are purely derived by assibilation, we might write that “the stridents s, z are (pure) allophones of the associated coronal stops /t, d/, respectively”. However, since this might be beside the point, and because there’s no principled upper bound on how many phonemic sources a given (pure or otherwise) allophone might have, I think it should suffice to state that s and z are pure allophones and leave it at that.2

This imprecision, I suspect, is a hangover from structuralist phonemics, which viewed allophony as separate from (and arguably more privileged or entrenched than) alternation (then called morphophonemics). Of course, this assumption does not appear to have any compelling justification, and as Halle (1959) shows, it leads to substantial duplication (in the sense of Kisseberth 1970) between rules of allophony and rules of neutralization.3 Most linguists since Halle seem to have found the structuralist stipulation, and the duplication it gives rise to, aesthetically displeasing; I concur.

Endnotes

  1. I leave open the question of whether surface representations ever contain phonemes: perhaps vacuous rules “faithfully” convert them to allophones.
  2. One could (and perhaps should) go further into feature logic and regard both phonemes and pure allophones as mere bundles of features linked to a single timing slot. However, this makes things harder to talk about.
  3. I do not assume that “neutralization” is a grammatical primitive. It is easily defined (see Bale & Reiss 2018, ch. 20) but I see no reason to suppose that grammars distinguish neutralizing processes from other processes.

References

Bale, A. and Reiss, C. 2018. Phonology: A Formal Introduction. MIT Press.
Halle, M. 1959. The Sound Pattern of Russian. Mouton.
Kisseberth, C. W. 1970. On the functional unity of phonological rules. Linguistic Inquiry 1(3): 291-306.

The alternation phonotactic hypothesis

The hypothesis

In a recent handout, I discuss the following hypothesis, implicit in my dissertation (Gorman 2013):

(1) Alternation Phonotactic Hypothesis: Let A, B, C, and D be (possibly-null) string sets. Then, if a grammar G contains a surface-true rule of alternation A → B / C __ D, nonce words containing the subsequence CAD are ill-formed for speakers of G.

Before I continue, note that this definition is “phenomenological” in the sense that it refers to two notions—alternations and surface-true-ness—which are not generally considered to be encoded directly in the grammar. Regarding the notion of alternations, it is not difficult to formalize whether or not a rule is alternating.

(2) Let a rule be defined by possibly-null string sets A, B, C, and D as in (1). If any elements of B are phonemes, then the rule is a rule of alternation.

(3) [ditto] If no elements of B are phonemes, then the rule is a rule of (pure) allophony.

But from the argument against bi-uniqueness in The Sound Pattern of Russian (Halle 1959), it follows that we should reject a grammar-internal distinction between rules of alternation and rules of allophony, and subsequent theory provides no way to encode this distinction in the grammar. Similarly, it is not hard to define what it means for a rule to be surface-true.

(4) [ditto] If no instances of CAD are generated by the grammar G, then the rule is surface-true.

But there does not seem to be much reason for that notion to be encoded in the grammar, and the theory does not provide any way to encode it.1 Note further that I am deliberately stating in (1) that a constraint against CAD has been “projected” from the alternation, rather than treating such constraints as autonomous entities of the theory, as is done in Optimality Theory (OT) and friends. Finally, I have phrased the hypothesis in terms of grammaticality (“are ill-formed”) rather than acceptability.
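For concreteness, here is a minimal sketch of the classifications in (2)-(4), with rules and the output language of G simplified to Python strings and sets; all names and data are illustrative.

```python
def is_alternation(b: set[str], phonemes: set[str]) -> bool:
    """(2)/(3): a rule is one of alternation iff some element of its
    structural change B is a phoneme; otherwise it is pure allophony."""
    return bool(b & phonemes)

def is_surface_true(c: str, a: str, d: str, outputs: set[str]) -> bool:
    """(4): a rule A -> B / C __ D is surface-true iff no output
    generated by G contains the subsequence CAD."""
    return not any(c + a + d in form for form in outputs)

# E.g., an assibilation rule t -> s / __ i: if s is not a phoneme of
# G, the rule is pure allophony; it is surface-true iff no output
# contains the banned "ti" sequence.
print(is_alternation({"s"}, {"p", "t", "k", "d"}))       # False
print(is_surface_true("", "t", "i", {"pasi", "tasta"}))  # True
```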

Why might the Alternation Phonotactic Hypothesis (henceforth, APH) be true? First, I take it as obvious that alternations are more entrenched facts about grammars than pure allophony. For instance, in English, stop aspiration could be governed by a rule of allophony, but it is also plausible that English speakers simply represent aspirated stops as such in their lexical entries, since there are no aspiration alternations. This point was made separately by Dell (1973) and Stampe (1973), and motivates the notion of lexicon optimization in OT. In contrast, rules of alternation (or something like them) are actually necessary to obtain the proper surface forms: an English speaker who does not have a rule of obstruent voice assimilation will simply not produce the right allomorphs of various affixes, whereas the same speaker need not encode a process of nasalization—which in English is clearly allophonic (see, e.g., Kager 1999:31f.)—to obtain the correct outputs. Given that alternations are entrenched in the relevant sense, it is not impossible to imagine that speakers might “project” constraints out of alternation generalizations in the manner described above. Such constraints could be used during online processing, assuming a strong isomorphism between the grammatical representations used during production and perception.2 Secondly, since not all alternations are surface-true, it seems reasonable to limit this process of projection to those which are: were one to project non-surface-true constraints in this fashion, the speaker would find themselves in the awkward position of treating actual words as ill-formed.3,4

The APH is usefully contrasted with the following:

(5) Lexicostatistic Phonotactic Hypothesis: Let A, C, and D be (possibly-null) string sets. Then, if CAD is statistically underrepresented (in a sense to be determined) in the lexicon L of a grammar G, nonce words containing the subsequence CAD are ill-formed for speakers of G.

According to the LSPH (as we’ll call it), phonotactic knowledge is projected not from alternations but from statistical analysis of the lexicon. The LSPH is at least implicit in the robust cottage industry that uses statistical and/or computational modeling of the lexicon to infer the existence of phonotactic generalizations; it is notable that virtually none of this work discusses anything like the APH. Finally, one should note that the APH and the LSPH do not exhaust the set of possibilities. For instance, Berent et al. (2007) and Daland et al. (2011) test for effects of the Sonority Sequencing Principle, a putative linguistic universal, on wordlikeness judgments. And some have denied the very existence of phonotactic constraints.
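One common way to cash out “statistically underrepresented” (my choice for illustration, not something the LSPH itself dictates) is the ratio of observed to expected counts, where the expected count assumes the parts occur independently. Here is a sketch using the *V̄ʃ# case discussed just below; the input format (one space-separated, ARPAbet-style transcription per line) is hypothetical.

```python
TENSE = {"IY", "EY", "UW", "OW", "AA"}  # an illustrative set

def observed_expected(path: str) -> float:
    """Returns O/E for word-final tense vowel + SH; values well
    below 1 indicate that the sequence is underrepresented."""
    n = penult_tense = final_sh = observed = 0
    with open(path) as source:
        for line in source:
            phones = line.split()
            if len(phones) < 2:
                continue
            n += 1
            penult_tense += phones[-2] in TENSE
            final_sh += phones[-1] == "SH"
            observed += phones[-2] in TENSE and phones[-1] == "SH"
    # Expected count under independence of the two word-final slots.
    expected = (penult_tense / n) * (final_sh / n) * n
    return observed / expected if expected else float("nan")
```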

Gorman 2013 reviews some prior results arguing in favor of the APH, which I’ll describe below.

Consider the putative English phonotactic constraint *V̄ʃ#, a constraint against word-final sequences of tense vowels followed by [ʃ] proposed by Iverson & Salmons (2005). Exceptions to this generalization tend to be markedly foreign (e.g., cartouche), to be proper names (e.g., LaRouche), or to convey an “affective, onomatopoeic quality” (e.g., sheesh, woosh). As Gorman (2013:43f.) notes, this constraint is statistically robust, but Hayes & White (2013) report that it has no measurable effect on English speakers’ wordlikeness judgments. In contrast, three English alternation rules (nasal place assimilation, obstruent voice assimilation, and degemination) have a substantial impact on wordlikeness judgments (Gorman 2013, ch. 4).

A second, more elaborate example comes from Turkish. Lees (1966a,b) proposes three phonotactic constraints in this language: backness harmony, roundness harmony, and labial attraction. All three of these constraints have exceptions, but Gorman (2013:57-60) shows that they are statistically robust generalizations. Thus, under the LSPH, speakers ought to be sensitive to all three.

Endnotes

  1. I note that the CONTROL module proposed by Orgun & Sprouse (1999) might be a mechanism by which this information could be encoded.
  2. Some evidence that phonotactic knowledge is deployed in production comes from the study of Finnish and Turkish, both of which have robust vowel harmony. Suomi et al. (1997) and Vroomen et al. (1998) find that disharmony seemingly acts as a cue for word boundaries in Finnish, and Kabak et al. (2010) find something similar for Turkish, but not for French, which lacks harmony.
  3. Durvasula & Kahng (2019) find that speakers do not necessarily judge a nonce word to be ill-formed just because it fails to follow certain subtle allophonic generalizations, which suggests that the distinction between allophony and alternation may be important here.
  4. I note that it has sometimes been proposed that actual words of G may in fact be gradiently marked or otherwise degraded with respect to the grammar G if they violate phonotactic constraints projected from G (e.g., Coetzee 2008). However, the null hypothesis, it seems to me, is that all actual words are also possible words, and so it does not make sense to speak of actual words as marked or ill-formed, gradiently or otherwise.

References

Berent, I., Steriade, D., Lennertz, T., and Vaknin, V. 2007. What we know about what we have never heard: evidence from perceptual illusions. Cognition 104: 591-630.
Coetzee, A. W. 2008. Grammaticality and ungrammaticality in phonology. Language 84(2): 218-257. [I critique this briefly in Gorman 2013, p. 4f.]
Daland, R., Hayes, B., White, J., Garellek, M., Davis, A., and Norrmann, I. 2011. Explaining sonority projection effects. Phonology 28: 197-234.
Dell, F. 1973. Les règles et les sons. Hermann.
Durvasula, K. and Kahng, J. 2019. Phonological acceptability is not isomorphic with phonological grammaticality of stimulus. Talk presented at the Annual Meeting on Phonology.
Gorman, K. 2013. Generative phonotactics. Doctoral dissertation, University of Pennsylvania.
Halle, M. 1959. The Sound Pattern of Russian. Mouton.
Hayes, B. and White, J. 2013. Phonological naturalness and phonotactic learning. Linguistic Inquiry 44: 45-75.
Iverson, G. K. and Salmons, J. C. 2005. Filling the gap: English tense vowel plus final /š/. Journal of English Linguistics 33: 1-15.
Kabak, B., Maniwa, K., and Kazanina, N. 2010. Listeners use vowel harmony and word-final stress to spot nonsense words: a study of Turkish and French. Journal of Laboratory Phonology 1: 207-224.
Kager, R. 1999. Optimality Theory. Cambridge University Press.
Lees, R. B. 1966a. On the interpretation of a Turkish vowel alternation. Anthropological Linguistics 8: 32-39.
Lees, R. B. 1966b. Turkish harmony and the description of assimilation. Türk Dili Araştırmaları Yıllığı Belletene 1966: 279-297.
Orgun, C. O. and Sprouse, R. 1999. From MPARSE to CONTROL: deriving ungrammaticality. Phonology 16: 191-224.
Stampe, D. 1973. A Dissertation on Natural Phonology. Garland. [I don’t have this in front of me, but if I remember correctly, Stampe argues that non-surface-true phonological rules are essentially second-class citizens.]
Suomi, K., McQueen, J. M., and Cutler, A. 1997. Vowel harmony and speech segmentation in Finnish. Journal of Memory and Language 36: 422-444.
Vroomen, J., Tuomainen, J. and de Gelder, B. 1998. The roles of word stress and vowel harmony in speech segmentation. Journal of Memory and Language 38: 133-149.

Logistic regression as the bare minimum. Or, Against naïve Bayes

When I teach introductory machine learning, I begin with (categorical) naïve Bayes classifiers. These are arguably the simplest possible supervised machine learning models, and can be explained quickly to anyone who understands probability and the method of maximum likelihood estimation. I then pivot and introduce logistic regression and its various forms. Ng & Jordan (2002) provide a nice discussion of how the two relate, and I encourage students to read their study.

Logistic regression is a more powerful technique than naïve Bayes. First, it is “easier” in some sense (Breiman 2001) to estimate the conditional distribution, as one does in logistic regression, than to model the joint distribution, as one does in naïve Bayes. Secondly, logistic regression can be learned using standard (online) stochastic gradient descent methods. Finally, it naturally supports the conventional regularization strategies needed to avoid overfitting. For these reasons, in 2022, I consider regularized logistic regression the bare minimum supervised learning method: the least sophisticated method that is possibly good enough. The pedagogical problem I then face is convincing students not to use naïve Bayes, given that it is obsolete (it is virtually always inferior to regularized logistic regression) and that tools like scikit-learn (Pedregosa et al. 2011) make it almost trivial to swap one machine learning method for the other.
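For the sake of concreteness, here is a minimal sketch of the swap in scikit-learn; the data is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
# 1,000 samples, five categorical features with four levels each;
# the label depends (noisily) on the first two features.
X = rng.integers(0, 4, size=(1000, 5))
y = (X[:, 0] + X[:, 1] + rng.integers(0, 3, size=1000) > 4).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Naïve Bayes models the joint distribution P(x, y)...
nb = CategoricalNB().fit(X_train, y_train)
# ...whereas logistic regression models P(y | x) directly, here over
# one-hot-encoded features and with the default L2 regularization.
lr = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    LogisticRegression(max_iter=1000),
).fit(X_train, y_train)

print(f"NB accuracy: {nb.score(X_test, y_test):.3f}")
print(f"LR accuracy: {lr.score(X_test, y_test):.3f}")
```

The point is not the specific accuracy numbers, which will vary, but how little ceremony the swap requires.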

References

Breiman, Leo. 2001. Statistical modeling: the two cultures. Statistical Science 16:199-231.
Ng, Andrew Y., and Michael I. Jordan. 2002. On discriminative vs. generative classifiers: a comparison of logistic regression and naive Bayes. In Proceedings of NeurIPS, pages 841-848.
Pedregosa, Fabian, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, …, and Édouard Duchesnay. 2011. Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12:2825-2830.

On “alternative” grammar formalisms

A common suggestion to graduate students in linguistics (computational or otherwise) is to study “alternative” grammar formalisms [not my term-KBG]. The implication is that the student is only familiar with formal grammars inspired by the supposedly hegemonic generativist tradition—though it is not clear whether we’re talking about the GB-lite of the Penn Treebank, the minimalist grammars (MGs) of Ed Stabler, or perhaps something else—and that the set of “alternatives” includes lexical-functional grammars (LFGs), tree-adjoining grammars (TAGs), combinatory categorial grammars (CCGs), head-driven phrase structure grammar (HPSG), or one of the various forms of construction grammar. I would never say that students should study less rather than more, but I am not convinced that this diversity of formalism is key to training well-rounded students. TAGs and CCGs are known to be strongly equivalent (Schiffer & Maletti 2021), and the major unification-based grammar systems (which include CCGs and HPSGs, and formalized construction grammars too) are equivalent to MGs. Perhaps we should be emphasizing similarities rather than differences, insofar as those differences are not reflected in relative generative capacity.

Another useful way to determine the relative utility of alternative formalisms is to look at their actual use in wide-coverage computational grammars since, as Chomsky (1981:6) says, it is possible to put systems to the test “only to the extent that we have grammatical descriptions that are reasonably compelling in some domain…”. Put another way, grammar frameworks both hegemonic and alternative can be assessed for coverage (which can be extensive, in some languages and domains) or general utility, rather than for the often-spicy rhetoric of their proponents.

Finally, it is at least possible that some alternative frameworks are simply losers of a multi-agent coordination game, and that at least some consolidation is desirable.

References

Chomsky, N. 1981. Lectures on Government and Binding. Foris.
Schiffer, L. K. and Maletti, A. 2021. Strong equivalence of TAG and CCG. Transactions of the Association for Computational Linguistics 9: 707-720.

Academic reviewing in NLP

It is obvious to me that NLP researchers are, on average, submitting manuscripts far earlier and more often than they ought to. The average manuscript I review is typo-laden, full of figures and tables far too small to actually read or intruding on the margins, with an unusable bibliography that the authors have clearly never inspected. Sometimes I receive manuscripts whose actual titles are transparently ungrammatical.

There are several reasons this is bad, but most of all it is a waste of reviewer time, since the reviewers have to point out (in triplicate or worse) minor issues that would have been flagged by proof-readers, advisors, or colleagues, had they been involved before submission. Then, once these issues are corrected, the reviewers are again asked to read the paper and confirm they have been addressed. This is work the authors could have done themselves, but which is instead pushed onto committees of unpaid volunteers.

The second issue is that the reviewer pool lacks relevant experience. I am regularly tasked with “meta-reviewing”, or critically summarizing the reviews. This is necessary in part because many, perhaps a majority, of the reviewers simply do not know how to review an academic paper, having not received instruction on this topic from their advisors or mentors, and their comments need to be recast in language that can be quickly understood by conference program committees.

[Moving from general to specific.]

I have recently been asked to review an uncommonly large collection of papers on the topic of prompt engineering. Several years ago, it became apparent that neural network language models, trained on enormous amounts of text data, could often provide locally coherent (though rarely globally coherent) responses to prompts or queries. The parade example of this type of model is GPT-2. For instance, if the prompt was:

Malfoy hadn’t noticed anything.

the model might continue:

“In that case,” said Harry, after thinking it over, “I suggest you return to the library.”
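(For the curious: behavior of this sort is easy to reproduce, modulo sampling noise. The sketch below uses the Hugging Face transformers library, which is my choice for illustration, not necessarily what produced the example above; the sampling parameters are likewise illustrative.)

```python
from transformers import pipeline

# Download GPT-2 and wrap it in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")
prompt = "Malfoy hadn't noticed anything."
# Sampled continuations vary from run to run.
result = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)
print(result[0]["generated_text"])
```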

I assume this is because there’s fan fiction in the corpus, but I don’t really know. Now it goes without saying that at no point will, Facebook, say, launch a product in which a gigantic neural network is allowed to regurgitate Harry Potter fan fiction (!) at their users. However, researchers persist for some reason (perhaps novelty) to try to “engineer” clever prompts that produce subjectively “good” responses, rather than attempting to understand how any of this works. (It is not an overstatement to say that we have little idea why neural networks, and the methods we use to train them in particular, work at all.) What am I to do when asked to meta-review papers like this? I try to remain collegial, but I’m not sure this kind of work ought to exist at all. I consider GPT-2 a billionaire plaything, a rather wasteful one at that, and it is hard for me to see how this line of work might make the world a better place.