When rule directionality does and does not matter

At the Graduate Center we recently hosted an excellent lecture by Jane Chandlee of Haverford College. Those familiar with her work may know that she has been studying, for some time now, two classes of string-to-string functions called the input strictly local (ISL) and output strictly local (OSL) functions. These are generalizations of the familiar notion of the strictly local (SL) languages proposed by McNaughton and Papert (1971) many years ago. For definitions of ISL and OSL functions, see Chandlee et al. 2014 and Chandlee 2014. Chandlee and colleagues have long argued that virtually all phonological processes are ISL, OSL, or both (note that the intersection of the two classes is non-empty).

In her talk, Chandlee attempted to formalize the notions of iterativity and non-iterativity in phonology with reference to ISL and OSL functions. One interesting side effect of this work is that one can, quite easily, determine what makes a phonological process direction-invariant or direction-specific. In FSTP (Gorman & Sproat 2021:§5.1.1) we describe three notions of rule directionality (ones which are quite a bit less general than Chandlee’s notions) from the literature, but conclude: “Note, however, that directionality of application has no discernible effect for perhaps the majority of rules, and can often be ignored.” (op. cit., 53) We didn’t bother to determine when this is the case, but Chandlee shows that the rules which are invariant to direction of application (in our sense) are exactly those which are ISL ∩ OSL; that is, they describe processes which are both ISL and OSL, in the sense that they are string-to-string functions (or maps, to use her term) which can be encoded either as ISL or OSL.

As Richard Sproat (p.c.) points out to me, there are weaker notions of direction-invariance we may care about in the context of grammar engineering. For instance, it might be the case that some rule is, strictly speaking, direction-specific, but the language of input strings is not expected to contain any relevant examples. I suspect this is quite common also.
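To make direction-specificity concrete, here is a minimal sketch in Python. The rule, strings, and function names are my own invented illustration, not anything from Chandlee's formalism: an iterative rule a → b / _ b feeds itself when applied right-to-left but not left-to-right, so the two directions of application disagree.

```python
def apply_ltr(s, target="a", repl="b", context="b"):
    """Apply target -> repl / _ context iteratively, left to right."""
    chars = list(s)
    for i in range(len(chars) - 1):
        if chars[i] == target and chars[i + 1] == context:
            chars[i] = repl
    return "".join(chars)


def apply_rtl(s, target="a", repl="b", context="b"):
    """Apply the same rule iteratively, right to left."""
    chars = list(s)
    for i in range(len(chars) - 2, -1, -1):
        if chars[i] == target and chars[i + 1] == context:
            chars[i] = repl
    return "".join(chars)


# Right-to-left application feeds the rule (each rewrite creates a new
# context to its left); left-to-right application does not.
print(apply_ltr("aab"))  # abb
print(apply_rtl("aab"))  # bbb
```

On an input like "cab", where no application creates a new context, the two directions agree, which is the (direction-invariant) situation the quoted passage from FSTP describes.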

References

Chandlee, J. 2014. Strictly local phonological processes. Doctoral dissertation, University of Delaware.
Chandlee, J., Eyraud, R., and Heinz, J. 2014. Learning strictly local subsequential functions. Transactions of the Association for Computational Linguistics 2: 491-503.
Gorman, K., and Sproat, R. 2021. Finite-State Text Processing. Morgan & Claypool.
McNaughton, R., and Papert, S. A. 1971. Counter-Free Automata. MIT Press.

A* shortest string decoding for non-idempotent semirings

I recently completed some work, in collaboration with Google’s Cyril Allauzen, on a new algorithm for computing the shortest string through a weighted finite-state automaton. For so-called path semirings, the shortest string is given by the shortest path, but up until now, there was no general-purpose algorithm for computing the shortest string over non-idempotent semirings (like the log or probability semiring). Such an algorithm would make it much easier to decode with interpolated language models or elaborate channel models in a noisy-channel formalism. In this preprint, we propose such an algorithm using A* search and lazy (“on-the-fly”) determinization, and prove that it is correct. The algorithm in question is implemented in my OpenGrm-BaumWelch library by the baumwelchdecode command-line tool.
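To see why non-idempotent semirings make this problem hard: the weight of a string is the semiring sum over all of its accepting paths, so the best single path and the best string can disagree. Here is a toy Python illustration (the automaton is reduced to an invented list of weighted paths; this is not the A* algorithm from the preprint, just the phenomenon that motivates it).

```python
from collections import defaultdict

# A toy acceptor represented as a bag of accepting paths, each a
# (string, probability) pair. The numbers are invented: the string
# "ab" is carried by two paths, "ba" by one.
paths = [("ab", 0.3), ("ab", 0.3), ("ba", 0.4)]

# The best single path yields "ba" (0.4 > 0.3)...
best_path_string = max(paths, key=lambda p: p[1])[0]

# ...but in a non-idempotent semiring (here, the probability
# semiring) a string's weight is the SUM over all its paths, so the
# best string is "ab" (0.3 + 0.3 = 0.6 > 0.4).
string_weight = defaultdict(float)
for string, weight in paths:
    string_weight[string] += weight
best_string = max(string_weight, key=string_weight.get)

print(best_path_string)  # ba
print(best_string)       # ab
```

Naively enumerating paths like this is of course intractable in general; collapsing the paths for each string is what (lazy) determinization accomplishes in the actual algorithm.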

WFST talk

I have posted a lightly-revised slide deck from a talk I gave at Johns Hopkins University here. In it, I give my most detailed description yet of the weighted finite-state transducer formalism and describe two reasonably interesting algorithms: the optimization algorithm underlying Pynini’s optimize method and Thrax’s Optimize function, and a new A*-based single shortest string algorithm for non-idempotent semirings underlying BaumWelch’s baumwelchdecode CLI tool.

Dutch names in LaTeX

One thing I recently figured out is a sensible way to handle Dutch names (i.e., those that begin with den, van, or similar particles). Traditionally, these particles are part of the cited name in author-date citations (e.g., den Dikken 2003, van Oostendorp 2009) but are ignored when alphabetizing (thus, van Oostendorp is alphabetized between Orgun & Sprouse and Otheguy, not between Vago and Vaux). This is not something handled automatically by tools like LaTeX and BibTeX, but it is relatively easy to annotate name particles like this so that they do the right thing.

First, place, at the top of your BibTeX file, the following:

@preamble{{\providecommand{\noopsort}[1]{}}}

Then, in the individual BibTeX entries, wrap the author field with this command like so:

 author = {{\noopsort{Dikken}{den Dikken}}, Marcel},

This preserves the correct in-text author-date citations, but also gives the intended alphabetization in the bibliography.
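Putting the pieces together, a complete entry might look like the following (the bibliographic details here are entirely made up for illustration); it will be cited in-text as “van Oostendorp (2009)” but alphabetized under O.

```bibtex
@preamble{{\providecommand{\noopsort}[1]{}}}

@article{vanoostendorp-example,
  author  = {{\noopsort{Oostendorp}{van Oostendorp}}, Marc},
  title   = {An example title},
  journal = {An example journal},
  year    = {2009}
}
```

The trick is that \noopsort's argument ("Oostendorp") is what BibTeX sees when sorting, while the expansion of the command is empty, so only "van Oostendorp" is ever typeset.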

Note of course that not all people with van (etc.) names in the Anglosphere treat the van as if it were a particle to be ignored; a few deliberately alphabetize their last name as if it begins with v.

X moment

A Reddit moment is an expression used to refer to a certain type of cringe ‘cringeworthy behavior or content’ judged characteristic of Redditors, habitual users of the forum website reddit.com. It seems hard to pin down what makes cringe Redditor-like, but discussion on Urban Dictionary suggests that one salient feature is a belief in one’s superiority, or the superiority of Redditors in general; a related feature is irl behavior that takes Reddit too seriously. The normal usage is as an interjection of sorts; presented with cringeworthy internet content (a screenshot or URL), one might simply respond “Reddit moment”.

However, Reddit isn’t the only community that can have a similar type of pejorative X moment. One can find many instances of crackhead moment, describing unpredictable or spazzy behavior. A more complicated example comes from a friend, who shared a link about a software developer who deliberately sabotaged a widely used JavaScript software library to protest the Russian invasion of Ukraine. JavaScript, and the Node.js community in particular, has been extremely vulnerable to both deliberate sabotage and accidental bricking ‘irreversible destruction of technology’, and naturally my friend sent the link with the commentary “js moment”. The one thing that seems to unite all X moment snowclones is a shared negative evaluation of the community in the common ground.

Evaluations from the past

In a literature review, speech and language processing specialists often feel tempted to report evaluation metrics like accuracy, F-score, or word error rate for systems described in the literature review. In my opinion, this is only informative if the prior and present work use the exact same data set(s) for evaluation. (Such results should probably be presented in a table along with results from the present work, not in the body of the literature review.) If, instead, those systems were tested on some proprietary data set, an obsolete corpus, or a data set the authors of the present work have declined to evaluate on, this information is inactionable. Authors should omit this information, and reviewers and editors should insist that it be omitted.

It is also clear to me that these numbers are rarely meaningful as measures of how difficult a task is “generally”. To take an example from an unnamed 2019 NAACL paper (one guilty of the sin described above), word error rates on a single task in a single language range between 9.1% and 23.61% (note also the mixed precision). What could we possibly conclude from this enormous spread of results across different data sets?

Country (dead)naming

Current events reminded me of an ongoing Discourse about how we ought to refer to the country Ukraine in English. William Taylor, US ambassador to the country under George W. Bush, is quoted on the subject in this Time magazine piece (“Ukraine, Not the Ukraine: The Significance of Three Little Letters”, March 5th, 2014; emphasis mine), which is circulating again today:

The Ukraine is the way the Russians referred to that part of the country during Soviet times … Now that it is a country, a nation, and a recognized state, it is just Ukraine.

Apparently they don’t fact-check claims like this, because this is utter nonsense. Russian doesn’t have definite articles, i.e., words like the. There is simply no straightforward way to express the contrast between the Ukraine and Ukraine in Russian (or in Ukrainian for that matter).

Now, it’s true that the before Ukraine has long been proscribed in English, but this seems to be more a matter of style—the variant with the article sounds archaic to my ear—than ideology. And, in Russian, there is variation between в Украине and на Украине, both of which I would translate as ‘in Ukraine’. My understanding is that both have been attested for centuries, but one (на) was more widely used during the Soviet era and thus the other (в) is thought to emphasize the country’s sovereignty in the modern era. As I understand it, that one preposition is indexical of Ukrainian nationalist sentiment and the other is indexical of Russian revanchist-nationalist sentiment is more or less linguistically arbitrary in the Saussurean sense. Or, more weakly, the connotative differences between the two prepositions are subtle and don’t map cleanly onto the relevant ideologies. But I am not a native (or even competent) speaker of Russian so you should not take my word for it.

Taylor, in the Time article, continues to argue that US media should use the Ukrainian-style transliteration Kyiv instead of the Russian-style transliteration Kiev. This is a more interesting prescription, at least in that the linguistic claim—that Kyiv is the standard Ukrainian transliteration and Kiev is the standard Russian transliteration—is certainly true. However, it probably should be noted that dozens of other cities and countries in non-Anglophone Europe are known by their English exonyms, and no one seems to be demanding that Americans start referring to Wien [viːn] ‘Vienna’ or Moskva ‘Moscow’. In other words Taylor’s prescription is a political exercise rather than a matter of grammatical correctness. (One can’t help but notice that Taylor is a retired neoconservative diplomat pleading for “political correctness”.)

On conspiracies

Kisseberth (1970) introduces the notion of conspiracies, cases in which a series of phonological rules in a single language “conspire” to create similar output configurations. Supposedly, Haj Ross chose the term “conspiracy”, and it is perhaps not an accident that the term he chose immediately reminds one of conspiracy theory, which has a strong negative connotation implying that the existence of the conspiracy cannot be proven. Kisseberth’s discovery of conspiracies motivated the rise of Optimality Theory (OT) two decades later—Prince & Smolensky (1993:1) refer to conspiracies as a “conceptual crisis” at the heart of phonological theory, and Zuraw (2003) explicitly links Kisseberth’s data to OT—but curiously, it seemingly had little effect on contemporary phonological theorizing. (A positivist might say that the theoretical technology needed to encode conspiratorial thinking simply did not exist at the time; a cynic might say that contemporaries did not take Kisseberth’s conspiratorial thinking seriously until it became easy to do so.) I discern two major objections to the logic of conspiracies: the evolutionary argument and the prosodic argument, which I’ll briefly review.

The evolutionary argument

What I am calling the evolutionary argument was first made by Kiparsky (1973:75f.) and is presented as an argument against OT by Hale & Reiss (2008:14). Roughly, if a series of rules leads to the same set of output configurations, the rules must be surface true, or they would not contribute to the putative conspiracy. Since surface-true rules are assumed to be easy to learn, especially relative to opaque rules, which are assumed to be difficult to learn, and since failure to learn rules would contribute to language change, grammars will naturally accumulate functionally related surface-true rules. I think we should question the assumption (au courant in 1973) that opacity is the end-all of what makes a rule difficult to acquire, but otherwise I find this basic logic sound.

The prosodic argument

At the time Kisseberth was writing, standard phonological theory included few prosodic primitives; even the notion of the syllable was considered dubious. Subsequent revisions of the theory have introduced rich hierarchies of prosodic primitives. In particular, a subsequent generation of phonologists hypothesized that speakers “build” or “parse” sequences of segments into onsets and rimes, syllables, and feet, with repairs like stray erasure (i.e., deletion of unsyllabified segmental material) or epenthesis used to resolve conflicts (McCarthy 1979, Steriade 1982, Itô 1986). It seems to me that this approach accounts for most of the facts of Yowlumne (formerly Yawelmani) reviewed by Kisseberth in his study:

  1. there are no word-initial CC clusters
  2. there are no word-final CC clusters
  3. derived CCCs are resolved either by deletion or i-epenthesis
  4. there are no CCC clusters in underlying form

The relevant observation that links all these facts is simply that Yowlumne does not permit branching onsets or codas, but more specifically, Yowlumne’s syllable-parsing algorithm does not build branching onsets or codas. This immediately accounts for facts #1-2. Assuming the logic of McCarthy and contemporaries, #3 is also unsurprising: these clusters simply cannot be realized faithfully; the fact that there are multiple resolutions for the *CCC pathology is beside the point. And finally, adopting the logic that Prince & Smolensky (1993:54) were later to call Stampean occultation, the absence of underlying CCC clusters follows from their inability to surface, since the generalizations in question are all surface-true. (Here, we are treading closely to Kiparsky’s thoughts on the matter too.) Crucially, the analysis given above does not reify any surface constraints; the facts all follow from the feed-forward derivational structure of prosodically-informed phonological theory current a decade before Prince & Smolensky.
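The parsing story can be sketched with a toy parser over schematic C/V strings. The function and representation below are my own invention, not anyone's actual proposal: a greedy (C)V(C) parser that never builds branching onsets or codas leaves exactly one stray C in word-initial CC, word-final CC, and medial CCC configurations, and that stray segment is what deletion or epenthesis must then repair.

```python
def syllabify(s):
    """Greedily parse a C/V string into (C)V(C) syllables.

    No branching onsets or codas are ever built. Returns a pair
    (syllables, stray), where stray lists the indices of segments
    left unparsed (candidates for stray erasure or epenthesis).
    """
    sylls, stray, i = [], [], 0
    while i < len(s):
        onset = ""
        # Take a single-C onset only if a vowel follows.
        if s[i] == "C" and i + 1 < len(s) and s[i + 1] == "V":
            onset, i = "C", i + 1
        if i < len(s) and s[i] == "V":
            nucleus, i = "V", i + 1
            coda = ""
            # Take a single-C coda only if that C is not a better
            # onset for a following vowel.
            if (i < len(s) and s[i] == "C"
                    and not (i + 1 < len(s) and s[i + 1] == "V")):
                coda, i = "C", i + 1
            sylls.append(onset + nucleus + coda)
        else:
            # Unsyllabifiable segment: mark it stray and move on.
            stray.append(i)
            i += 1
    return sylls, stray


print(syllabify("CCVC"))     # (['CVC'], [0]): initial CC leaves a stray C
print(syllabify("CVCC"))     # (['CVC'], [3]): final CC leaves a stray C
print(syllabify("CVCCCVC"))  # (['CVC', 'CVC'], [3]): medial CCC likewise
```

Facts #1-4 then follow as described: edge CC and derived CCC clusters each yield one stray segment, and forms whose stray segments could never be repaired are occulted from underlying representations.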

Conclusion

While Prince & Smolensky are right to say that OT provides a principled solution to Kisseberth’s notion of conspiracies, researchers in the ’70s and ’80s treated Kisseberth’s conspiracies as epiphenomena of acquisition (Kiparsky) or of prosodic structure-building (McCarthy and contemporaries). Perhaps, then, OT does not deserve credit for solving an unsolved problem in this regard. Of course, it remains to be seen whether the many implicit conjectures in these two objections can be sustained.

References

Hale, M. and Reiss, C. 2008. The Phonological Enterprise. Oxford University Press.
Itô, J. 1986. Syllable theory in prosodic phonology. Doctoral dissertation, University of Massachusetts, Amherst. Published by Garland Publishers, 1988.
Kiparsky, P. 1973. Phonological representations. In O. Fujimura (ed.), Three Dimensions of Linguistic Theory, pages 1-135. TEC Corporation.
Kisseberth, C. W. 1970. On the functional unity of phonological rules. Linguistic Inquiry 1(3): 291-306.
McCarthy, J. 1979. Formal problems in Semitic phonology and morphology. Doctoral dissertation, MIT. Published by Garland Publishers, 1985.
Prince, A., and Smolensky, P. 1993. Optimality Theory: constraint interaction in generative grammar. Rutgers Center for Cognitive Science Technical Report TR-2.
Steriade, D. 1982. Greek prosodies and the nature of syllabification. Doctoral dissertation, MIT.
Zuraw, K. 2003. Optimality Theory in linguistics. In M. Arbib (ed.), Handbook of Brain Theory and Neural Networks, pages 819-822. 2nd edition. MIT Press.

On the Germanic *tl gap

One “parochial” constraint in Germanic is the absence of branching onsets consisting of a coronal stop followed by /l/. Thus /pl, bl, kl, gl/ are all common in Germanic, but *tl and *dl are not. It is difficult to understand what might give rise to this phonotactic gap.

Blevins & Grawunder (2009), henceforth B&G, note that in portions of Saxony and points south, *kl has in fact shifted to [tl] and *gl to [dl]. This sound change has been noted in passing by several linguists, going back to at least the 19th century. This change has the hallmarks of a change from below: it does not appear to be subject to social evaluation and is not subject to “correction” in careful speech styles. B&G also note that many varieties of English have undergone this change; according to Wright, it could be found in parts of east Yorkshire. Similarly, no social stigma seems to have attached to this pronunciation, and B&G suggest it may have even made its way into American English. B&G argue that since it has occurred at least twice, KL > TL is a natural sound change in the relevant sense.

Of particular interest to me is B&G’s claim that one structural factor supporting *KL > TL is the absence of TL in Germanic before this change; in all known instances of *KL > TL, the preceding stage of the language lacked (contrastive) TL. While many linguists have argued that TL is universally marked, and that its absence in Germanic is a structural gap in the relevant sense, this does not seem to be borne out by quantitative typology of a wide range of language families.

Of course, other phonotactic gaps, even statistically robust ones, are similarly filled with ease. I submit that evidence of this sort suggests that phonologists habitually overestimate the “structural” nature of phonotactic gaps.

References

Blevins, J. and Grawunder, S. 2009. *KL > TL sound change in Germanic and elsewhere: descriptions, explanations, and implications. Linguistic Typology 13: 267-303.

The curious case of -pilled

A correspondent asks whether -pilled is a libfix. I note grillpilled (when you stop caring about politics and focus on cooking meat outdoors) and catpilled (when you get toxoplasmosis). While writing this, I was wondering whether anyone has declared themselves tennispilled; yes, someone has.

The etymology of -pilled seems clear enough. The phrase taking the {blue, red} pill from that scene in The Matrix (1999) gave rise to the idiomatic compounds blue pill and red pill. These then underwent zero derivation, giving us bluepilled and (especially) redpilled. The most common syntactic function for these two words seems to be as a sort of perfective adjective, possibly with an agentive by-phrase (e.g., “I was redpilled by Donald Trump Jr.’s IG”), but I also recognize a construction where the agent has been promoted to subject position and the object is the benefactor (e.g., “Donald Trump Jr.’s IG redpilled me”).

The thing, though, is that -pilled derives from two idiomatic compounds and still has the form of an English past participle. There is no clear evidence of recutting, just a new reading for the zero-derived pill plus the past participle marker -ed. It is thus much like other non-exactly-libfixes like -core (< hardcore) and -gate (< Watergate), in my estimation.