Optimizing three-way composition for decipherment problems

Knight et al. (2006) introduce a class of problems they call decipherment. In this scenario, we observe a “ciphertext” C, which we wish to decode. We imagine that there exists a corpus of “plaintext” P, and we wish to recover the encipherment model G that transduces from P to C. All three components can be represented as (weighted) finite-state transducers: P is a language model over plaintexts, C is a list of strings, and G is an initially-uniform transducer from P to C. We can then estimate the parameters (i.e., arc weights) of G by holding P and C constant and applying the expectation maximization algorithm (Dempster et al. 1977).

Both training and prediction require us to repeatedly compute the “cascade”, the three-way composition P ○ G ○ C. First off, two-way composition is associative, so for all a, b, c: (a ○ b) ○ c = a ○ (b ○ c). However, given any n-way composition, some associations may be radically more efficient than others. Even were the time complexity of each possible composition known, it would still be non-trivial to compute the optimal association. Fortunately, in this case we are dealing with three-way composition, for which there are only two possible associations; we simply need to compare the two.1

Composition performance depends on the sorting properties of the relevant machines. In the simplest case, the inner loop of (two-way) composition consists of a complex game of “go fish” between a state in the left-hand side automaton and a state in the right-hand side automaton. One state enumerates over its input (respectively, output) labels and queries the other state’s output (respectively, input) labels. When the state in the automaton being queried has its arcs sorted by label values, a sublinear binary search is used; otherwise, a linear-time search is required. Optimal performance obtains when the left-hand side of composition is sorted by output labels and the right-hand side is sorted by input labels.2 Naturally, we also want to perform arc-sorting offline if possible.
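
To make the “go fish” concrete, here is a toy illustration in plain Python of why label-sorting matters; this is my own sketch of the idea, not OpenFst’s actual implementation, with arcs represented as (label, next-state) pairs.

import bisect

def match_unsorted(arcs, label):
    """Linear scan: the only option when arcs are unsorted."""
    return [arc for arc in arcs if arc[0] == label]

def match_sorted(arcs, label):
    """Binary search over arcs sorted by label; finding the block of
    arcs matching `label` takes logarithmic rather than linear time."""
    lo = bisect.bisect_left(arcs, (label,))
    hi = bisect.bisect_right(arcs, (label, float("inf")))
    return arcs[lo:hi]

arcs = sorted([(3, 0), (1, 2), (3, 1), (7, 4)])
assert match_sorted(arcs, 3) == match_unsorted(arcs, 3) == [(3, 0), (3, 1)]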

Finally, OpenFst, the finite-state library we use, implements composition as an on-the-fly operation: states in the composed FST are lazily computed and stored in an LRU cache.3 Assiduous use of the cache can make it feasible to compute very large compositions when it is not necessary to visit all states of the composed machine. Today I focus on associativity and assume optimal label sorting; caching will have to wait for another day.

Our cascade consists of three weighted finite-state machines (a construction sketch follows the list):

  • P is a language model expressed as a weighted, label-sorted finite-state acceptor. The model is of order 6, uses Witten-Bell smoothing (Bell et al. 1990) with backoffs encoded as φ (i.e., “failure”) transitions, and has been shrunk to 1 million n-grams using relative entropy pruning (Stolcke 1998).
  • G is a uniform channel model encoded as a finite-state transducer. Because it is a non-deterministic transducer, it can be input label-sorted or output label-sorted, but not both.
  • C is an unweighted, label-sorted string finite-state acceptor encoding a long ciphertext.
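
For concreteness, here is a hypothetical sketch of how such a cascade’s components might be assembled using Pynini, a higher-level layer over the same bindings; the filename, the toy alphabets, and the construction details are invented for illustration (in practice P would be trained with the OpenGrm NGram tools).

import pynini

# P: a pre-trained n-gram language model over plaintext symbols, read from
# disk (hypothetical path).
P = pynini.Fst.read("plaintext_lm.fst")

# G: a uniform channel mapping any plaintext symbol to any ciphertext
# symbol: cross() builds the single-symbol transductions, union() combines
# them, and closure() extends the mapping to strings of any length.
plaintext_syms = ["a", "b", "c"]    # toy plaintext alphabet
ciphertext_syms = ["1", "2", "3"]   # toy ciphertext alphabet
G = pynini.union(
    *(pynini.cross(p, c) for p in plaintext_syms for c in ciphertext_syms)
).closure().optimize()

# C: an unweighted string acceptor over the observed ciphertext.
C = pynini.accep("31322")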

There are two possible associations, which we illustrate using the OpenFst Python bindings.4 In the first, we use a left-associative composition. Offline, before composition, we input label-sort G:

In [5]: G.arcsort("ilabel")

Then, we perform both compositions, sorting the intermediate object by output label:

In [6]: %timeit -n 10 \
   ...: partial = compose(P, G, connect=False).arcsort("olabel"); \
   ...: cascade = compose(partial, C, connect=False)
10 loops, best of 3: 41.6 s per loop

In our second design, we use the parallel right-associative construction. Offline, we output label-sort G:

In [7]: G.arcsort("olabel")

Then, we perform both compositions, sorting the intermediate object by input label:

In [8]: %timeit -n 10 \
   ...: partial = compose(G, C, connect=False).arcsort("ilabel"); \
   ...: cascade = compose(P, partial, connect=False)
10 loops, best of 3: 38.5 s per loop

So we see a modest advantage for the right-associative construction, which we exploit in OpenGrm-BaumWelch, freely available from the OpenGrm website.
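
Wrapped up as a function, the winning recipe looks like this (a minimal sketch using the same bindings as above, assuming G has already been output label-sorted offline):

def cascade(P, G, C):
    """Computes the three-way composition P ○ (G ○ C)."""
    # Inner composition first; then sort the intermediate result on the
    # side that faces P.
    partial = compose(G, C, connect=False).arcsort("ilabel")
    return compose(P, partial, connect=False)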

Endnotes

  1. There exist FST algorithms for n-ary composition (Allauzen & Mohri 2009), but in practice one can achieve similar effects using composition filters (Allauzen et al. 2010) instead.
  2. Note that acceptors which are input label-sorted are implicitly output label-sorted and vice versa, and string FSTs are input and output label-sorted by definition.
  3. In the case where one needs the entire composition at once, we can simply disable caching; in OpenFst, the result is also connected (i.e., trimmed) by default, but we disable that since we need to track the original state IDs.
  4. The timeit module is used to estimate execution times irrespective of caching.

References

Allauzen, C., and Mohri, M. 2009. N-way composition of weighted finite-state transducers. International Journal of Foundations of Computer Science 20(4): 613-627.
Allauzen, C., Riley, M., and Schalkwyk, J. 2010. Filters for efficient composition of weighted finite-state transducers. In Implementation and Application of Automata: 15th International Conference, CIAA 2010, pages 28-38. Winnipeg, Manitoba.
Bell, T. C., Cleary, J. G., and Witten, I. H. 1990. Text Compression. Englewood Cliffs, NJ: Prentice Hall.
Dempster, A. P., Laird, N. M., and Rubin, D. B. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39(1): 1-38.
Knight, K., Nair, A., Rathod, N., and Yamada, K. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 499-506. Sydney.
Stolcke, A. 1998. Entropy-based pruning of backoff language models. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, pages 270-274. Lansdowne, Virginia.

Words and what we should do about them

Every January linguists and dialectologists gather for the annual meeting of the Linguistic Society of America and its sister societies. And, since 1990, attendees crowd into a conference room to vote for the American Dialect Society’s Word Of The Year (or WOTY for short). The guidelines for nominating and selecting the WOTY are deliberately underdetermined. There are no rules about what’s a word (and, increasingly, picks are not even a word under any recognizable definition thereof), what makes a word “of the year” (should it be a new coinage? should its use be vigorous or merely on the rise? should it be stereotyped or notorious? should it reflect the cultural zeitgeist?), or even whether the journalists in the room are eligible to vote.

By my count, there are two major categories of WOTY winners over the last three decades: commentary on US and/or world political events, and technological jargon. I count 14 in the former category (1990’s bushlips, 1991’s mother of all, 2000’s chad, 2001’s 9-11, 2002’s WMD, 2004’s red state/blue state, 2005’s truthiness, 2007’s subprime, 2008’s bailout, 2011’s occupy, 2014’s #blacklivesmatter, 2016’s dumpster fire, 2017’s fake news, and 2018’s tender-age shelter) and 9 in the latter (1993’s information superhighway, 1994’s cyber, 1995’s web, 1997’s millennium bug, 1998’s e-, 1999’s Y2K, 2009’s tweet, 2010’s app, and 2012’s hashtag). But, as Allan Metcalf, former executive secretary of the American Dialect Society, writes in his 2004 book Predicting New Words: The Secrets Of Their Success, terms which comment on a situation—rather than fill some denotational gap—rarely have much of a future. And looking back, some of these picks not only fail to recapitulate the spirit of the era, but many (bushlips, newt, morph, plutoed) barely denote at all. Of those still recognizable, it is shocking how many refer to—avoidable—human tragedies: a presidential election decided by a panel of judges, two bloody US incursions into Iraq and the hundreds of thousands of civilian casualties that resulted, the subprime mortgage crisis and the unprecedented loss of black wealth that resulted, and unchecked violence by police and immigration officers against people of color and asylum-seekers.

Probably the clearest example of this is the 2018 WOTY, tender-age shelter. This ghoulish euphemism was not, in my memory, a prominent 2018 moment, so for the record, it refers to a Trump-era policy of separating asylum-seeking immigrants from their children. Thus, “they’re not child prisons, they’re…”. Ben Zimmer, who organizes the WOTY voting, opined that this was a case of bureaucratic language backfiring, but I disagree: there was no meaningful blowback. The policy remains in place, and the people who engineered it remain firmly in power for the foreseeable future, just as do the architects of and propagandists for the Iraqi invasions (one of whom happens to be a prominent linguist!), the subprime mortgage crisis, and so on. Tender-age shelter is of course by no means the first WOTY that attempts to call out right-wing double-talk, but as satire it fails. There is no premise—it is not even in the common ground that the US linguistics community (or the professional societies that represent it) fervently desires an end to the aggressive detention and deportation of undocumented immigrants, which after all has been bipartisan policy for decades, and will likely remain so until at least 2024—and without this there is no irony to be found. Finally, it bespeaks a preoccupation with speech acts rather than dire material realities.

This is not the only dimension on which the WOTY community has failed to self-criticize. A large number of WOTY nominees (though few outright winners) of the last few years have clear origins in the African-American community (e.g., 2017 nominees wypipo, caucasity, and 🐐, 2018 nominees yeet and weird flex but OK, 2019 nominees Karen and woke). Presumably these terms become notable to the larger linguistics community via social media. It is certainly possible for the WOTY community to celebrate the language of people of color, but it is also possible to read this as exotification. The voting audience, of course, is upper-middle-class and mostly white, and here these “words”, some quite well-established in the communities in which they originate, compete for novelty and notoriety against tech jargon and of-the-moment political satire. As scholars of color have noted, this could easily reinforce standard ideologies that view African-American English as a debased form of mainstream English rather than a rich, rule-governed system in its own right. In other words, the very means by which we as linguists engage in public-facing research risk reproducing linguistic discrimination:

How might linguistic research itself, in its questions, methods, assumptions, and norms of dissemination, reproduce or work against racism? (“LSA Statement on Race”, Charity Hudley & Mallinson 2019)

I conclude that the ADS should issue stringent guidance about what makes expressions “words”, and what makes them “of the year”. In particular, these guidelines should orient voters towards linguistic novelty, something the community is well-situated to assess.

Pynini 2020: State of the Sandwich

I have been meaning to describe some of the work I have been doing on Pynini, our weighted finite-state grammar development platform. For one, while I have been the primary contributor throughout the history of the project (Richard Sproat wrote the excellent path iteration library), we are now also getting many contributions from Lawrence Wolf-Sonkin (a rewrite of the symbol table wrapper, type hints), and lots of usability feedback and bug reports from the Google linguists.

We are currently on Pynini release 2.1.1. Here are some new features/improvements from the last few releases:

  • 2.0.9: Adds an efficient multi-argument union.
  • 2.0.9: Pynini (and the rest of OpenGrm) is available on Conda via Conda-Forge. This means that for most users, there is no longer any need to compile Pynini by hand; instead, Pynini is compiled (for a variety of platforms) in the cloud using a continuous integration framework.
  • 2.1.0: Rewrites the string compiler so that symbol tables are no longer attached to compiled FSTs, eliminating the need for expensive symbol table merging and relabeling options.
  • 2.1.0: Rewrites the FST and symbol table class hierarchies to better reflect the organization of lower-level APIs.
  • 2.1.1: Adds PEP 484/PEP 561-compatible type stubs.

We also have removed or renamed quite a few features (a brief usage sketch follows the list):

  • stringify is renamed string.
  • text is renamed print (cf. the command-line tool fstprint).
  • The defaults struct is removed, though it may be reintroduced as a context manager at some point.
  • The * infix operator, previously used for composition, is removed; use @ instead.
  • transducer’s arguments input_token_type and output_token_type are merged as token_type.
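
For instance, here is a tiny usage sketch of my own (not from the release notes) showing the renamed operator and method:

import pynini

greetings = pynini.union("hello", "hi").optimize()
# The @ operator now spells composition (formerly *)...
match = pynini.accep("hi") @ greetings
# ...and string() now spells what stringify() used to.
print(match.string())  # "hi"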

Finally, we have broken Python 2.7 compatibility as of 2.1.0; pywrapfst, the lower-level API, still has some degree of Python 2.7 compatibility, but this is probably the last release to maintain that property.

Idealizations gone wild

Generative grammar and information theory are products of the US post-war defense science funding boom, and it is no surprise that the former attempted to incorporate insights from the latter. Many early ideas in generative phonology—segment structure and morpheme structure rules and constraints (Stanley 1967), the notion of the evaluation metric (Aspects, §6), early debates on opacity, conspiracies, and the alternation condition—are clearly influenced by information theory. It is interesting to note that, as early as 1975, Morris Halle regarded his substantial efforts in this area as having been a failure.

In the 1950’s I spent considerable time and energy on attempts to apply concepts of information theory to phonology. In retrospect, these efforts appear to me to have come to naught. For instance, my elaborate computations of the information content in bits of the different phonemes of Russian (Cherry, Halle & Jakobson 1953) have been, as far as I know, of absolutely no use to anyone working on problems in linguistics. And today the same negative conclusion appears to me to be warranted about all my other efforts to make use of information theory in linguistics. (Halle 1975: 532)

Thus the mania for information theory in early generative grammar was exactly the sort of bandwagon effect that Claude Shannon, the inventor of information theory, had warned about decades earlier.

In the first place, workers in other fields should realize that the basic results of the subject are aimed at a very specific direction, a direction that is not necessarily relevant to such fields as psychology, economics, and other social sciences. (Shannon 1956)

Today, however, information theory is not exactly in disrepute in linguistics. First off, perplexity, a quantity derived from information theory, is used as an intrinsic evaluation metric for certain natural language processing tasks, particularly language modeling.1 Secondly, there have been attempts to revive information-theoretic notions as explanatory factors in the study of phonology (e.g., Goldsmith & Riggle 2012) and human morphological processing (e.g., Moscoso del Prado Martı́n et al. 2004). And recently, Mollica & Piantadosi (2019; henceforth M&P) dare to use information theory to measure the size of the grammar of English.
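
For readers who have not encountered it, perplexity is just the exponentiated average negative log-probability a model assigns to held-out data; a minimal illustration of my own:

import math

def perplexity(log_probs):
    """Computes perplexity from the natural-log probabilities a model
    assigns to each token of a held-out sample; lower is better."""
    return math.exp(-sum(log_probs) / len(log_probs))

# A model assigning each of four tokens probability 0.1 has perplexity 10.
assert round(perplexity([math.log(0.1)] * 4)) == 10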

M&P’s program is fundamentally one of idealization. Now, I don’t have any problem per se with idealization. Idealization is an important part of the epistemic process in science, one without which there can be no scientific observation at all. Critics of idealizations (and of idealization itself) are usually concerned with the things an idealization abstracts away from; for instance, critics of Chomsky’s famous “ideal speaker-listener” (Aspects, p. 3f) note correctly that it ignores bilingual interference, working memory limitations, and random errors. But idealizations are not merely the infinitude of variables they choose to ignore (and when the object of study is an enormously complex polysensory, multifactorial system like the human capacity for language, one is simply not going to be able to study the entire system all at once); they are just as much defined by the factors they foreground and the affordances they create, and the constraints they impose on scientific inquiry.

In this case, an information-theoretic characterization of grammars constrains us to conceive of our knowledge of language in terms of probability distributions. This is a step I am often uncomfortable with. It is, for example, certainly possible to conceive of speakers’ lexical knowledge as a sort of probability distribution over lexical items, but I am not sure that P(word) has much grammatical work to do except act as a model of the readily apparent observation that more frequent words can be recalled and recognized more rapidly than rare words. To be sure, studies like the aforementioned one by Moscoso del Prado Martı́n et al. attempt to connect information-theoretic characterizations of the lexicon to behavioral results, but these studies are correlational and provide little in the way of mechanistic-causal explanation.

However, for the sake of argument, let us assume that the probabilistic characterization of grammatical knowledge is coherent. Why then should it be undertaken? M&P claim that the measurements they will allow—grammar sizes, measured in bits—weigh in on a familiar debate. As they frame it:

…is the amount of information about language that is learned substantial (empiricism) or minimal (nativism)?

I don’t accept the terms of this debate. While I consider myself a nativist, I have formed no opinions about how many bits it takes to represent the grammar of English, which is by all accounts a rather complex object. The tradeoff between what is to be learned and what is innate is something that has been given extensive consideration in the nativist literature. Nativists recognize that the less there is to be learned, the more that has to have evolved in the rather short amount of time (in evolutionary terms) since we humans split off from our language-lacking primate cousins. But this tradeoff is strictly qualitative; were it possible to satisfactorily measure both evolutionary plausibility and grammar size, they would still be incommensurate quantities.

M&P proceed by computing the number of bits for various linguistic subsystems. They compute the information associated with phonemes (really, the acoustic cues to various features), the phonemic representation of wordforms, lexical semantics (mappings from words to meanings, here represented as a vector space as is the fashion), word frequency, and finally syntax. For each of these they provide lower bounds and upper bounds, though the upper bounds are in some cases constructed by adding an ad-hoc factor-of-two error to the lower bound. Finally, they sum these quantities, giving an estimate of roughly 1.5 megabytes. This M&P consider to be substantial. It is not at all clear why they feel this is the case, or how small a grammar would have to be to be “minimal”.

There is a lot to complain about in the details of M&P’s operationalizations. First, I am not certain that the systems they have identified are well-defined modules that would be recognizable to working linguists; for instance, their phonemes module has next to nothing to do with my conception of phonological grammar. Second, it seems to me that by summing the bits needed to characterize each module, they are assuming a sort of “feed-forward”, non-interactive relationship between these components, and it is not clear that this is correct; for example, there are well-understood lexico-semantic constraints on verbs’ argument structure.

While I do not wish to go too far afield, it may be useful to consider in more detail their operationalization of syntax. For this module, they use a corpus of textbook example sentences, then compute the number of possible unlabeled binary-branching trees that would cover each example. (This quantity is given by a Catalan number.) To turn this into a probability, they assume that the one correct parse has been sampled from a uniform distribution over all possible binary trees for the given sentence. First, this assumption of uniformity is completely unmotivated. Second, since they assume there is exactly one possible bracketing, and do not provide labels for non-terminals, they have no way of representing the ambiguity of sentences like Call John an ambulance. (Thanks to Brooke Larson for suggesting this example.) Anyone familiar with syntax will have no problem finding gaping faults with this operationalization.2
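
To make the arithmetic concrete, here is a quick sketch (mine, not M&P’s code) of the tree-counting step; with zero-based indexing, an n-word sentence has as many unlabeled binary-branching trees as the (n - 1)th Catalan number.

import math

def catalan(n):
    """The nth Catalan number: C(2n, n) / (n + 1)."""
    return math.comb(2 * n, n) // (n + 1)

def parse_bits(num_words):
    """Bits to pick one tree uniformly at random from all unlabeled
    binary-branching trees over the words."""
    return math.log2(catalan(num_words - 1))

# A 10-word sentence has 4,862 such trees, i.e., about 12.25 bits per parse.
print(catalan(9), parse_bits(10))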

M&P justify all this hastiness by comparing their work to the informal estimation approach known as a Fermi problem (they call them “Fermi calculations”). In the original framing, the quantity being estimated is the product of many terms, so assuming errors in estimation of each term are independent, the final estimate’s error is expected to grow logarithmically as the number of terms increases (roughly, this is because the logarithm of a product is equal to the sum of the logarithms of its terms). But in M&P’s case, the quantity being estimated is a sum, so the error will grow much faster, i.e., linearly as a function of the number of terms. Perhaps, as one reviewer writes, “you have to start somewhere”. But do we? If something is not worth doing well—and I would submit that measuring grammars, in all their richness, by comparing them to the storage capacity of obsolete magnetic storage media is one such thing—it seems to me to be not worth doing at all.

Footnotes

  1. Though not without criticism; in speech recognition, probably the most important application of language modeling, it is well-known that decreases in perplexity don’t necessarily give rise to decreases in word error rate.
  2. Why do M&P choose such a degenerate version of syntax? Because syntactic theory is “experimentally under-determined” and they want to be “independent as possible from the specific syntactic formalism.”

References

Cherry, E. C., Halle, M., and Jakobson, R. 1953. Towards the logical description of languages in their phonemic aspect. Language 29(1): 34-46.
Chomsky, N. 1965. Aspects of the theory of syntax. Cambridge: MIT Press.
Goldsmith, J. and Riggle, J. 2012. Information theoretic approaches to phonology: the case of Finnish vowel harmony. Natural Language & Linguistic Theory 30(3): 859-896.
Halle, M. 1975. Confessio grammatici. Language 51(3): 525-535.
Mollica, F. and Piantadosi, S. P. 2019. Humans store about 1.5 megabytes of information during language acquisition. Royal Society Open Science 6: 181393.
Moscoso del Prado Martı́n, F., Kostić, A., and Baayen, R. H. 2004. Putting the bits together: an information theoretical perspective on morphological processing. Cognition 94(1): 1-18.
Shannon, C. E. 1956. The bandwagon. IRE Transactions on Information Theory 2(1): 3.
Stanley, R. 1967. Redundancy rules in phonology. Language 43(2): 393-436.

Elizabeth Warren and the morality of the professional class

I am surprised by the outpouring of grief engendered by Senator Elizabeth Warren’s exit from the presidential primary among my professional friends and colleagues. I dare not tell them how they ought to feel, but the spectacle of grief makes me wonder whether my friends are selling themselves short: virtually all of them have lived, in my opinion, far more virtuous lives than the senator from Massachusetts.

First off, none of them have spent most of their professional lives as right-wing activists, as did Warren, a proud Republican until the late ’90s. As recently as 1991, Warren gave a keynote at a meeting of the Federalist Society, the shadowy anti-choice legal organization that gave us Justice Brett Kavanaugh and so many other young ultra-conservative judicial appointees.

Secondly, Warren spent decades lying about her Cherokee heritage, presumably for nothing more than professional gain. This is a stunningly racist personal behavior, one that greatly reinforces white supremacy by equating the almost-unimaginable struggles of indigenous peoples with plagiarized recipes and “high cheekbones”. Were any of my friends or colleagues caught lying so blatantly on a job application, they would likely be subject to immediate termination. It is shocking that Warren has not faced greater professional repercussions for this lapse in judgment.

Warren’s more recent history of regulatory tinkering around the most predatory elements of US capitalism, while important, is hardly an appropriate penance for these two monumental personal-professional sins.

On the not-exactly-libfixes

In an early post I noted the existence of libfix-like elements where the newly liberated affix mirrors existing—though possibly semantically opaque—morphological boundaries. The example I gave was that of -giving, as in Spanksgiving and Friendsgiving. Clearly, this comes from Thanksgiving, which is etymologically (if not also synchronically) a compound of the plural noun Thanks and the gerund/progressive giving. It seems some morphological innovation has occurred because this gives rise to new coinages and the semantics of -giving is more circumscribed than the free stem giving: it necessarily refers to a harvest-time holiday, not merely to “giving”.

At the time I speculated that it was no accident that the morphological boundaries of the new libfix mimic those of the compound. Other examples I have since collected include mare (< nightmare; e.g., writemare, editmare); core (< hardcore; e.g., nerdcore, speedcore) and step (< two-step; e.g., breakstep, dubstep), both of which refer to musical genres (Zimmer & Carson 2012); gate (< Watergate; e.g., Climategate, Nipplegate, Troopergate) and stock (< Woodstock; e.g., Madstock, Calstock), both extracted from familiar toponyms; and position (< exposition; e.g., sexposition, craposition), for which the most likely source can be analyzed as a Latinate “level 1” prefix attached to a bound stem. So, what do we think? Are these libfixes too? Does it matter that recutting mirrors the etymological—or even synchronic—segmentation of the source word?

References

B. Zimmer and C. E. Carson. 2012. Among the new words. American Speech 87(3): 350-368.

tfw it’s not prescriptivism

I think it would be nice to have a term that allowed us to distinguish between politely asking that we preserve existing useful lexical distinctions (such as that between terrorism ‘non-state violence against civilians intended to delegitimize the state’ and terms like atrocities or war crimes, or that between selfie ‘photo self-portrait’ and photo portrait) and full-blown ideologically-driven prescriptivism. I do not have a proposal for what this term ought to be.

Libfix report for December 2019

A while ago I acquired a dictionary of English blends (Thurner 1993), and today I went through it looking for candidate libfixes I hadn’t yet recorded. Here are a few I found. From burlesque, we have lesque, used to form both boylesque and girlesque. The kumquat gives rise to quat, used in two (literal) hybrid fruits: citrangequat and limequat. From melancholy comes choly, used to build solemncholy ‘a solemn or serious mood’ and the unglossable lemoncholy. From safari there is fari, used to build seafari, surfari, and even snowfari. Documentary has given rise to mentary, as in mockumentary and rockumentary.

An interesting case is that of stache. While stache is a common clipping of mustache, it is frequently used as an affix as well, as in the liquid-based beerstache and milkstache and the pejorative fuckstache and fuzzstache.

I also found a number of libfix-like elements that can plausibly be analyzed as affixes rather than cases of “liberation”. Some examples are eteer (blacketeer, stocketeer), legger (booklegger, meatlegger), and logue (duologue, pianologue, travelogue). I do not think these are properly defined as libfixes (they are a bit like -giving) but I could be wrong.

References

D. Thurner (1993). The Portmanteau Dictionary: Blend Words in the English Language, Including Trademarks and Brand Names. McFarland & Co.

A theory of error analysis

Manual error analyses can help to identify the strengths and weaknesses of computational systems, ultimately suggesting future improvements and guiding development. However, they are often treated as an afterthought or neglected altogether. In three recent papers, my collaborators and I have been slowly developing what might be called a theory of error analysis. The systems evaluated include:

  • number normalization (Gorman & Sproat 2016); e.g., mapping 97000 onto quatre vingt dix sept mille,
  • inflection generation (Gorman et al. 2019); e.g., mapping pairs of citation form and inflectional specification like (aufbauen, V;IND;PRS;2) onto inflected forms like baust auf, and
  • grapheme-to-phoneme conversion (Lee et al., under review); e.g., mapping orthographic forms like almohadilla onto phonemic or phonetic forms like /almoaˈdiʎa/ and [almoaˈðiʎa].

While these are rather different types of problems, the systems all have one thing in common: they generate linguistic representations. I discern three major classes of error such systems might make.

  • Target errors are only apparent errors; they arise when the gold data, the data to be predicted, is linguistically incorrect. This is particularly likely to arise with crowd-sourced data, though such errors are also present in professionally annotated resources.
  • Linguistic errors are caused by misapplication of independently attested linguistic behaviors to the wrong input representations.
    • In the case of number normalization, these include using the wrong agreement affixes in Russian numbers; e.g., nom.sg. *семьдесят миллион for gen.pl. семьдесят миллионов ‘seventy million’ (Gorman & Sproat 2016:516).
    • In inflection generation, these are what Gorman et al. (2019) call allomorphy errors; for instance, overapplying ablaut to the Dutch weak verb printen ‘to print’ to produce a preterite *pront instead of printte (Gorman et al. 2019:144).
    • In grapheme-to-phoneme conversion, these include failures to apply allophonic rules; e.g., in Korean, 익명 ‘anonymity’ is incorrectly transcribed as [ikmjʌ̹ŋ] instead of [iŋmjʌ̹ŋ], reflecting a failure to apply a rule of obstruent nasalization not indicated in the highly abstract hangul orthography (Lee et al., under review).
  • Silly errors are those errors which cannot be analyzed as either target errors or linguistic errors. These have long been noted as a feature of neural network models (e.g., Pinker & Prince 1988, Sproat 1992:216f. for discussion of *membled) and occur even with modern neural architectures.

I propose that this tripartite distinction is a natural starting point when building an error taxonomy for many other language technology tasks, namely those that can be understood as generating linguistic sequences.
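
For concreteness, the taxonomy could be encoded as simply as the following (a hypothetical sketch of my own, not from any of the papers above):

import enum

class ErrorClass(enum.Enum):
    """The proposed tripartite error taxonomy."""
    TARGET = "gold data is itself linguistically incorrect"
    LINGUISTIC = "attested process misapplied to the wrong input"
    SILLY = "neither a target error nor a linguistic error"

# E.g., annotating the Dutch inflection error discussed above:
error = ("printen", "*pront", ErrorClass.LINGUISTIC)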

References

K. Gorman, A. D. McCarthy, R. Cotterell, E. Vylomova, M. Silfverberg, and M. Markowska (2019). Weird inflects but OK: making sense of morphological generation errors. In CoNLL, 140-151.
K. Gorman and R. Sproat (2016). Minimally supervised number normalization. Transactions of the Association for Computational Linguistics 4: 507-519.
J. L. Lee, L. F. E. Ashby, M. E. Garza, Y. Lee-Sikka, S. Miller, A. Wong, A. D. McCarthy, and K. Gorman (under review). Massively multilingual pronunciation mining with WikiPron.
S. Pinker and A. Prince (1988). On language and connectionism: analysis of a parallel distributed processing model of language acquisition. Cognition 28(1–2):73–193.
R. Sproat (1992). Morphology and computation. Cambridge: MIT Press.

Action, not ritual

It is achingly apparent that an overwhelming amount of research in speech and language technologies considers exactly one human language: English. This is done so unthinkingly that some researchers seem to see the use of English data (and only English data) as obvious, so obvious as to require no comment. This is unfortunate in part because English is, typologically speaking, a bit of an outlier. For instance, it has uncommonly impoverished inflectional morphology, a particularly rigid word order, and a rather large vowel inventory. It is not hard to imagine how lessons learned designing for—or evaluating on—English data might not generalize to the rest of the world’s languages. In an influential paper, Bender (2009) encourages researchers to be more explicit about the languages studied, and this, framed as an imperative, has come to be called the Bender Rule.

This “rule”, and the observations underlying it, have taken on an almost mythical status. Citing them can easily be seen as a ritual granting the authors a dispensation to continue their monolingual English research. But this is a mistake. English hegemony is not merely bad science, nor is it a mere scientific inconvenience—a threat to validity.

It is no accident of history that the scientific world is in some sense an English colony. Perhaps you live in a country that owes an enormous debt to a foreign bank, and the bankers are demanding cuts to social services or reduction of tariffs: then there’s an excellent chance the bankers’ first language is English and that your first language is something else. Or maybe, fleeing the chaos of austerity and intervention, you find yourself and your children in cages in a foreign land: chances are you are in Yankee hands. And it is no accident that the first large-scale treebank is a corpus of English rather than of Delaware or Nahuatl or Powhatan or even Spanish, nor that the entire boondoggle was paid for by the largest military apparatus the world has ever known.

Such material facts respond to just one thing: concrete actions. Rituals, indulgences, or dispensations will not do. We must not confuse the act of perceiving and naming the hegemon with the far more challenging act of actually combating it. It is tempting to see the material conditions dualistically, as a sin we can never fully cleanse ourselves of. But they are the past, and a more equitable world is only to be found in the future, a future of our own creation. It is imperative that we—as a community of scientists—take steps to build the future we want.

References

Bender, Emily M. 2009. Linguistically naïve != language independent: why NLP needs linguistic typology. In EACL Workshop on the Interaction Between Linguistics and Computational Linguistics, pages 26-32.