arXiv vs. LingBuzz

In the natural language processing community, there has been a bit of a kerfuffle about the ACL preprint policy, which essentially prevents you from submitting a manuscript to preprint aggregation websites like arXiv while it is under review for a conference. I personally think this is a good policy: double-blind review is really important for fairness. This led me to reflect a bit on the outsized role that arXiv plays in natural language processing research. It is interesting to contrast arXiv with LingBuzz, a preprint aggregator for formal linguistics research.1

arXiv is visually ugly and cluttered, expensive (it somehow takes over $800,000 of the Simons Foundation’s money to run it every year), and submissions are subject to detailed, strict, carefully enforced editorial guidelines. In contrast, LingBuzz has a minimalistic text interface, is run and operated by a single professor (Michael Starke at the University of Tromsø), and its editorial guidelines are simple (they fit on a single page) and laxly enforced (mostly after the fact). Despite the laissez-faire attitude at LingBuzz, it has seen some rather contentious debates involving the usual trollish suspects (Postal, Everett, Behme, etc.), but it has managed to keep things under control.

What I really love about LingBuzz is that, unlike arXiv, no linguist is under the impression that it is any sort of substitute for peer review, or that authors need to know about (and cite) late-breaking work available only on LingBuzz. I think NLP researchers should take a hint from this and stop pretending arXiv is a reasonable alternative to peer review.

Endnotes

1. There are a few other such repositories. The Rutgers Optimality Archive (ROA) was once a popular repository for pre-prints of Optimality Theory work, but its contents are re-syndicated on LingBuzz and Optimality Theory is largely dead anyways. There is also the Semantics Archive.

Text encoding issues in Universal Dependencies

Do you know why the following comparison (in Python 3.7) fails?

>>> s1 = "ड़"
>>> s2 = "ड़"
>>> s1 == s2
False

I’ll give you a hint:

>>> len(s1)
1
>>> len(s2)
2

Despite the two strings rendering identically, they are encoded differently. The string s1 is a single-codepoint sequence, whereas s2 contains two codepoints. Thus string comparison fails, whether it’s done at the level of bytes or of Unicode codepoints.

Some NLP researchers are aware of issues arising from faulty string encoding. Eckhart de Castilho (2016), for example, describes a tool that automatically identifies misencoded pre-trained data, and Wu & Yarowsky (2018) report that an existing transliteration tool fails on certain languages because of encoding problems. However, I suspect that far fewer NLP researchers are familiar with the problem shown above, which is specific to Unicode normalization. To put it simply, Unicode defines four normalization forms (and associated conversion algorithms) for strings, and the key distinction is between “composed” and “decomposed” forms of characters (using that term in a pretheoretic sense). The string s1 is composed into a single Unicode codepoint; s2 is decomposed into two.
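You can see the difference directly by asking the standard library’s unicodedata module to name each codepoint, continuing the Python session from above:

>>> import unicodedata
>>> [unicodedata.name(c) for c in s1]
['DEVANAGARI LETTER DDDHA']
>>> [unicodedata.name(c) for c in s2]
['DEVANAGARI LETTER DDA', 'DEVANAGARI SIGN NUKTA']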

Unfortunately, three columns of the Hindi Dependency Treebank (hi_hdtb, commit 54c4c0f; Bhat et al. 2017, Palmer et al. 2009) have a chaotic mix of composed and decomposed representations. It seems most if not all of these have to do with the encoding of the six nuqta (‘dot’) consonants, which are usually found in borrowings from Arabic or Persian (via Urdu, presumably). In Devanagari these consonants are written by adding a dot to a phonetically similar native consonant; for instance, ड [ɖə] plus the nuqta produces ड़ [ɽə]. As is usually the case in Unicode, there is more than one way to do it: you can either encode ड़ as a composed character (U+095C DEVANAGARI LETTER DDDHA) or as the native Devanagari character (U+0921 DEVANAGARI LETTER DDA) plus a combining character (U+093C DEVANAGARI SIGN NUKTA). In practical terms, this means that strings containing different encodings of <ṛa> (as it is sometimes transliterated) will be treated as totally separate during training and evaluation, except on the off chance that all associated tools perform Unicode normalization ahead of time.
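In Python, the repair is a one-liner using the same unicodedata module: project both strings into the same normalization form before comparing. A minimal sketch, with the two encodings written out as escape sequences for clarity:

import unicodedata

s1 = "\u095c"          # composed: U+095C
s2 = "\u0921\u093c"    # decomposed: U+0921 + U+093C
assert s1 != s2        # the raw codepoint sequences differ
# After normalization, both sides have the same codepoint sequence.
assert unicodedata.normalize("NFC", s1) == unicodedata.normalize("NFC", s2)
assert unicodedata.normalize("NFKC", s1) == unicodedata.normalize("NFKC", s2)

Any of the four forms will do here, so long as both sides get the same one. Of course, this only helps if every tool in the pipeline actually does it.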

This does have negative consequences for NLP. Consider the UDPipe system (Straka & Straková 2017) from the CoNLL 2017 shared task on dependency parsing (Zeman et al. 2017), for which the primary metric is labeled attachment score (LAS). I first attempted to replicate the UDPipe results for the Hindi Dependency Treebank. Using UDPipe 1.2.0, word2vec (commit 20c129a), the hyperparameters given in the authors’ supplementary materials, and the official evaluation script, I obtain LAS = 87.09 on the “gold tokenization” subtask. However, I can improve this simply by converting the training, development, and test data to a consistent normalization form, like so:

# Convert each treebank file to NFKC normalization, in place.
for FILE in *.conllu; do
    TMPFILE="$(mktemp)"
    uconv -x nfkc "${FILE}" > "${TMPFILE}"
    mv "${TMPFILE}" "${FILE}"
done

and then retraining. Here I have chosen to apply the NFKC (“compatibility composed”) normalization form. While Zeman et al. do not discuss the encoding of the labeled Universal Dependencies data, they do mention that they apply NFKC normalization to the additional raw data. But in this case it doesn’t really matter which form you choose, so long as you are consistent. After retraining, I obtain LAS = 87.38, or .29 points for free. I also ran a “mismatch” experiment, in which the training and testing data have different normalization forms; naturally, this causes a slight degradation, to LAS = 86.98.

Straka & Straková (2017) report a separate set of experiments in which they rebalanced the training-development-test splits. Just to be sure, I repeated the above experiments using their original rebalancing script. With the baseline—mixed normalization—data, I can replicate their result exactly: LAS = 87.30. With a consistent NFKC normalization of training, development, and test data, I get LAS = 87.50. And with a normalization mismatch between training and test data, I get LAS = 87.07, a slight degradation. Once again, the improvement is more or less free.

While I have not yet done a systematic audit, I found three other UD treebanks with encoding issues. The ar_padt treebank has a non-canonical ordering of combining characters in the lemma column (the shaddah, which indicates geminates, should come before the fathah and not the other way around), but this is unlikely to have any major effect on model performance because the non-canonical ordering is used consistently. The ko_kaist and ur_udtb treebanks also have minor inconsistencies.
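If you want to check a treebank yourself, here is a minimal sketch; the function name and the line-by-line approach are mine, not part of any official UD tooling. It reports lines that are not already in the normalization form you specify:

import unicodedata

def unnormalized_lines(path, form="NFKC"):
    """Yields (line number, line) pairs not already in the given form."""
    with open(path, encoding="utf-8") as source:
        for lineno, line in enumerate(source, 1):
            if unicodedata.normalize(form, line) != line:
                yield lineno, line

Running this over a treebank’s .conllu files with both NFC and NFKC gives a quick sense of whether its encoding is internally consistent.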

Unfortunately my corporate overlord doesn’t permit me to file a pull request here because the Hindi data is released under a CC BY-NC-SA license. But if you’re not so constrained, feel free to do so, and ping this thread once you have! And pay attention in the future.

References

Bhat, R. A., Bhatt, R., Farudi, A., Klassen, P., Narasimhan, B., Palmer, M., Rambow, O., Sharma, D. M., Vaidya, A., Vishnu, S. R., and Xia, F. 2017. The Hindi/Urdu Treebank Project. In Ide, N., and Pustejovsky, J. (eds.), The Handbook of Linguistic Annotation, pages 659-698. Springer.
Eckhart de Castilho, R. 2016. Automatic analysis of flaws in pre-trained NLP models. In 3rd International Workshop on Worldwide Language Service Infrastructure and 2nd Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies, pages 19-27.
Palmer, M., Bhatt, R., Narasimhan, B., Rambow, O., Sharma, D. M., and Xia, F. 2009. Hindi syntax: Annotating dependency, lexical predicate-argument structure, and phrase structure. In ICON, pages 14-17.
Straka, M., and Straková, J. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99.
Wu, W. and Yarowsky, D. 2018. A comparative study of extremely low-resource transliteration of the world’s languages. In LREC, pages 938-943.
Zeman, D., Popel, M., Straka, M., Hajič, J., Nivre, J., Ginter, F., … and Li, J. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies. In CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-19.

Lessons learned from my time at Google

  • C++11 is a powerful, elegant language and the right choice for performant general-purpose code. Bash is an excellent lingua franca for chaining together a long series of commands. Python is best for everything else.
  • Data should be passed around in schematic form, with a compact serialization over the wire and a human-readable format at rest. Protocol buffers (and the lesser-known text format) are an ideal cross-language solution.
  • Grammar development is more important than model building.
  • Model building is easier than deployment.
  • Whiteboards are useful.
  • I can only do certain sorts of work without an office (yes, that thing with a door).

A minimalist project design for NLP

Let’s say you want to build a new tagger, a new named entity recognizer, a new dependency parser, or whatever. Or perhaps you just want to see how your coreference resolution engine performs on your new database of anime reviews. So how should you structure your project? Here’s my minimalist solution.

There are two principles that guide my design. The first one is modularity. Some of these components will get run many times, some won’t. If you’re doing model comparison—and you should be doing model comparison—some components will get swapped out with someone else’s code. This sort of thing is a major lift unless you opt for modularity. The second principle is filesystem state. The filesystem is your friend. If your embedding table eats up all your RAM and you have to restart, the filesystem will be in roughly the same state as when you left. The filesystem allows you to organize things into directories and subdirectories, and give the pieces informative names; I like to record information about datasets and hyperparameter values in my file and directory names. So without further ado, here are the recommended scripts or applications to create when you’re starting off on a new project.

  1. split takes the full dataset and a random seed (which you should store for later) as input. The script reads the data in, randomly shuffles it, and then splits it into an 80% training set, a 10% development set, and a 10% test (i.e., evaluation) set, which it then outputs; see the sketch at the end of this post. If you’re comparing to prior work that used a “standard split” you may want a separate script that generates that split too, but I strongly recommend using randomly generated splits.
  2. train takes the training set as input and outputs a model file or directory. If you’re automating hyperparameter tuning you will also want to provide the development set as input; if not you will probably want to either add a bunch of flags to control the hyperparameters or allow the user to pass some kind of model configuration file (I like YAML for this).
  3. apply takes as input the model file(s) produced in (2) and the test set, and applies the model to the data, outputting a new hypothesized test data set (i.e., the model’s predictions). One open question is whether this ought to take only unlabeled data or should overwrite the existing labels: it depends.
  4. evaluate takes as input the gold test set and the hypothesized test data set generated in (3) and outputs the evaluation results (as text or in some structured data format—sometimes YAML is a good choice, other times TSV files will do). I recommend you test this with a small amount of data first.

That’s all there is to it. When you begin doing model comparison you may find yourself swapping out (2-3) for somebody else’s code, but make sure you still stick to the same evaluation script.
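To make (1) concrete, here is a minimal sketch. The 80/10/10 logic and the stored seed are as described above; the flag names, the output filenames, and the assumption of one example per line are illustrative choices of mine rather than a fixed recipe:

#!/usr/bin/env python
"""Splits a dataset into 80% training, 10% development, and 10% test sets."""

import argparse
import random


def main(args):
    random.seed(args.seed)  # Store this seed so the split is reproducible.
    with open(args.input, encoding="utf-8") as source:
        examples = source.readlines()
    random.shuffle(examples)
    n = len(examples)
    splits = [
        ("train.txt", examples[: int(0.8 * n)]),
        ("dev.txt", examples[int(0.8 * n) : int(0.9 * n)]),
        ("test.txt", examples[int(0.9 * n) :]),
    ]
    for path, chunk in splits:
        with open(path, "w", encoding="utf-8") as sink:
            sink.writelines(chunk)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("input", help="path to the full dataset, one example per line")
    parser.add_argument("--seed", type=int, required=True, help="random seed")
    main(parser.parse_args())

Invoked as, say, python split.py reviews.txt --seed=212, it writes the three splits to the working directory; train and the rest of the pipeline then consume those files.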

I read “Language: The Cultural Tool”. You’ll never guess what happened next.

I recently obtained a copy of Daniel Everett’s pop-science paperback Language: The Cultural Tool (2012) from the Brooklyn Public Library. The chunky fonts of the cover made me think I was about to enter the world of a staunch iconoclast. But what I actually found was a laundry list of what you might call “grievance studies”—if that didn’t already mean something else—against a broadly generativist conception of language.

Everett, once a specialist in languages of the Amazon, draws not so much from niche fieldwork as from splashy papers by non-linguists in high-impact pop-science journals like Nature and Science. Thanks to my colleague Richard Sproat, I have seen how those august organizations make their sausage: they either don’t let linguists referee, or if they do, they simply ignore their negative reviews. (Everett, as it happens, has glowing things to say about the latter paper even though it has nothing in particular to do with his titular thesis.) In general, the works cited draw from disparate areas that have received relatively little attention from specialists, so while Everett is a decent prose stylist,1 he is tilting at windmills for much of the book.

Everett often substitutes appeals to authority for actual arguments. For instance:2

Michael Tomasello, the Director of Psycholinguistics at the Max Planck Institute for Evolutionary Anthropology in Leipzig, says exactly this. A world leader in the study of cognitive development in canines and primates, including humans, he says simply ‘Universal grammar is dead.’ It was a good idea. It didn’t pan out. (p. 192)

That’s all we get on that point.

The other thing that struck me was the elementary factual errors that would have been cleaned up had literally any other linguist read the book before it went to press. Early on, Everett is discussing definitions of language. After describing the proposed definitions by Sweet and by Bloch and Trager, he quotes (p. 32) a passage from Noam Chomsky (the reference is neither given nor known to me):

A formal language is a (usually infinite) set of sequences of symbols (such sequences are “strings”) constructed by applying production rules to another sequence of symbols which initially contains just the start symbol.

Now, obviously this is not a definition of language as we understand it, but rather the start of a definition of the mathematical construct formal language, a notion which predates Chomsky by at least half a century. Everett is either deeply confused or is deliberately misleading his readers.3 The second howler I found is the following passage, now from Everett himself:

The late Professor George Zipf of Howard University formulated an explanation of the relative lengths of words that has come to be known as ‘Zipf’s Law.’ His law predicts that more frequent words will be shorter than less frequent words. (p. 106)

George Kingsley Zipf taught at Harvard University, not Howard University, and that’s not what Zipf’s Law denotes.4

There are other factual errors, too. For instance, we’re told that ejectives are not found in European languages, which is only true if we don’t consider Armenian, Georgian, etc. to be languages of Europe (p. 177). And Xhosa is described as a Khoisan language when in fact it’s Bantu (p. 178).

And there’s the casually racist, classist, and sexist stuff. For instance, Everett posits that the Pirahã lack a theory of mind:

…many Pirahãs used to stare at me (some children still do) and talk about me in front of me—they didn’t believe I had a mind! (p. 165)

Okay. But maybe they were surprised rather than mentally deficient.

Later, Everett tells us:

…for many Ohio factory workers being overweight is less of a moral problem and more of a health problem—they do not value being at the right weight all that highly. (p. 300)

Okay. But the factories pretty much all closed down in Ohio years ago.

We’re told that in Wari’, a language of the Amazon, the word for ‘wife’, manaxi’, means literally ‘our hole’ or ‘our vagina’. Everett suggests that “some outsiders”—let’s call them “the libs”—might “jump to the facile conclusion that this is a crude and demeaning comparison”. What’s the right analysis, though?

Perhaps to the Wari’ reproduction and the family are such important values that they honor the wife and the vagina as the source of life. So it is the highest form of flattery to call the wife ‘our vagina’, the source of life. Is this a possible conclusion? Yes. Is it the right one? I don’t know. No one can know unless they undertake a systematic analysis of Wari’ culture… (p. 195)

Okay. But maybe Everett could have just asked his coauthor Barbara Kern, an anthropologist who lived among the Wari’ for over forty years and who speaks their language fluently.

Finally, we’re told Banawá, another language of the Amazon, uses feminine as the default gender. Everett then proceeds to describe what I would call a (from a non-relativist perspective) brutal and essentializing coming-of-age ritual for pubescent Banawá girls. Are these facts related?

It is exactly by exploring such cultural values that we would try to build a connection between feminine identity and grammar in Banawá and other Arawan languages. I have not yet established such a link, but I am working on this. (p. 210).

Okay.

Footnotes

  1. Despite his affection for cheery-dreary Boomer cultural touchstones, that is. In the first few chapters he mentions “Under The Boardwalk”, the music of Cream, the plot of an episode of The Andy Griffith Show, and the murder trial of Phil Spector. Sorry, but I already have a Dad.
  2. For the record, this also gets Tomasello’s title wrong: he was “Co-director” of the Institute, not “the Director of Psycholinguistics”.
  3. As a colleague pointed out, Everett himself is a coauthor on a paper (Futrell et al. 2016) that claims that Pirahã, an Amazonian language, can be described by a regular language. This suggests that Everett understands the distinction between human languages, of which Pirahã is an instantiation, and formal languages, of which the regular languages are an instantiation, and is simply being disingenuous here. For what it’s worth, the argument in that paper is incoherent: the authors simply observe that their corpus can be described by a regular language, but so can any finite sample, so the observation is vacuous. That said, the study is not totally without value: the appendix contains an annotated corpus of Pirahã sentences.
  4. Zipf does observe something of the sort in his 1935 book The Psycho-biology of Language (p. 28f.), but “Zipf’s law” does not refer to word length at all.

The libfixes -pire, -spire, and -cuck

[CW: distasteful ideologies.]

A student at CUNY, Emily Campbell, recently brought two libfixes to my attention.

The first is -pire, presumably extracted from empire and found in the blend Fempire (an “investment cooperative for FIERCE women”) and in Trumpire, presumably a pejorative meaning something like ‘the world of the Trump family’. Both of these look blend-like in that the base provides an /m/.

In looking for more examples I also discovered a bunch of brand names in -spire, a libfix that appears to have been extracted from inspire. There is Artspire, an art festival, CitySpire, a New York City skyscraper which is more of a dome than a spire (n.), and the tech companies Fundspire, Jobspire, Pinspire, and WeSpire.

A linguistically more interesting example is -cuck. This originates in cuckold, an archaic pejorative referring to the husband of an adulterous woman. How did a (string) prefix become a suffix? Here’s my best guess. First, cuckold obtained a new and more transgressive sense as the name for a genre of pornography in which a (usually white) man is forced to watch as a straight man (usually non-white) has sex with his (usually white) wife or girlfriend. This new racist sense led to the blend cuckservative, a pejorative for white conservative Western politicians perceived to have betrayed their race (and perhaps also their donor base). While we might expect this to lead to a prefixal reanalysis (and a new libfix *cuck-), what seems to have happened first is that cuck was made into a free stem. In informal usage, to cuck (v.) is to embarrass, or more specifically emasculate, someone, and a cuck (n.) is someone perceived to be acting against their interests or the interests of their in-group; a class-, race-, or gender-traitor (though a conservative belief system is not necessarily presupposed). It didn’t take long before conservative politicians started using that one on each other. Later, with the fossilization of the incel narrative, we find the suffixal form -cuck in words like wagecuck ‘wage-slave’ (“whadda schnook!”, I guess), Eurocuck, normcuck, or studycuck, all pejorative (though not necessarily racist).

-cel goes libfix

[CW: distasteful ideologies, misogyny, fat-shaming.]

It’s a familiar story, one we all know:

our protagonist, a young white man, can’t find a sexual partner because of feminism, his weak chin, his poor muscle tone…

Oh no, not that story: that’s misogynistic, objectifying nonsense. But that narrative, regressive as it is, has given us something novel: a new libfix.

The story begins with two closely related coinages. The first, according to Wikipedia, is the creation of a semi-anonymous Canadian college student who started a blog to discuss her sexual inactivity. Its title: “Alana’s Involuntary Celibacy Project”. Involuntary celibacy, in the community that arose around the blog, was first shortened to invcel, then incel. (The author, as it happens, ultimately realized she was queer and abandoned the community she’d created.)

In the years since, a community of men has gathered on Reddit (specifically, the subreddit “r/incels”), blaming women for their celibacy, and in some cases advocating sexual violence to recoup their imagined losses. They call themselves incels (n.).

Not all the celibate are aggrievedly so; some have chosen their lot voluntarily, and they, in the jargon of the incel community, are termed volcel (n.). It is not immediately clear that this is a widely used term of self-identification (though it has its own subreddit, too), and it doesn’t seem to satisfy a lexical need that wasn’t already being served by more precise, in-community terms like asexual or aromantic. But it does pair nicely with incel, and it’s fun to apply this plucky neologism to the private lives of historical asexuals like Virgil, James Buchanan, or H. P. Lovecraft.

So far, what we’ve seen looks like a standard type of word formation: clipping (i.e., truncation) of both parts of a compound expression, which are then joined together to form a single word. In this case, the first syllable [1] of both words is preserved. This is not particularly novel: consider Amex (< American Express) or op-ed (< opinion editorial).

But as is often the case, the clipping in incel and volcel appears to have spawned a libfix, an affix-like formative extracted from the compound. Witness the recently coined heightcel, an involuntarily celibate short person, presumably one whose involuntary celibacy can be attributed to their diminutive stature. Here, -cel attaches not to a clipping like in- or vol-, but to a free stem, the noun height. Libfixation, at least as it should be defined, has begun.

There are many more. (I’m not linking to any “manosphere” sources.) A marcel is a married incel; a baldcel is a bald(ing) incel; a currycel is an incel of South Asian descent; a ricecel is an incel of East or Southeast Asian descent; a gingercel is a red-headed incel; and so on. There’s (ugh) fatcel, though there’s debate (in the incel community, at least) about whether that’s more incel or volcel. And there’s even ironycel, someone (non-celibate, I suppose) who mocks incels.

Some of these -cel types foreground features that seem totally orthogonal to the sexual marketplace, suggesting some sort of gallows humor for outsiders, and for the mods: are we really to believe that some young man, somewhere, thinks he’d have a shot with Stacy if his wrists were just a bit thicker? Yet they keep coming.

[1] In volcel, it’s technically the first syllable plus the onset of the following unstressed syllable: [vɑl] < [vɑ.lənˌtɛ.ɹi].

[Some of my prior coverage of libfixation: Defining libfixes; Your libfix and blend report for May 2016; Your libfix and blend report for February 2018.]

[Thanks to Twitter folks for some minor corrections.]

Sweet potato salad

Nerds love posting weird recipes; here’s mine.

Ingredients

Two sweet potatoes, skinned and cubed
1/2 cup balsamic vinegar
1/2 cup extra virgin olive oil
1 cup dry farro
A bag of baby kale
(Optional) a handful of fresh blueberries or strawberries
(Optional) chunks of chevre
Salt & pepper to taste

Preparation

Preheat oven to 425 degrees F. Wrap the cubed sweet potatoes loosely in foil and dress lightly with salt, pepper, and a dash of olive oil. Roast for roughly 30 minutes (turning them at least once), until golden brown, and refrigerate.

Cook the farro in a rice cooker according to directions, and refrigerate.

Reduce the balsamic vinegar in a sauce pan over low heat, and set aside.

Wash the baby kale and combine with sweet potatoes, farro, and (optionally) fruit or chevre. Dress with equal parts balsamic vinegar and olive oil, and add salt and pepper to taste.