Dutch names in LaTeX

One thing I recently figured out is a sensible way to handle Dutch names (i.e., those that begin with den, van, or similar particles). Traditionally, these particles are part of the cited name in author-date citations (e.g., den Dikken 2003, van Oostendorp 2009) but are ignored when alphabetizing (thus, van Oostendorp is alphabetized between Orgun & Sprouse and Otheguy, not between Vago and Vaux). This is not something handled automatically by tools like LaTeX and BibTeX, but it is relatively easy to annotate name particles so that they do the right thing.

First, place the following at the top of your BibTeX file:

@preamble{{\providecommand{\noopsort}[1]{}}}

Then, in the individual BibTeX entries, wrap the particle-bearing surname in this command, like so:

 author = {{\noopsort{Dikken}{den Dikken}}, Marcel},

This preserves the correct in-text author-date citations, but also gives the intended alphabetization in the bibliography.
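
Putting the two pieces together, a complete entry might look something like the following (the entry key is arbitrary, and the title and publisher are elided here):

@book{denDikken2003,
  author    = {{\noopsort{Dikken}{den Dikken}}, Marcel},
  year      = {2003},
  title     = {...},
  publisher = {...}
}

Roughly speaking, BibTeX's sort key sees Dikken first, so the entry alphabetizes under D, while in the typeset bibliography and citations \noopsort throws away its argument and the reader just sees den Dikken, Marcel.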

Note of course that not all people with van (etc.) names in the Anglosphere treat the van as if it were a particle to be ignored; a few deliberately alphabetize their last name as if it begins with v.

X moment

A Reddit moment is an expression used to refer to a certain type of cringe ‘cringeworthy behavior or content’ judged characteristic of Redditors, habitual users of the forum website reddit.com. It seems hard to pin down what makes cringe Redditor-like, but discussion on Urban Dictionary suggests that one salient feature is a belief in one’s superiority, or the superiority of Redditors in general; a related feature is irl behavior that takes Reddit too seriously. The normal usage is as an interjection of sorts; presented with cringeworthy internet content (a screenshot or URL), one might simply respond “Reddit moment”.

However, Reddit isn’t the only community that can have a similar type of pejorative X moment. One can find many instances of crackhead moment, describing unpredictable or spazzy behavior. A more complicated example comes from a friend, who shared a link about a software developer who deliberately sabotaged a widely used JavaScript software library to protest the Russian invasion of Ukraine. JavaScript, and the Node.js community in particular, has been extremely vulnerable to both deliberate sabotage and accidental bricking ‘irreversible destruction of technology’, and naturally my friend sent the link with the commentary “js moment”. The one thing that seems to unite all X moment snowclones is a shared negative evaluation of the community in the common ground.

Country (dead)naming

Current events reminded me of an ongoing Discourse about how we ought to refer to the country Ukraine in English. William Taylor, US ambassador to the country under George W. Bush, is quoted on the subject in this Time magazine piece (“Ukraine, Not the Ukraine: The Significance of Three Little Letters”, March 5th, 2014; emphasis mine), which is circulating again today:

The Ukraine is the way the Russians referred to that part of the country during Soviet times … Now that it is a country, a nation, and a recognized state, it is just Ukraine.

Apparently they don’t fact-check claims like this, because this is utter nonsense. Russian doesn’t have definite articles, i.e., words like the. There is simply no straightforward way to express the contrast between the Ukraine and Ukraine in Russian (or in Ukrainian for that matter).

Now, it’s true that the before Ukraine has long been proscribed in English, but this seems to be more a matter of style—the the variant sounds archaic to my ear—than ideology. And, in Russian, there is variation between в Украине and на Украине, both of which I would translate as ‘in Ukraine’. My understanding is that both have been attested for centuries, but one (на) was more widely used during the Soviet era and thus the other (в) is thought to emphasize the country’s sovereignty in the modern era. As I understand it, that one preposition is indexical of Ukrainian nationalist sentiment and another is indexical of Russian revanchist-nationalist sentiment is more or less linguistically arbitrary in the Saussurean sense. Or, more weakly, the connotative differences between the two prepositions are subtle and don’t map cleanly onto the relevant ideologies. But I am not a native (or even competent) speaker of Russian so you should not take my word for it.

In the Time article, Taylor goes on to argue that US media should use the Ukrainian-style transliteration Kyiv instead of the Russian-style transliteration Kiev. This is a more interesting prescription, at least in that the linguistic claim—that Kyiv is the standard Ukrainian transliteration and Kiev is the standard Russian transliteration—is certainly true. However, it probably should be noted that dozens of other cities and countries in non-Anglophone Europe are known by their English exonyms, and no one seems to be demanding that Americans start referring to Wien [viːn] ‘Vienna’ or Moskva ‘Moscow’. In other words, Taylor’s prescription is a political exercise rather than a matter of grammatical correctness. (One can’t help but notice that Taylor is a retired neoconservative diplomat pleading for “political correctness”.)

On expanding acronyms

Student writers are often taught that acronyms should be given in expanded form on first use. While this is a good rule of thumb in my opinion, there is an exception for any acronym whose expansion the author believes to be misleading about its referent, particularly when the acronym in question seems to have been coined after the fact and purely for the creator’s amusement.

“Many such cases.”

An author-date citation may be preferable to spelling out the silly acronym.

Is linguistics “unusually vituperative”?

The picture of linguistics one can get from books like The Linguistics Wars (Harris 1993) and press coverage of l’affaire du Pirahã suggests it is a quite nasty sort of field, full of hate and invective. Is linguistics really, as an engineer colleague would have it, “unusually vituperative”?

In my opinion it is not, for I object to the modifier unusually. Indeed, while such stories rarely make the nightly news, the sciences have never been without a hefty dose of vituperation. For instance, anthropologist Napoleon Chagnon was accused, slanderously and at book length, of causing a measles epidemic among indigenous peoples of the Amazon. And entomologist E. O. Wilson had a pitcher of water poured on his head at a lecture because, according to a lone audience member, his research on ants implied support for eugenics. And even gentleman Darwin was not above keeping an ill-tempered bulldog.

References

Harris, R. A. 1993. The Linguistics Wars: Chomsky, Lakoff, and the Battle over Deep Structure. Oxford University Press. [I don’t recommend this book: Harris, instead of explaining the issues at stake, focuses on “horse race” coverage, quoting extensively from interviews with America’s grumpiest octogenarians.]

On being scooped

Some of my colleagues have over the years expressed concern that their ongoing projects are in danger of being “scooped”, and that, as a result, they need to work rapidly to disseminate the projects in question. This concern is particularly prominent in the fast-moving (and unusually cargo-cultish) natural language processing community, though I have occasionally heard similar concerns in the company of theoretical linguists. Assuming this is not merely hysteria caused by material conditions like casualization and pandemic-related isolation, there is a simple solution: work on something else, something you yourself deem to be less obvious. If you’re in danger of being scooped, it suggests that you’re taking obvious next steps—that you’re engaging in what Kuhn calls normal science—and that you lack a competitive advantage (such as rare-on-the-ground expertise, special knowledge, or proprietary or unreleased data) that would help you in particular advance the state of knowledge. If you find yourself in this predicament, you should consider allowing somebody else to carry the football across the goal line. Or don’t, but then you might just get scooped after all.

How to write linguistic examples

There is a standard, well-designed way in which linguists write examples, and failure to use it in a paper about language is a strong shibboleth suggesting unfamiliarity with linguistics as a field. In brief, it is as follows:

  • When an example (affix, word, phrase, or sentence) appears in the body (i.e., the middle of a sentence):
    • if written in the Roman script, it should be italicized.
    • if written in a non-Roman but alphabetic script such as Cyrillic, italicization is optional. (Cyrillic italics are, like the Russian cursive hand they’re based on, famously hard for Western amateurs like myself to read.)
    • if written in a non-alphabetic script, it can just be written as is, though you’re welcome to experiment.
    • Examples should never be underlined, bolded, or placed in single or double quotes, regardless of the script used.
  • When an example is set off from the body (i.e., as a numbered example or in a table), it need not be italicized.
  • Any non-English example should be immediately followed with a gloss.
    • A gloss should always be single-quoted.
    • Don’t intersperse words like “meaning”, as in “…kitab meaning ‘book’…”; just write “…kitab ‘book’…”.
  • If using morph-by-morph or word-by-word glossing, follow the Leipzig glossing conventions (see the sketch after this list).
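
For LaTeX users, here is a minimal sketch of these conventions using the gb4e package (expex and linguex are common alternatives); the Turkish sentence is just for illustration:

\documentclass{article}
\usepackage{gb4e}
\begin{document}

% In the body of the text: italics plus a single-quoted gloss.
The Turkish word \textit{kitap} `book' is an Arabic loan.

% Set off as a numbered example, with word-by-word glossing
% following the Leipzig conventions; no italics needed here.
\begin{exe}
  \ex
  \gll Hasan kitab-{\i} oku-du\\
       Hasan book-\textsc{acc} read-\textsc{pst}\\
  \glt `Hasan read the book.'
\end{exe}

\end{document}

Here \gll aligns the example and its gloss word by word, and \glt supplies the single-quoted free translation.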

How to write numbers

A lot of students—and increasingly, given how young the field of NLP is, more senior researchers too—don’t know how to write numbers in papers. Here are a few basic principles (some of these are loosely based on the APA guidelines):

  • Use the same number of decimals every time and don’t omit trailing zeros after the decimal. Thus “.50” or “.5000” and not “.5”.
  • Round to a small number of decimals: 2, 4, or 6 are all standard choices.
  • Omit leading zeros before the decimal if possible values of whatever quantity are always within [0, 1]; thus you might say you got “.9823” accuracy.
  • (For LaTeX users) put the minus sign in math mode, too, or it’ll appear as a hyphen (ASCII character 45), which is quite a bit shorter and just looks wrong (see the sketch after this list).
  • Use commas to separate the hundreds and thousands place (etc.) in large integers, and try not to use too many large exact integers; rounding is fine once they get large.
  • Expressions like “3k”, “1.3m” and “2b” are too informal; just write “3,000”, “1.3 million”, and “2 billion”.
  • Many evaluation metrics can either be written as (pseudo-)probabilities or percentages. Pick one or the other format and stick with it.
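
To make the minus-sign point concrete, here is a minimal LaTeX fragment (the quantity and its value are invented):

% Typed in text mode, the sign comes out as a hyphen, which is too short:
The mean difference was -0.50.

% In math mode, it comes out as a true minus sign:
The mean difference was $-0.50$.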

A few other points about tables with numbers (looking at you, LaTeX users):

  • Right-align numbers in tables.
  • Don’t put two numbers (like a mean and a standard deviation, or a range) in a single cell; the alignment will be all wrong. Just use more cells and tweak the intercolumnar spacing (see the sketch after this list).
  • Don’t make the text of your tables smaller than the body text, which makes the table hard to read. Just redesign the table instead.
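
Here is a minimal sketch of such a table, assuming the booktabs package; the model names and numbers are invented:

\usepackage{booktabs}  % in the preamble

\begin{table}
  \centering
  \begin{tabular}{lr@{\,$\pm$\,}r}
    \toprule
    Model    & \multicolumn{2}{c}{Accuracy (\%)} \\
    \midrule
    Baseline & 91.20 & 0.40 \\
    Proposed & 93.50 & 0.30 \\
    \bottomrule
  \end{tabular}
\end{table}

The means and standard deviations live in separate right-aligned columns; the ± is supplied by the inter-column material, so the numbers stay aligned without cramming two of them into one cell.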

Moneyball Linguistics

[This is just a fun thought experiment. Please don’t get mad.]

The other day I had an intrusive thought: the phrase moneyball linguistics. Of course, as soon as I had a moment to myself, I had to sit down and think about what this might denote. At first I imagined building out a linguistics program on a small budget, like Billy Beane and the Oakland A’s. But it seems to me that linguistics departments aren’t really much like baseball teams—they’re only vaguely competitive (occasionally for graduate students or junior faculty), there’s no imperative to balance the roster, there’s no DL (or is that just sabbatical?), and so on—and the metaphor sort of breaks down. But the ideas of Beane and co. do seem to have some relevance to talking about individual linguists and labs. I don’t have OBP or slugging percentage for linguists, and I wouldn’t dare to propose anything so crude, but I think we can talk about linguists and their research as a sort of “cost center” and identify two major types of “costs” for the working linguist:

  1. cash (money, dough, moolah, chedda, cheese, skrilla, C.R.E.A.M., green), and
  2. carbon (…dioxide emissions).

I think it is a perfectly fine scientific approximation (not unlike competence vs. performance) to treat the linguistic universe as having a fixed amount of cash and carbon, so that we could use this thinking to build out a roster (a department) and come in just under the pay cap. While state research budgets do fluctuate—and while our imaginings of a better world should also include more science funding—it is hard to imagine that near-term political change in the West would substantially increase it. And similarly, while there is roughly 10¹² kg of carbon in the earth’s crust, climate scientists agree that the vast majority of it really ought to stay there. Finally, I should note that maybe we shouldn’t treat these as independent factors, given that there is a non-trivial amount of linguistics funding via petrodollars. But anyways, without further ado, let’s talk about some types of researchers and how they score on the cash-and-carbon rubric.

  • Armchair research: The armchairist is clearly both low-cash (if you don’t count the sports coats) and low-carbon (if you don’t count the pipe smoke).
  • Field work: “The field” could be anywhere, even the reasonably affordable, accessible, and often charming Queens, but the archetypal fieldworker flies in, first on a jet, and then perhaps reaches their destination via helicopter or seaplane. Once you’re there, though, life in the field is often reasonably affordable, so this scores as low-cash, high-carbon.
  • Experimental psycholinguistics: Experimental psycholinguists have reasonably high capital/startup costs (in the form of eyetracking devices, for instance) and steady marginal costs for running subjects: the subjects themselves may come from the Psych 101 pool but somebody’s gotta be paid to consent them and run them through the task. We’ll call this medium-cash, low-carbon.
  • Neurolinguistics: The neurolinguistic imaging technique du jour, magnetoencephalography (or MEG), requires superconducting coils cooled to a chilly 4.2 K (roughly −452 °F); this in turn is accomplished with liquid helium. Not only is the cooling system expensive and power-hungry, but the helium is mostly wasted (i.e., vented to the atmosphere). Helium is the second-most common element in the universe, but we are quite literally running out of the stuff here on Earth. So, MEG, at least, is high-cash, high-carbon.
  • Computational linguistics: There was a time, not so long ago, when I would have said that computational linguists were a bunch of hacky-sackers filling up legal pads with Greek letters (the weirder the better) and typing some kind of line noise they call “Haskell” into ten-year-old Thinkpads. But nowadays, deep learning is the order of the day, and the substantial carbon impact of these methods is well-documented, or at least well-estimated (e.g., Strubell et al. 2019). Now, it probably should be noted that a lot of the worst offenders (BigCos and the Quebecois) locate their data centers near sources of plentiful hydroelectric power, but not all of us live within the efficient transmission zones for hydropower. And of course, graphics processing units are expensive too. So most computational linguistics is, increasingly, high-cash, high-carbon.

On a more serious note, just so you know, unless you run an MEG lab or are working on something called “GPT-G6”, chances are your biggest carbon contributions are the meat you eat, the cars you drive, and the short-haul jet flights you take, not other externalities of your research.

References

Strubell, E., Ganesh, A., and McCallum, A. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650.

Don’t take money from the John Templeton Foundation

Don’t take money from the John Templeton Foundation. They backed the murderous Chicago School economists and the genocidal architects of the war on Iraq, and they are among the largest contributors to the climate change denial movement. That’s all.