Should Noam Chomsky retire?

Somebody said he should. I don’t want to put them on blast. I don’t know who they are, really. Their bio says they’re faculty at a public university in the States, so they probably know how things go around here about as well as I do. Why should he retire? They suggested that were he to retire his position at the University of Arizona, it would open up a tenure line for “ECRs”.1

Let me begin by saying I do not have a particularly strong emotional connection to Noam. Like many linguists, my academic family tree has many roots at MIT, where Noam taught until quite recently. I have met him in person once or twice, and I found him polite and unassuming. This came as a surprise to me. The Times once wrote that Noam is “arguably the most important intellectual alive today”, and important people are mostly assholes.

But I do have very strong intellectual commitments to Noam’s ideas. I think that the first chapter of his Aspects of the Theory of Syntax (1965) is the best statement of the problem of language acquisition. I believe that those who have taken issue with the Aspects idealization of the “ideal speaker-listener” betray a profound ignorance of the role that idealizations play in the history of science.

I think The Sound Pattern of English (SPE), which Noam cowrote with Morris Halle, is the most important work in the theory of phonology and morphology. I believe that the critics who took issue with the “abstract” and “decompositional” nature of SPE have largely been proven wrong.

I even admire the so-called “minimalist program” for syntactic theory Noam has outlined since the 1990s.

It is impossible to deny Noam’s influence on linguistics and cognitive science. We who study language are all pro- or anti-Chomskyians, for better or worse. (And I have much more respect for the “true haters” than the reflexive anti-Chomskyians.) I don’t think Noam should apologize for his critiques of “usage-based” linguistics. I don’t think Noam can fairly be called an “arm-chair” theorist. I think generative grammar has made untold contributions to even areas like language documentation and sociolinguistics, which might seem to be excluded by a strict reading of Aspects.

And, I admire Noam’s outspoken critique of US imperialism. While Noam may have some critics from the left, his detractors (including many scientists of language!) are loud defenders of the West’s blood-soaked imperial adventures.

As a colleague said: “I like Noam Chomsky. I think his theories are interesting, and he seems like a decent guy.” He is a great example of what one can, and ought to, do with tenure.

None of this really matters, though. I do not think he “deserves” a job any more than any other academic does. So, could Noam clear up a “tenure line” simply by retiring? The answer is probably not. Please allow me an anecdote, one that will be familiar to many of you. I teach in a rather large and robust graduate linguistics program at a publicly funded college in one of the richest cities in the world (“at the end of history”). Two of our senior faculty are retiring this year, and as yet the administration has not approved our request to begin a search for a replacement for either of them. Declining to replace tenure lines after retirement is one of the primary mechanisms of casualization in the academy.

Even if you disagree with my assessment of Noam’s legacy, the availability of tenure is not directly conditioned on retirements (though perhaps it should be). Noam bears no moral burden for simply not retiring. If you’d like to fight back against the casualization of labor, take the fight to the administration (and to the state houses that set the budgets); don’t blame senior faculty for simply continuing to exist in the system.

PS: If you enjoyed this, you should read The Responsibility of Intellectuals.

1: I had to look up this acronym. It stands for “early-career researchers”, though I’m not quite sure when one’s “early career” starts or ends. I find that an unfortunate ambiguity.

Latin vowel-glide alternations

Post-war structuralist phonology greatly emphasized phonemics and largely ignored morphophonemics. But in 1959, Morris Halle’s Sound Pattern of Russian argued that the distinction between allophony and alternation has little cognitive importance, and in fact leads to an unnecessary duplication of effort. As a result of Halle’s forceful arguments, the contrast between phonemic and morphophonemic processes plays little role in modern phonological theory. I would like to go one step further and suggest that patterns of alternation are actually more principled facts than those of allophony. Simply put, a speaker must command the patterns of alternation in their language, but it is not at all clear whether they exploit allophony when constructing their lexical entries. This is highlighted most clearly by the notions of lexicon optimization, Stampean occultation, and richness of the base in Optimality Theory, though as Hale et al. (1998) note, similar points apply to rule-based theories.

In writing, the Romans did not distinguish the high monophthongs [i, u, iː, uː] from the corresponding glides [j, w]. This naturally led structuralist linguists (e.g., Hall 1946) to suggest that the glides are allophones of the high monophthongs. There are some apparent problems with this suggestion, though not all of them are fatal. One point that has largely been ignored in this discussion is that Classical Latin has at least four types of plausible alternations between high monophthongs and the corresponding glides. In this squib I review these alternations.

Deverbal -u- derivatives

There are a large number of adjectival derivatives formed from verbal stems by the addition of -u- and the appropriate agreement suffixes, e.g., masculine nominative singular (masc. nom.sg.) -u-us, feminine nom.sg. -u-a, neuter nom.sg. -u-um, and so on. These derivatives have a semantics similar to past participles (“having been Xed”) but in some cases have a secondary meaning “able to be Xed”. For example, the masc. nom.sg. form dīuiduus [diː.wi.du.us] means ‘divided’ (cf. dīuidō [diː.wi.doː] ‘I divide’) but also ‘divisible’. This is a fairly productive process, as the following examples show. (I have taken the liberty of leaving off certain further productive derivatives, such as intensified adjectives in per-.)

(1) assiduus ‘constant’, ambiguus ‘going hither and thither’, annuus ‘annual’, arduus ‘elevated’, cernuus ‘bowed forward’, circumfluus ‘flowing around’ (refluus ‘ebbing’), cōnspicuus ‘visible’, contiguus ‘neighboring’, continuus ‘continuous’, dīuiduus ‘divided; divisible’ (indīuiduus ‘undivided; indivisible’), exiguus ‘strict’, fatuus ‘foolish’, incaeduus ‘uncut’, ingenuus ‘indigenous’, irriguus ‘irrigated’, mēnstruus ‘monthly’, mortuus ‘dead’ (dēmortuus ‘departed’, intermortuus ‘decayed’, praemortuus ‘prematurely dead’), mūtuus ‘borrowed’ (prōmūtuus ‘paid in advance’), nocuus ‘harmful’ (innocuus ‘harmless’), occiduus ‘westerly’, pāscuus ‘for pasturing’, perpetuus ‘perpetual’, perspicuus ‘transparent’, praecipuus ‘particular’, prōmiscuus ‘indiscriminate’, residuus ‘remaining’, riguus ‘irrigated’, strēnuus ‘brisk’, succiduus ‘sinking’, superuacuus ‘superfluous’, uacuus ‘empty’, uiduus ‘destitute’

In all the above cases …uus is read [u.us]. However, when the stem ends in a liquid [l, r], …uus is read [wus], indicating that the adjectival suffix is realized as [w].

(2)
a. caluus ‘bald’, fuluus ‘reddish-yellow, tawny’, giluus ‘pale yellow’, heluus ‘honey yellow’
b. aruus ‘arable’, curuus ‘bent’ (incuruus ‘bent’), furuus ‘dark, swarthy’, paruus ‘small’, prōteruus ‘violent’, toruus ‘savage’

It is interesting to note that the contexts where -u- is realized as [w] align with a well-known allophonic generalization (Devine & Stephens 1977: 61f., 134f.): a u preceded by a (tautomorphemic) coda liquid or front glide, and followed by a vowel, is realized as [w], as in silua [sil.wa] ‘forest’ or ceruus [ker.wus] ‘deer’, but is realized as a vowel when the preceding consonant is either a nasal, an obstruent, or part of a consonant cluster, as in lituus [li.tu.us] ‘trumpet’ or patruus [pa.tru.us] ‘paternal uncle’.

There are two residual issues. First, when the verbal stem ends in qu [kw], the adjectival derivative is spelled …quus. By the normal rules of spelling this would be read as [kwus], which suggests that a zero allomorph of the adjectival suffix is selected here.

(3) aequus ‘equal’, antīquus ‘old’, fallāciloquus ‘falsely speaking’ (fātiloquus ‘prophetic’, flexiloquus ‘ambiguous’, grandiloquus ‘grandiloquent’, magniloquus ‘boastful’, uāniloquus ‘lying’, uersūtiloquus ‘slyly speaking’), inīquus ‘unjust’, longinquus ‘distant’, oblīquus ‘slanting, oblique’, pedisequus ‘following on foot’, propinquus ‘near’, reliquus ‘remaining’

This is consistent with the metrical evidence. For instance, in the following verse, aequus must be read as bisyllabic.

(4)
hoc opus, hic labor est: paucī quōs aequus amāuit (Verg., Aen. 6.129)
[ok.ko.pu|sik.la.bo|rest.paw|kiː.kwoː|saj.kwu.sa|maː.wit]

Second, there are a number of deverbal derivatives in -u-us where the verb form also has a stem-final [w]. In these cases we also observe [wus].

(5)
a. cauus [ka.wus] ‘hollowed; hollow’ (concauus ‘hollow’); cf. cauō [ka.woː] ‘I excavate’
b. flāuus [flaː.wus] ‘yellow, gold, blonde’ (sufflāuus ‘yellowish’); cf. flāueō [flaː.we.oː] ‘I am yellow’
c. (g)nāuus [naː.wus] ‘active’ (īgnāuus ‘lazy’); cf. nāuō [naː.woː] ‘I do s.t. enthusiastically’
d. nouus [no.wus] ‘new’; cf. nouō [no.woː] ‘I renew’
e. saluus [sal.wus] ‘safe; well’; cf. salueō [sal.we.oː] ‘I am well’
f. uīuus [wiː.wus] ‘living’ (rediuīuus ‘restored to life’); cf. uīuō [wiː.woː] ‘I live’

This may be another context where the adjectival suffix has a zero allomorph, though it is not clear whether we are looking at the same derivational process as above.

The foregoing discussion leads me to posit a deverbal adjective-forming suffix /-u-/ with two phonologically predictable allomorphs: [w] after liquids, and zero after [kw] and, possibly, [w].
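
For concreteness, here is a minimal computational sketch of this allomorphy. The broad-transcription string representation and the function name are my own inventions, purely illustrative, and the sketch is no substitute for a real analysis.

def adjectival_u(stem):
    """Appends the deverbal adjectival suffix /-u-/ to a verbal stem,
    given as a broad transcription string (e.g., "kal" for cal-)."""
    if stem.endswith(("kw", "w")):
        # Zero allomorph after [kw] and (possibly) [w].
        return stem
    if stem.endswith(("l", "r")):
        # Glide allomorph after a liquid.
        return stem + "w"
    # The vocalic default elsewhere.
    return stem + "u"

# Masc. nom.sg. forms add the agreement suffix -us.
assert adjectival_u("mort") + "us" == "mortuus"  # mortuus [mor.tu.us]
assert adjectival_u("kal") + "us" == "kalwus"    # caluus [kal.wus]
assert adjectival_u("ajkw") + "us" == "ajkwus"   # aequus [aj.kwus]
assert adjectival_u("kaw") + "us" == "kawus"     # cauus [ka.wus]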

The “third stem”

Schoolchildren learning Latin memorize four forms (or principal parts) of each verb: the first person singular (1sg.) present active indicative (e.g., amō ‘I love’), the present infinitive (amāre ‘to love’), the 1sg. perfect active (amāuī ‘I loved’), and the perfect passive participle (amātus masc. nom.sg. ‘loved’). The first two principal parts effectively index the so-called “present stem” of the verb, and the third principal part gives the so-called “perfect stem”. The relationship between the present and perfect stems is often unpredictable. Some perfect stems lengthen a monophthong in the final syllable of the present stem (e.g., legō/lēgī ‘I choose/chose’); some omit a post-vocalic nasal in the final syllable, with concomitant lengthening (uincō/uīcī ‘I win/won’); some are mutated by the addition of a -s- perfect suffix (dīcō/dīxī [diː.koː, diːk.siː] ‘I say/said’); others bear a CV-reduplication prefix, and so on. This has led some to suggest that the latter two stems are essentially “listed” or “stored” for all verbs. This is, for instance, the position of Lieber (1980: 141f., 152f.), but it has been disputed by Aronoff (1994: chap. 2) and Steriade (2012), among others, who claim there are many productive regularities in both cases.

The majority of verbs have perfects that consist of the bare verb root, the theme vowel, a high back vocoid perfect suffix, and the appropriate person-number agreement suffixes (e.g., 1sg. -ī-). The perfect suffix is preceded by the theme vowel, and since the agreement suffixes are all vowel-initial, it is always intervocalic. Allophonically, this is a context where [u] is never found but [w] is, and this is what we find here: amāuī [a.maː.wiː] ‘I loved’. This type of perfect is found in all conjugations, and in the overwhelming majority of 1st (-ā- theme vowel) and 4th (-ī-) conjugation verbs (Aronoff 1994: 43f.).

(6)
a. cōnsōlāuī [koːn.soː.laː.wiː] ‘I consoled’, portāuī [por.taː.wiː] ‘I carried’
b. dēlēuī [deː.leː.wiː] ‘I destroyed’, plēuī [pleː.wiː] ‘I filled up’
c. cupīuī [ku.piː.wiː] ‘I desired’, petīuī [pe.tiː.wiː] ‘I sought’
d. audīuī [aw.diː.wiː] ‘I listened to’, mūnīuī [muː.niː.wiː] ‘I fortified’

However, there is an alternative formation in which the theme vowel is omitted, placing the perfect suffix immediately to the right of a consonant, and in this context it is instead realized as [u]. This type of perfect is also found in all conjugations but is most common in the 2nd (-ē-) conjugation.

(7)
a. domuī [do.mu.iː] ‘I tamed’, uetuī [we.tu.iː] ‘I forbade’
b. docuī [do.ku.iː] ‘I taught’, tenuī [te.nu.iː] ‘I held’
c. rapuī [ra.pu.iː] ‘I snatched’, texuī [tek.su.iː] ‘I wove’
d. aperuī [a.pe.ru.iː] ‘I opened’, saluī [sa.lu.iː] ‘I leapt’

Together the patterns in (6-7) account for the vast majority of perfects in all conjugations except the 3rd (itself a grab-bag of etymologically dissimilar verbs).

I propose that the default perfect suffix is /-u-/ and that it undergoes glide formation to [w] in (6), in intervocalic position, a generalization consistent with the allophonic facts. In (7), when adjacent to the verb root, glide formation is blocked. However, the examples in (7) cannot take a “free ride” on any allophonic generalization. As can be seen in (7d), the perfect suffix does not form [l.w, r.w] syllable contact clusters, unlike the adjectival suffix in (5). There is a surfeit of possible analyses for the failure of glide formation in this context: it might be an effect specific to the perfect suffix or to the category of verb, or the result of cyclicity or phase-based spellout. We leave the question open for now.
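
For the intervocalic case, at least, the rule is easy to state mechanically. The following toy sketch (mine; it deliberately ignores stress, syllabification, and everything else not at issue) applies glide formation only between vowels, deriving both types of perfect above.

import re

VOWELS = "aeiou"  # vowel length is marked with ː

def perfect_1sg(root, theme=""):
    """Builds root + theme + perfect /u/ + 1sg. /iː/, then applies
    glide formation: /u/ -> [w] / V __ V."""
    form = root + theme + "u" + "iː"
    return re.sub(f"([{VOWELS}]ː?)u([{VOWELS}])", r"\1w\2", form)

assert perfect_1sg("am", "aː") == "amaːwiː"  # amāuī 'I loved'
assert perfect_1sg("dom") == "domuiː"        # domuī 'I tamed': no glide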

The “fourth stem”

The form of the perfect passive participle, the fourth principal part, is similarly problematic. For many verbs, the perfect passive participle is formed by adding to the verb root a -t- suffix and the appropriate agreement suffixes (e.g., in citation form, the masc. nom.sg. -us), once again sometimes accompanied by lengthening of the stem-final vowel and/or leftward voice assimilation (an exceptionless rule of Latin) triggered by the -t-, as in (8b).

(8)
a. docuī [do.ku.iː] ‘I taught’, doctus [dok.tus] masc. nom.sg. ‘taught’
b. tegō [te.goː] ‘I clothe’, tēctus [teːk.tus] masc. nom.sg. ‘clothed’

Two verb roots end in a consonant followed by a high back vocoid and form a -t- perfect passive participle: soluō [sol.woː] ‘I loosen; I explain’ and uoluō [wol.woː] ‘I roll’. This places the root-final high back vocoid, by hypothesis /u/, between two consonants, a context where glides are forbidden. The result is solūtus [so.luː.tus] and uolūtus [wo.luː.tus]. However, it should be noted that this particular pattern is limited to these two verbs and their derivatives, and that the long ū is unexpected unless it reflects stem vowel lengthening (cf. tēctus above).

Synizesis and diaeresis

Latin poetry exhibits variation in glide formation. (The following examples are all drawn from Lehmann 2005.) Synizesis, the unexpected overapplication of glide formation in response to the meter, can be seen in the following verse.

(9)
tenuis ubī argilla et dūmōsīs calculus aruīs (Verg., G. 2.180)
[ten.wi.su|biːr.gil|let.duː|moː.siːs|kal.ku.lu|sar.wiːs]

In this verse, tenuis ‘thin’ occurs line-initially, which requires that its first syllable be heavy. The only way to accomplish this is to read it as bisyllabic [ten.wis] rather than the expected trisyllabic [te.nu.is]. Similarly, in another verse (Verg., Aen. 8.599), abiēte, the ablative singular of abiēs ‘silver fir’, must be read as trisyllabic [ab.jeː.te] rather than the expected quadrisyllabic [a.bi.eː.te].

On the other hand, the poets also make use of diaeresis, or apparent underapplication of glide formation. For example, siluae, the genitive singular of silua ‘forest’, is in one verse (Hor., Carm. 1.23.4) read as trisyllabic [si.lu.aj] rather than as the expected bisyllabic [sil.waj]. The conditions governing synizesis and diaeresis are not yet well understood, but they constitute further evidence for the close grammatical relationship between [i ~ j] and [u ~ w] in Classical Latin.

Conclusion

We have seen four ways in which the Latin high vocoids alternate between vowels and glides. Together, these four patterns provide indirect evidence for the hypothesis that Latin glides are allophones of the corresponding high vowels, though there are some minor dissociations between the patterns of allophony and alternation.

[Earlier writing about Latin glides: Latin glides and the case of “belua”]

References

Aronoff, Mark. 1994. Morphology by itself: stems and inflectional classes. Cambridge: MIT Press.
Devine, Andrew M., and Stephens, Laurence D. 1977. Two studies in Latin phonology. Saratoga: Anma Libri.
Hall, Robert A. 1946. Classical Latin noun inflection. Classical Philology 41(2): 84-90.
Hale, Mark, Kissock, Madelyn, and Reiss, Charles. 1998. Output-output correspondence in Optimality Theory. In Proceedings of WCCFL, pages 223-236.
Halle, Morris. 1959. The sound pattern of Russian. The Hague: Mouton.
Lehmann, Christian. 2005. La structure de la syllabe latine. In Touratier, Christian (ed.), Essais de phonologie latine, pages 157-206. Aix-en-Provence: Publications de l’Université de Provence.
Lieber, Rochelle. 1980. On the organization of the lexicon. Doctoral dissertation, MIT.
Steriade, Donca. 2012. The cycle without containment: Latin perfect stems. Ms., MIT.

Latin glides and the case of “belua”

Latin texts leave the distinction between the high monophthongs [i, u, iː, uː] and glides [j, w] unspecified. This has led some to suggest that the glides are allophones of the monophthongs. For instance, Steriade (1984) implies that the syllabicity of [+high, +vocalic] segments in Latin is largely predictable. Steriade points out two contexts where high vocoids are (almost) always glides: initially before a vowel (# __ V) and intervocalically (V __ V). In these two contexts, the only complications I am aware of arise from competition between generalizations. For instance, in ūua [uː.wa] ‘grape’ and ūuidus [uː.wi.dus] ‘damp’, intervocalic glide formation appears to bleed word-initial glide formation. (Or it could be the case that ū is ineligible for glide formation by virtue of its length.) And the behavior of two adjacent high vocoids flanked by vowels is somewhat idiosyncratic: compare naevus [naj.wus] ‘birthmark’ and saeuiō [saj.wi.oː] ‘I am furious’, where (by hypothesis) /ViuV/ surfaces as [j.w], to dēuius [deː.wi.us] ‘devious’ and pauiō [pa.wi.oː] ‘I beat’, where (by hypothesis) /VuiV/ surfaces as [.wi] but never as *[w.j]. And so on.

However, Cser (2012) claims that syllabicity of high vocoids is not at all predictable after a consonant and before a vowel, i.e., in the context C __ V. Here we usually observe [w] when the preceding consonant is coda [j, l, r], as in the aforementioned naevus or silua [sil.wa] ‘forest’. Cser contrasts this latter form with belua ‘wild beast’, which is trisyllabic rather than bisyllabic. However, it is not clear this is a good near-minimal pair. The word was clearly not pronounced as [be.lu.a] because the first syllable scans heavy. In the following hexameter verse, the word comprises the fifth foot, a dactyl:

et centumgeminus Briareus, ac belua Lernae (Verg., Aen. 6.287)

Lewis & Short and the Oxford Latin Dictionary both give this word as bēlua [beː.lu.a]. However, it seems much more likely that the word is in fact bellua [bel.lu.a], as it was sometimes written. (Note also that tautomorphemic geminate ll is robustly attested in Latin.) In this case we would expect glide formation to be blocked because the [lw] complex onset is totally unattested, just as Cser predicts from general principles of sonority sequencing. Thus the above verse is:

[et.ken|tũː.ge.mi|nus.bri.a|re.u.sak|bel.lu.a|ler.naj]
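
For those who want to check such claims mechanically, here is a crude syllable-weight helper. It is my own simplification, only as reliable as the dot-delimited transcriptions used in this post, and the diphthong list is approximate.

VOWELS = "aeiouãũ"
DIPHTHONGS = ("aj", "aw", "oj", "ej", "ew")  # an approximate inventory

def heavy(syllable):
    """True iff a dot-delimited syllable (transcription string) is heavy."""
    if "ː" in syllable:
        return True  # long vowel
    if any(d in syllable for d in DIPHTHONGS):
        return True  # diphthong
    return syllable[-1] not in VOWELS  # closed syllable

assert heavy("bel")     # so bellua can begin the dactyl [bel.lu.a]
assert not heavy("be")  # whereas *[be.lu.a] would begin with a light syllable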

As Cser notes, many of the remaining near-minimal pairs occur at morphological boundaries (and thus look, to someone with my theoretical commitments, like evidence for the phonological cycle) or relate to the complex onsets qu [kw] and su [sw], which might be treated as contour segments underlyingly. But much work will be needed to show that these apparent exceptions follow from the grammar of Latin.

References

Cser, András. 2012. The role of sonority in the phonology of Latin. In Parker, Steve (ed.), The sonority controversy, pages 39-64. Berlin: Mouton de Gruyter.
Steriade, Donca. 1984. Glides and vowels in Romanian. In Proceedings of the Berkeley Linguistics Society, pages 47-64.

Exceptions to reduplication in Kinande

Mutaka & Hyman’s (1990) study of reduplication in Kinande, a Bantu language spoken in “Eastern Zaire” (now the Democratic Republic of the Congo), is the sort of phonology study one doesn’t see much of anymore. The authors begin by noting the recent interest in reduplication phenomena, observing that most of the major work has completely ignored Bantu, an enormous language family in which nearly every language has one or more types of reduplication. Mutaka & Hyman (MH) proceed to describe Kinande reduplication in detail, with only occasional reference to other languages.

Nouns that undergo reduplication have the semantics of roughly ‘the real X’. Most Kinande verbs also undergo reduplication, with the semantics of roughly ‘to hurriedly X’ or ‘to repetitively X’. Verbal reduplication is somewhat more interesting because certain other verbal suffixes (or “extensions”, as they’re sometimes called in Bantu linguistics) may also be found in the reduplicant, which is argued to be a roughly bisyllabic prefix. For instance, the passive suffix is argued to be underlyingly /u/ but surfaces as [w], and is copied over in reduplication. Thus for the verb hum ‘beat’ the passive e-ri-hum-w-a ‘to be beaten’ reduplicates as erihumwahumwa. However, larger vowel-consonant verbal suffixes are not copied: the applied (-ir-) passive infinitive e-ri-hum-ir-w-a ‘to be beaten for’ has the reduplicated form erihumahumirwa, and for the verb tum ‘send’ the applied passive reciprocal (-an-) infinitive e-rí-tum-ir-an-w-a ‘to be sent to each other’ has the reduplicated form erítumatumiranwa (MH, 56).

What’s even more interesting to me is the behavior of verb stems with what MH call ‘unproductive’ extensions (all of which appear to be vowel-consonant). MH report that for only a small minority of these verb stems is there any plausible etymological relationship to a verb without the extension. One example is luh-uk-a ‘take a rest’, which is plausibly related to luh-a ‘be tired’ (MH, 73e), but there is no *bát-a paired with bát-uk-a ‘move’ (MH, 74d). Verb stems bearing unproductive extensions may show one of three behaviors with respect to reduplication. For some such stems, reduplication is simply forbidden: eríbugula ‘to find’. For others, reduplication occurs but the unproductive extension is stranded (the same behavior as the productive extensions): e-rí-banguk-a ‘to jump about’ reduplicates as eríbangabanguka. Finally, some such stems (roughly half) unexpectedly build a trisyllabic (rather than bisyllabic) reduplicant consisting of the verb root and the unproductive extension: e-ri-hurut-a ‘to snore’ reduplicates as erihurutahuruta (MH, 75). This distribution poses a fascinating puzzle. How is the failure of reduplication encoded in the first case? What licenses the trisyllabic reduplicant in the last case?
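
To make the regular pattern concrete, here is a first-pass sketch of the bisyllabic reduplicant. The syllabified input representation is my own simplification, and the function deliberately ignores the lexically governed behaviors just described; that such stems escape this generalization is precisely the puzzle.

def reduplicate(stem_syllables):
    """Builds a bisyllabic reduplicant from the first two syllables of the
    stem (a list of syllable strings), forcing it to end in -a, and then
    prefixes it to the full stem."""
    reduplicant = "".join(stem_syllables[:2])
    reduplicant = reduplicant[:-1] + "a"
    return reduplicant + "".join(stem_syllables)

# The passive glide -w- fits inside the bisyllabic reduplicant ...
assert "eri" + reduplicate(["hu", "mwa"]) == "erihumwahumwa"
# ... but the vowel-consonant applied suffix -ir- is stranded outside it.
assert "eri" + reduplicate(["hu", "mi", "rwa"]) == "erihumahumirwa"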

References

Mutaka, Ngessimo, and Hyman, Larry M. 1990. Syllables and morpheme integrity in Kinande reduplication. Phonology 7: 73-119.

Libfix report for June 2019

You may be familiar with fatberg, a mass of non-biodegradable solids and fats found in sewers, which suggests -berg has been innovated (presumably via iceberg). And now London is also haunted by a concreteberg.

Late great tech unicorn Theranos made use of a proprietary blood-collection device they called the nanotainer (via container), and I recently found out about vacutainer and a security software package called Cryptainer. So -tainer has been liberated.

The other day in Queens I saw a sign for a Mathnasium, presumably extracted from gymnasium, and the Corpus of Contemporary American English also has a token of jamnasium (a space for jam seshes), suggesting a nascent -nasium.

In a recent, widely-derided ad campaign, Applebee’s coined sizzletonin on analogy with the neurotransmitter serotonin and the hormone melatonin, but as far as I know that’s the end of the line for -tonin.

Using a fixed training-development-test split in sklearn

The scikit-learn machine learning library has good support for various forms of model selection and hyperparameter tuning. For setting regularization hyperparameters, there are model-specific cross-validation tools, and there are also generic tools for exhaustive grid search (sklearn.model_selection.GridSearchCV) and random search in the sense of Bergstra & Bengio (2012) (sklearn.model_selection.RandomizedSearchCV). While you probably could implement these yourself, the sklearn developers have enabled just about every feature you could want, including multiprocessing support.

One apparent limitation of these classes is that, as their names suggest, they are designed for use in a cross-validation setting. In speech & language technology, however, standard practice is to use a fixed partition of the data into training, development (i.e., validation), and test (i.e., evaluation) sets, and to select hyperparameters which maximize performance on the development set. This is in part an artifact of the limited computing resources of the Penn Treebank era, and I’ve long suspected it has serious repercussions for model evaluation. But tuning and evaluating with a standard split is faster than cross-validation and can make exact replication much easier. And there are also some concerns about whether cross-validation is the best way to set hyperparameters anyways. So what can we do?

The GridSearchCV and RandomizedSearchCV classes take an optional cv keyword argument, which can be, among other things, an object implementing the cross-validation iterator interface. At first I thought I would create an object which allowed me to use a fixed development set for hyperparameter tuning, but then I realized that I could do this with one of the existing iterator classes, namely sklearn.model_selection.PredefinedSplit. The constructor for this class takes a single argument test_fold, an array of integers of the same size as the data passed to the fitting method. As the documentation explains, “…when using a validation set, set the test_fold to 0 for all samples that are part of the validation set, and to -1 for all other samples.” That we can do. Suppose that we have training data x_train and y_train and development data x_dev and y_dev laid out as NumPy arrays. We then create a training-and-development set like so:

import numpy

x = numpy.concatenate([x_train, x_dev])
y = numpy.concatenate([y_train, y_dev])

Then, we create the iterator object:

import sklearn.model_selection

test_fold = numpy.concatenate([
    # The training data: -1 means these rows are never used for validation.
    numpy.full(x_train.shape[0], -1, dtype=numpy.int8),
    # The development data, all assigned to fold 0.
    numpy.zeros(x_dev.shape[0], dtype=numpy.int8)
])
cv = sklearn.model_selection.PredefinedSplit(test_fold)

Finally, we provide cv as a keyword argument to the grid or random search constructor, and then train. For instance, similar to this example we might do something like:

import sklearn.ensemble

base = sklearn.ensemble.RandomForestClassifier()
grid = {"bootstrap": [True, False],
        "max_features": [1, 3, 5, 7, 9, 10]}
model = sklearn.model_selection.GridSearchCV(base, grid, cv=cv)
model.fit(x, y)

Now just add n_jobs=-1 to the constructor for model to spread the work across all your logical cores.

References

Bergstra, J., and Bengio, Y. 2012. Random search for hyperparameter optimization. Journal of Machine Learning Research 13: 281-305.

arXiv vs. LingBuzz

In the natural language processing community, there has been a bit of a kerfuffle about the ACL preprint policy, which essentially prevents you from submitting a manuscript to preprint aggregation websites like arXiv while the m.s. is also under review for a conference. I personally think this is a good policy: double-blind review is really important for fairness. This led me to reflect a bit on the outsized role that arXiv plays in natural language processing research. It is interesting to contrast arXiv with LingBuzz, a preprint aggregator for formal linguistics research.1 arXiv is visually ugly and cluttered, expensive (it somehow takes over $800,000 of the Simons Foundation’s money to run every year), and submissions are subject to detailed, strict, carefully enforced editorial guidelines. In contrast, LingBuzz has a minimalistic text interface, is run and operated by a single professor (Michal Starke at the University of Tromsø), and its editorial guidelines are simple (they fit on a single page) and laxly enforced (mostly after the fact). Despite the laissez-faire attitude at LingBuzz, it has seen some rather contentious debates involving the usual trollish suspects (Postal, Everett, Behme, etc.), but it has managed to keep things under control. But what I really love about LingBuzz is that, unlike arXiv, no linguist is under the impression that it is any sort of substitute for peer review, or that authors need to know about (and cite) late-breaking work only available on LingBuzz. I think NLP researchers should take a hint from this and stop pretending arXiv is a reasonable alternative to peer review.

Endnotes

1. There are a few other such repositories. The Rutgers Optimality Archive (ROA) was once a popular repository for pre-prints of Optimality Theory work, but its contents are re-syndicated on LingBuzz and Optimality Theory is largely dead anyways. There is also the Semantics Archive.

Text encoding issues in Universal Dependencies

Do you know why the following comparison (in Python 3.7) fails?

>>> s1 = "ड़"
>>> s2 = "ड़"
>>> s1 == s2
False

I’ll give you a hint:

>>> len(s1)
1
>>> len(s2)
2

Despite the two strings rendering identically, they are encoded differently. The string s1 is a single-codepoint sequence, whereas s2 contains two codepoints. Thus string comparison fails, whether it’s done at the level of bytes or of Unicode codepoints.

Some NLP researchers are aware of issues arising from faulty string encoding. Eckhart de Castilho (2016), for example, describes a tool which automatically identifies misencoded pre-trained data, whereas Wu & Yarowsky (2018) report issues using an existing tool for transliteration on certain languages because of encoding issues. However, I suspect that far fewer NLP researchers are familiar with the aforementioned problem, which is specific to Unicode normalization. To put it simply, Unicode defines four normalization forms (and associated conversion algorithms) for strings, and the key distinction is between “composed” and “decomposed” forms of characters (using that term in a pretheoretic sense). The string s1 is composed into a single Unicode codepoint; s2 is decomposed into two.

Unfortunately, three columns of the Hindi Dependency Treebank (hi_hdtb, commit 54c4c0f; Bhat et al. 2017, Palmer et al. 2009) have a chaotic mix of composed and decomposed representations. It seems most if not all of these have to do with the encoding of the six nuqta (‘dot’) consonants, which are usually found in borrowings from Arabic or Persian (via Urdu, presumably). In Devanagari these consonants are written by adding a dot to a phonetically similar native consonant; for instance ड [ɖə] plus the nuqta produces ड़ [ɽə]. As is usually the case in Unicode, there is more than one way to do it: you can either encode ड़ with a composed character (U+095C DEVANAGARI LETTER DDDHA) or with the native Devanagari character (U+0921 DEVANAGARI LETTER DDA) plus a combining character (U+093C DEVANAGARI SIGN NUKTA). In practical terms, this means that strings containing different encodings of <ṛa> (as it is sometimes transliterated) will be treated as totally separate during training and evaluation, except on the off chance that all associated tools perform Unicode normalization ahead of time.

This does have negative consequences for NLP. Consider the UDPipe system (Straka & Straková 2017) at the CoNLL 2017 shared task on dependency parsing (Zeman et al. 2017), for which the primary metric is labeled attachment score (LAS). I first attempted to replicate the UDPipe results for the Hindi Dependency Treebank. Using UDPipe 1.2.0, word2vec (commit 20c129a), the hyperparameters given in the authors’ supplementary materials, and the official evaluation script, I obtain LAS = 87.09 on the “gold tokenization” subtask. However, I can improve on this simply by converting the training, development, and test data to a consistent normalization like so:

for FILE in *.conllu; do
    TMPFILE="$(mktemp)"
    uconv -x nfkc "${FILE}" > "${TMPFILE}"
    mv "${TMPFILE}" "${FILE}"
done

and then retraining. Here I have chosen to apply the NFKC (“compatibility composed”) normalization form. While Zeman et al. do not discuss the encoding of the labeled Universal Dependencies data, they do mention that they apply NFKC normalization to the additional raw data. But it doesn’t really matter in this case which form you choose so long as you are consistent. After retraining, I obtain LAS = 87.38, or .29 points for free. I also ran a “mismatch” experiment, in which the training and testing data have different normalization forms; naturally, this causes a slight degradation to LAS = 86.98.

Straka & Straková (2017) report a separate set of experiments in which they have attempted to rebalance the training-development-test splits. Just to be sure, I repeated the above experiments using their original rebalancing script. With the baseline (mixed-normalization) data, I can replicate their result exactly: LAS = 87.30. With a consistent NFKC normalization of training, development, and test data, I get LAS = 87.50. And with a normalization mismatch between training and test data, I get LAS = 87.07, a slight degradation. Once again, the improvements are more or less for free.

While I have not yet done a systematic audit, I found three other UD treebanks with encoding issues. The ar_padt treebank has a non-canonical ordering of combining characters in the lemma column (the shaddah, which indicates geminates, should come before the fathah and not the other way around), but this is unlikely to have any major effect on model performance because it uses this non-canonical ordering consistently. The ko_kaist and ur_udtb treebanks also have minor inconsistencies.

Unfortunately my corporate overlord doesn’t permit me to file a pull request here because the Hindi data is released under a CC BY-NC-SA license. But if you’re not so constrained, feel free to do so, and ping this thread once you have! And pay attention in the future.

References

Bhat, R. A., Bhatt, R., Farudi, A., Klassen, P., Narasimhan, B., Palmer, M., Rambow, O., Sharma, D. M., Vaidya, A., Vishnu, S. R., and Xia, F. 2017. The Hindi/Urdu Treebank Project. In Ide, N., and Pustejovsky, J. (eds.), The Handbook of Linguistic Annotation, pages 659-698. Springer.
Eckhart de Castilho, R. 2016. Automatic analysis of flaws in pre-trained NLP models. In 3rd International Workshop on Worldwide Language Service Infrastructure and 2nd Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies, pages 19-27.
Palmer, M., Bhatt, R., Narasimhan, B., Rambow, O., Sharma, D. M., and Xia, F. 2009. Hindi syntax: Annotating dependency, lexical predicate-argument structure, and phrase structure. In ICON, pages 14-17.
Straka, M., and Straková, J. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99.
Wu, W. and Yarowsky, D. 2018. A comparative study of extremely low-resource transliteration of the world’s languages. In LREC, pages 938-943.
Zeman, D., Popel, M., Straka, M., Hajič, J., Nivre, J., Ginter, F., … and Li, J. 2017. CoNLL 2017 Shared Task: Multilingual parsing from raw text to Universal Dependencies. In CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-19.

Lessons learned from my time at Google

  • C++11 is a powerful, elegant language and the right choice for performant general-purpose code. Bash is an excellent lingua franca for chaining a long series of commands. Python is best for everything else.
  • Data should be passed around in schematic form, with a compact serialization over the wire and a human-readable format at rest. Protocol buffers (and the lesser-known text format) are an ideal cross-language solution.
  • Grammar development is more important than model building.
  • Model building is easier than deployment.
  • Whiteboards are useful.
  • I can only do certain sorts of work without an office (yes, that thing with a door).