Defectivity in Norwegian

[This is part of a small but growing series of defectivity case studies.]

Icelandic is not the only Scandinavian language to exhibit defectivity in imperatives: Rice (2003, 2005; henceforth R) describes a superficially similar pattern of defectivity in the imperatives of Norwegian verbs.

In Norwegian, the infinitival form of most verbs consists of the particle å, the verb stem, and a schwa (which, as in German, is spelled -e). Such verbs’ imperatives then consist of the bare stem, without the particle or the schwa; e.g., å skrive ‘to write’/skriv ‘write!’. A second, smaller class of verbs are monosyllables ending in a (non-schwa) vowel. These verbs use the bare verb stem in both the infinitive and the imperative; e.g., å tre ‘to step’/tre ‘step!’. While R does not go into any detail about how these two patterns might be encoded, one might posit two allomorphs of the infinitive suffix, -e and zero. Presumably this allomorphy is at least in part lexically conditioned, since it seems necessary to distinguish between minimal pairs like å vie ‘to dedicate’/vi ‘dedicate!’, which belongs to the former class, and å si ‘to say’/si ‘say!’, which belongs to the latter. However, R gives only a few examples of vowel-final monosyllables with infinitives in -e (all other verbs of this shape have zero infinitives), so it is possible these are just lexical exceptions and the conditioning of the allomorphy is mostly phonological.
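
For concreteness, here is one way the posited two-allomorph analysis might be rendered in code. This is a toy sketch of my own (R proposes nothing of the sort), with a hypothetical and certainly incomplete lexical listing of the zero class:

# Toy model of the infinitive allomorphy posited above. The zero-class
# listing is hypothetical and surely incomplete.
ZERO_CLASS = {"tre", "si"}  # vowel-final monosyllables with zero infinitives

def infinitive(stem: str) -> str:
    # The default allomorph is -e; the zero allomorph is lexically conditioned.
    return "å " + stem + ("" if stem in ZERO_CLASS else "e")

def imperative(stem: str) -> str:
    # The imperative is the bare stem, with neither the particle nor the schwa.
    return stem

assert infinitive("skriv") == "å skrive"  # å skrive ‘to write’
assert infinitive("vi") == "å vie"        # å vie ‘to dedicate’
assert infinitive("si") == "å si"         # å si ‘to say’
assert imperative("skriv") == "skriv"     # skriv ‘write!’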

A third class of verbs are those whose stem ends in a rising-sonority consonant cluster; e.g., åpne ‘to open’, sykle ‘to bike’.1 These superficially resemble the first class of verbs (e.g., å skrive) in that they end in a schwa in the infinitive. However, Norwegian does not permit rising sonority codas, so the expected *åpn, *sykl, and so on are ill-formed.
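
The phonotactic restriction itself is also easy to sketch. Here the sonority scale is a crude approximation of my own, operating on orthography rather than on proper phonological representations:

# Crude sonority scale (higher = more sonorous).
SONORITY = {"p": 1, "t": 1, "k": 1, "b": 1, "d": 1, "g": 1,
            "f": 2, "s": 2, "v": 3, "m": 4, "n": 4, "l": 5, "r": 5}

def ill_formed_imperative(stem: str) -> bool:
    # True if the stem ends in a two-consonant cluster with rising
    # sonority, which Norwegian does not permit word-finally.
    if len(stem) < 2:
        return False
    penult, final = stem[-2], stem[-1]
    return (penult in SONORITY and final in SONORITY
            and SONORITY[final] > SONORITY[penult])

assert ill_formed_imperative("åpn")        # *åpn ‘open!’
assert ill_formed_imperative("sykl")       # *sykl ‘bike!’
assert not ill_formed_imperative("skriv")  # skriv ‘write!’ is fine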

According to R, some speakers simply use circumlocutions to avoid the imperative of such verbs, making this a standard case of defectivity. However, R mentions several other strategies used by Norwegian speakers:2

  • The word-final sonorant can be made syllabic (e.g., [oːpn̩]).
  • If the cluster consists of a voiceless consonant followed by a sonorant, the sonorant can be devoiced, reducing the sonority rise (e.g., [oːpn̥]).
  • One can insert a schwa to break up the cluster (e.g., [oːp.pən]).
  • One can insert a schwa after the cluster (e.g., [oːp.nə]).

One question that arises is whether there are any other places in the Norwegian grammar where we would expect word-final rising-sonority consonant clusters to surface. As others have noted (e.g., Albright 2009), most if not all instances of inflectional defectivity are limited to specific morphological categories. For speakers who cannot generate an imperative of verbs like åpne or sykle, is this defectivity limited to the category of imperatives, or is it found anywhere else in the language?

Endnotes

  1. R gives the infinitives of this third class of verbs without the å particle. It is unclear to me whether this is intentional or just an oversight.
  2. These forms are ones I have posited on the basis of R’s description, which is not as detailed as one might like.

References

Albright, A. 2009. Lexical and morphological conditioning of paradigm gaps. In C. Rice and S. Blaho (eds.), When Nothing Wins: Modeling Ungrammaticality in OT, pages 117-164. Equinox.
Rice, C. 2003. Dialectal variation in Norwegian imperatives. Nordlyd 31: 372-384.
Rice, C. 2005. Optimal gaps in optimal paradigms. Catalan Journal of Linguistics 4: 155-170.

Defectivity in Icelandic

Hansson (1999; henceforth H) discusses an interesting case of defectivity in Icelandic imperative formation. According to H, this language has three types of (2sg.) imperative.

  • The root imperative is available only as a “deliberate archaism”; it won’t be considered further.
  • The full imperative consists of the root plus a coronal suffix plus a 2sg. pronominal enclitic -u /ʏ/.
  • The clipped imperative also consists of the root plus a coronal suffix but uses a contrastively stressed pronoun ‘you’ (cf. English ‘YOU work!’) instead of a clitic.

For example, the full imperative for taka ‘to take’ is taktu [ˈtʰaxtʏ] and the clipped imperative is takt ÞÚ [tʰaxt ˈθuː].1 H develops an account of the allomorphy of the coronal suffix in the full and clipped imperatives; going forward I will cite the full forms, since the distinction is irrelevant here. Under H’s analysis, there are two allomorphs:

  • /-T-/ is a [−spread glottis] coronal obstruent surfacing as [t] or [ð] depending on context; e.g., the full imperative for negla ‘to nail’ is negldu [ˈnɛɣ͡ltʏ].2
  • /-Tʰ-/ is a [+spread glottis] coronal obstruent, surfacing as [t] with devoicing of preceding stem-final consonants; e.g., the full imperative for synda ‘to swim’ is syntu [ˈsɪn̥tʏ].

H claims that “[f]or the vast majority of verbs, the choice of allomorph is uniquely determined on the basis of the root-final consonant(s)” (p. 108), implying that this is a phonologically conditioned allomorphy, though the conditioning is not given in prose form. H also implies (fn. 4) that this is suppletive allomorphy, though this assumption is also not justified. Let us assume, for sake of argument, that both assumptions are correct and this is a case of phonologically conditioned suppletive allomorphy. Finally, H notes that under his assumptions, there are certain roots for which either allomorph would give the same imperative surface form.

There are several exceptional verbs for which the phonological conditioning H proposes yields an incorrect result. For instance, the full imperative of senda ‘to send’ is the /-T-/ form sendu [ˈsɛntʏ] rather than the expected /-Tʰ-/ form *[ˈsɛn̥tʏ].3 H draws attention to weak verbs whose roots end in /ll, nn/. For these, H’s account of the phonological conditioning ought to prefer /-T-/, but most select /-Tʰ-/.4

There are four strong verbs whose roots end in /ll, nn/. So far, other than the characteristic ablaut, we have seen no reason to treat imperative formation in the strong verbs differently than in weak verbs.5 For example, for stela ‘to steal’, the full imperative is the /-T-/ form steldu [ˈstɛltʏ]. Yet, there are three strong verbs in /ll, nn/ for which neither possible form of the imperative is well-formed. These are the verbs vinna ‘to work’ (*vinndu, *vinntu), spinna ‘to spin (s.t.)’ (*spinndu, *spinntu), and falla ‘to fall; flunk’ (*falldu, *falltu). And to make matters more complex, there is one strong verb in /nn/ for which the “expected” /-T-/ is acceptable: the full imperative of finna ‘to find’ is finndu [ˈfɪntʏ].

H identifies the following explananda for imperative formation in Icelandic.

  • The imperative stem is always the same as the past stem in weak verbs.
  • Yet, defectivity is found only in imperatives and never in pasts.
  • Defectivity occurs only in strong verbs.
  • Defectivity is found only in roots in /ll, nn/, a form which “usually is indicative of exceptionality in allomorph selection” (p. 344).

It is not obvious to me that the first explanandum is meaningful. While many linguists believe in “Priscian”-like mechanisms which permit direct encoding of these kinds of facts, the mere stem identity of two semantically distant inflectional categories is not itself compelling evidence. In this particular case, one might implement these facts without referring to identity by deriving the allomorphy from a verbal theme, perhaps a floating [α spread glottis] feature, which surfaces in both the imperative and the past. Thus roots selecting /-Tʰ-/ might be underlyingly something like /√-ʰ/, where √ denotes the root and /ʰ/ a thematic [+spread glottis] specification.

The second explanandum does seem to be meaningful, even independently of the first. One fact that might be relevant here is that (other than the enclitic) the Icelandic imperative is bare, whereas weak verb stems are, to my knowledge, always followed by a vowel-initial suffix. So one could imagine that this is, in part, a phonotactic effect at some level of prosodic structure that does not include the clitic.

The third explanandum also seems meaningful. One can, for instance, frame it as a simple statistical hypothesis test, the null hypothesis being that imperative defectivity is independent of the strong/weak distinction. While I don’t have psychologically plausible counts of the strong and weak verbs (that is, counts over the verbs speakers actually know, the numbers needed to compute the sufficient statistics for this test) in front of me, I suspect the probability of observing this pattern under the null hypothesis is vanishingly small.
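
For what it’s worth, here is a sketch of such a test using only the counts cited in this post (the four strong and 33 weak verbs in /ll, nn/); a serious version would need counts for the whole verbal lexicon:

from scipy.stats import fisher_exact

# Rows: strong vs. weak verbs in /ll, nn/; columns: defective vs. not.
# These are just the counts cited above, not a proper census.
table = [[3, 1],    # strong: vinna, spinna, falla vs. finna
         [0, 33]]   # weak: none reported defective
_, p = fisher_exact(table)
print(f"p = {p:.2g}")  # roughly 0.0005: far below any conventional threshold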

The fourth and final explanandum is certainly one worth incorporating into any analysis. However, I think the obvious step has not yet been taken: serious attempts ought to be made to incorporate it into a phonological account of the coronal suffix allomorphy, something H unfortunately has not attempted. Before we regard verbs in /ll, nn/ as lexically exceptional, we ought first to exhaust possible phonological accounts. One direction for future research would be to better understand the allomorphy associated with the imperative and past stems in Icelandic more generally.

H proposes, essentially, that defectivity arises in strong verbs in /ll, nn/ because such verbs lack a coronal-suffixed past tense form elsewhere in the paradigm; he adds that the strong imperative finndu is exempted because there are other /…nt/ forms in the paradigm of that verb. So many, many, many different things have to go wrong for an imperative to be defective in Icelandic: essentially, the form has to be an imperative of a strong verb in /ll, nn/ which lacks other coronal-final stems, and these conditions come together in just three verbs in the entire language. Whether or not one finds H’s account compelling, it is very difficult to reason about the theory of defectivity from the existence of no more than three verbs in a language. We might do better to focus on languages, like Greek or Russian, in which inflectional defectivity has much higher type frequency.

Endnotes

  1. Whether or not the full and the clipped imperative are pragmatically substitutable is unclear to me from H’s description.
  2. Unfortunately, H does not always give the orthographic form of the words he is citing, and given the language’s famously difficult spelling, I am not always certain I have guessed the correct spelling for inflected forms. However, it appears to me that the contrast between /-T-/ and /-Tʰ-/ is spelled as -d- vs. -t-.
  3. Once again, it is not clear why this is the expected form because the only description of the phonological conditioning is given in a sketchy Optimality Theory analysis (H:§2.1-2).
  4. The relevant statistic is that 6 out of 33 weak verbs in /ll, nn/ select the “expected” /-T-/. From this H concludes that in this environment, “the exceptions far outnumber the regulars” (p. 113). I note briefly that under the tolerance principle (Yang 2005), a rule with N = 33 potential examples tolerates up to N/ln N ≈ 33/3.50 ≈ 9 exceptions, so /-Tʰ-/ selection, with just 6 exceptions, could be a productive generalization in this environment according to that theory.
  5. In H’s examples, strong imperatives use the same ablaut grade as the infinitive, so we just have to take his word that they are in fact strong.

References

Hansson, G. Ó. 1999. ‘When in doubt…’: intraparadigmatic dependencies and gaps in Icelandic. In Proceedings of NELS 29, pages 105-119. GLSA Publications.
Yang, C. 2005. On productivity. Linguistic Variation Yearbook 5: 333-370.

On “from scratch”

For a variety of historical and sociocultural reasons, nearly all natural language processing (NLP) research involves processing of text, i.e., written documents (Gorman & Sproat 2022). Furthermore, most speech processing research uses written text either as input or output.

A great deal of speech and language processing treats words (however they are understood) as atomic, indivisible units rather than the “intricately structured objects linguists have long recognized them to be” (Gorman in press). But there has been a recent trend to instead work with individual Unicode codepoints, or even with the individual bytes of a Unicode string encoded in UTF-8. When such systems are part of an “end-to-end” neural network, they are sometimes said to work “from scratch”; see, e.g., Gillick et al. 2016 and Li et al. 2019, who both use this exact phrase to describe their contributions. There is an implication that such systems, by bypassing the fraught notion of the word, have somehow eliminated the need for linguistic insight altogether.
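
To make the distinction concrete, here is how one and the same string decomposes into Unicode codepoints versus UTF-8 bytes (Python, purely for illustration):

# The same string as a sequence of codepoints and as a sequence of bytes.
s = "naïve"
print([ord(c) for c in s])      # 5 codepoints: [110, 97, 239, 118, 101]
print(list(s.encode("utf-8")))  # 6 bytes: ï is encoded as [195, 175]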

The expression “from scratch” makes an analogy to baking: it is as if we are making angel food cake by sifting flour, superfine sugar, and cream of tartar, rather than using the “just add water and egg whites” mixes from Betty Crocker. But this analogy understates just how much linguistic knowledge can be baked in (or perhaps “sifted in”) to writing systems. Writing systems are essentially a type of linguistic analysis (Sproat 2010), and like any language technology, they necessarily reify the analysis that underlies them.1 The linguistic analysis underlying a writing system may be quite naïve but may also encode sophisticated phonemic and/or morphemic insights. Thus written text, whether expressed as Unicode codepoints or UTF-8 bytes, may have quite a bit of linguistic knowledge sifted and folded in.

A familiar example of this kind of knowledge comes from English (Gorman in press). In this language, changes in vowel quality triggered by the addition of “level 1” suffixes like -ity are generally not indicated in written form. Thus sane [seɪn] and sanity [sæ.nɪ.ti], for instance, are spelled more similarly than they are pronounced (Chomsky and Halle 1968: 44f.), meaning that this vowel change need not be modeled when working with written text.

Endnotes

  1. The Sumerian and Egyptian scribes were thus history’s first linguists, and history’s first language technologists.

References

Chomsky, N., and Halle, M. 1968. The Sound Pattern of English. Harper & Row.
Gillick, D., Brunk, C., Vinyals, O., and Subramanya, A. 2016. Multilingual language processing from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1296-1306.
Gorman, K. In press. Computational morphology. In Aronoff, M. and Fudeman, K., What is Morphology? 3rd edition. Blackwell.
Gorman, K., and Sproat, R. 2022. The persistent conflation of writing and language. Paper presented at Grapholinguistics in the 21st Century.
Li, B., Zhang, Y., Sainath, T., Wu, Y., and Chan, W. 2019. Bytes are all you need: end-to-end multilingual speech recognition and synthesis with bytes. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5621-5625.
Sproat, R. 2010. Language, Technology, and Society. Oxford University Press.

The computational revolution in linguistics

(Throughout this post, I have taken pains not to name any names. The beauty of subtweeting and other forms of subposting is that nobody knows for sure you’re the person being discussed unless you volunteer yourself. So, don’t.)

One of the more salient developments in linguistics as a discipline over the last two decades is the way in which computational knowledge has diffused into the field.1 Twenty years ago, there were but a handful of linguistics professors in North America who could perform elaborate corpus analyses, apply machine learning and statistical analysis, or extract acoustic measurements from an audio file. And, while it was in some ways quite robust, speech and language processing at the turn of this century simply did not hold the same importance it does nowadays.

While some professors—including, to their credit, many of my mentors and colleagues—can be commended for having “skilled up” in the intervening years, this knowledge has, I am sad to say, mostly advanced one death (and subsequent tenure-line renewal) at a time. This has negative consequences for linguistics students who want to train for or pivot to a career in the tech sector, since there are professors who were, in their time, computationally sophisticated, but who lack the skills a rising computational linguist is expected to have mastered. In an era of contracting tenure rolls and other forms of casualization in the academy, this risks pushing out legitimate, albeit staid, lines of linguistic inquiry in favor of areas favored by capitalists.2

Yet I believe that this upskilling has a lot to contribute to linguistics as a discipline. There are many core questions about language use, acquisition, variation, and change which are best answered with a computational simulation that forces us to be explicit about our assumptions, or a corpus study that tells us what people really said, or a statistical analysis that tells us whether our correlations are likely to be meaningful, or even a machine learning system that helps us rapidly label linguistic data.3 It is a boon to our field that linguists of any age can employ these tools when appropriate.

This is not to say that the transition has not been occasionally ugly. First, there are the occasional nasty turf wars over who exactly is a linguist.4 Second, the standards of quality for work in this area must be negotiated and imposed. While a syntax paper in NL&LT from even 30 years ago is easily readable today, the computational methods of even widely praised papers from 15 or 20 years ago are, frankly, often quite sloppy. I have found it necessary to explain this to students who want to engage with this older work lest they lower their own methodological standards.

I discern at least a few common sloppy habits in this older computational work, focusing for the moment on computational cognitive models of linguistic behavior.

  1. If a proposed computational model is compared to some “baseline” or older model, this older model is usually an ancient associationist model from psychology. This older model naturally lacks much of the rich linguistic specifications of the proposed model, and naturally it fails to model the data. Deliberately picking a bad baseline is putting one’s finger on the scale.
  2. Comparison of different computational models is usually informal. One should instead use statistical model comparison methods; see the sketch following this list.
  3. The dependent variable for modeling is often derived from poorly-designed human subjects experiments. The subjects in these experiments may be instructed to perform a task they are unlikely to be able to do consciously (i.e., the tasks are cognitively impenetrable). Unjustified assumptions about appropriate scales of measurement may have been made. Finally, the n’s are often needlessly small. Computational cognitive models demand high-quality measures of the behaviors they’re meant to model.
  4. Once the proposed model has been shown better than the baseline, it is reified far beyond what the evidence suggests. Computational cognitive modeling can at most show that certain explicit assumptions are consistent with the observed data: they cannot establish much beyond that.
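
To illustrate the second point, here is a minimal sketch of one such method, a paired permutation test over per-item losses; the losses below are simulated purely for illustration:

import numpy as np

rng = np.random.default_rng(0)

# Simulated per-item losses (e.g., negative log-likelihoods) for two
# models evaluated on the same 40 items; real losses would go here.
loss_a = rng.normal(1.0, 0.3, size=40)
loss_b = loss_a - rng.normal(0.05, 0.2, size=40)

# Paired permutation test on the mean difference: under the null
# hypothesis, the sign of each per-item difference is exchangeable.
diff = loss_a - loss_b
observed = diff.mean()
signs = rng.choice([-1.0, 1.0], size=(10_000, diff.size))
permuted = (signs * diff).mean(axis=1)
p = (np.abs(permuted) >= abs(observed)).mean()
print(f"mean difference = {observed:.3f}, p = {p:.3f}")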

The statistician Andrew Gelman writes that scientific discourse sometimes proceeds as if earlier published work has a greater claim to truth than later research critical of the original findings (which may or may not be published yet).5 Critical interpretation of this older computational work is increasingly called for as our methodological standards continue to mature. I find reviewers (and literature reviewers) overly deferential to prior work of dubious quality simply because of its priority.

Endnotes

  1. An under-appreciated element of this process is that it is simply easier to do linguistically-relevant things with computers than it was 20 years ago. For this, one should thank Python and R, NumPy and scikit-learn, and of course tools like Praat and Parselmouth.
  2. I happen to think college education should not be merely vocational training.
  3. I happen to think most of these questions can be answered with a cheap laptop, and only a few require a CUDA-enabled GPU.
  4. I suspect this is mostly a response to the rapidly casualizing academy. Unfortunately, any question about whether we should be doing X in linguistics is misinterpreted as a question about whether people who do X deserve to have a job. This is a presupposition failure for me: I believe everyone deserves meaningful work, and that academic tenure is a model of labor relations that should be expanded beyond the academy.
  5. To free ourselves of this bias, Gelman proposes what he calls the time-reversal heuristic, in which one imagines the temporal order reversed (e.g., that the later failed replication is now the first published result on the matter) and then re-evaluates the evidence. Similar thinking is called for when interacting with older computational work.

Lambda lifting in Python

Python really should have a way to lambda-lift a value e to a no-argument callable which returns e. Let us suppose that our e is denoted by the variable alpha. One can approximate such a lifting by declaring alpha_fnc = lambda: alpha. Python lambdas are slow compared to true currying functionality, like that provided by functools.partial and the functions of the operator library, but this basically works. The problem, however, is that lambda expressions in Python, unlike lambdas in, say, C++11, capture variables from the enclosing scope by reference rather than by value (the variable is looked up when the lambda is called, not when it is defined), so a lambda which refers to an outer variable is context-dependent. The following interactive session illustrates the problem.

In [1]: alpha_fnc = lambda: alpha

In [2]: alpha_fnc()
------------------------------------------------------------------------
NameError Traceback (most recent call last)
Input In [2], in ()
----> 1 alpha_fnc()

Input In [1], in ()
----> 1 alpha_fnc = lambda: alpha

NameError: name 'alpha' is not defined

In [3]: alpha = .5

In [4]: alpha_fnc()
Out[4]: 0.5

In [5]: alpha = .4

In [6]: alpha_fnc()
Out[6]: 0.4
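
For what it’s worth, one can approximate capture-by-value using a default argument, which is evaluated exactly once, at definition time. Continuing the session above:

In [7]: alpha_fnc = lambda alpha=alpha: alpha

In [8]: alpha = .3

In [9]: alpha_fnc()
Out[9]: 0.4

The obvious downside is that the resulting function can also be called, presumably by accident, with a single positional argument.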

When rule directionality does and does not matter

At the Graduate Center we recently hosted an excellent lecture by Jane Chandlee of Haverford College. Those familiar with her work may know that she has been studying, for some time now, two classes of string-to-string functions called the input strictly local (ISL) and output strictly local (OSL) functions. These are generalizations of the familiar notion of the strictly local (SL) languages proposed by McNaughton and Papert (1971). For definitions of ISL and OSL functions, see Chandlee et al. 2014 and Chandlee 2014. Chandlee and colleagues have argued that virtually all phonological processes are ISL, OSL, or both (note that their intersection is non-null).

In her talk, Chandlee attempted to formalize the notions of iterativity and non-iterativity in phonology with reference to ISL and OSL functions. One interesting side effect of this work is that one can, quite easily, determine what makes a phonological process direction-invariant or direction-specific. In FSTP (Gorman & Sproat 2021:§5.1.1) we describe three notions of rule directionality (ones which are quite a bit less general than Chandlee’s notions) from the literature, but conclude: “Note, however, that directionality of application has no discernable effect for perhaps the majority of rules, and can often be ignored.” (op. cit., 53) We didn’t bother to determine when this is the case, but Chandlee shows that the rules which are invariant to direction of application (in our sense) are exactly those which are ISL ∩ OSL; that is, they describe processes which are both ISL and OSL, in the sense that the string-to-string functions (or maps, to use her term) they denote can be encoded either as ISL or as OSL.
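
To make direction-specificity concrete, consider a toy example of my own, far less general than Chandlee’s characterization: iterative application of the regressive rule a → b / _ b gives different outputs depending on the direction of the scan, because each application can feed another.

def apply_rule(s: str, rightward: bool) -> str:
    # Iteratively applies a -> b / _ b, scanning in the given direction.
    chars = list(s)
    positions = range(len(chars) - 1)
    if not rightward:
        positions = reversed(positions)
    for i in positions:
        if chars[i] == "a" and chars[i + 1] == "b":
            chars[i] = "b"
    return "".join(chars)

print(apply_rule("aab", rightward=True))   # abb: only the last a rewrites
print(apply_rule("aab", rightward=False))  # bbb: each rewrite feeds the next

Roughly speaking, a rule whose structural change cannot feed (or bleed) its own triggering environment yields the same output under either scan; those are the direction-invariant cases.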

As Richard Sproat (p.c.) points out to me, there are weaker notions of direction-invariance we may care about in the context of grammar engineering. For instance, it might be the case that some rule is, strictly speaking, direction-specific, but the language of input strings is not expected to contain any relevant examples. I suspect this is quite common also.

References

Chandlee, J. 2014. Strictly local phonological processes. Doctoral dissertation, University of Delaware.
Chandlee, J., Eyraud, R., and Heinz, J. 2014. Learning strictly local subsequential functions. Transactions of the Association for Computational Linguistics 2: 491-503.
Gorman, K., and Sproat, R. 2021. Finite-State Text Processing. Morgan & Claypool.
McNaughton, R., and Papert, S. A. 1971. Counter-Free Automata. MIT Press.

A* shortest string decoding for non-idempotent semirings

I recently completed some work, in collaboration with Google’s Cyril Allauzen, on a new algorithm for computing the shortest string through a weighted finite-state automaton. For so-called path semirings, the shortest string is given by the shortest path, but up until now there was no general-purpose algorithm for computing the shortest string over non-idempotent semirings (like the log or probability semiring). Such an algorithm would make it much easier to decode with interpolated language models or elaborate channel models in a noisy-channel formalism. In this preprint, we propose such an algorithm using A* search and lazy (“on-the-fly”) determinization, and prove that it is correct. The algorithm in question is implemented in my OpenGrm-BaumWelch library by the baumwelchdecode command-line tool.

Please don’t send .docx or .xlsx files

.docx and .xlsx files can only be read on a small subset of devices, and only after purchasing a license. It is frankly a bit rude to expect everyone to have such licenses in 2022 given the proliferation of superior, and free, alternatives. If the document is static, read-only content, convert it to a PDF. If it’s something you want me to edit or comment on, or which will be changing with time, send me the document via Microsoft 365 or the equivalent Google offerings. Or a Git repo. Sorry to be grumpy, but everyone should know this by now. If you’re still emailing these around, please stop.

WFST talk

I have posted a lightly-revised slide deck from a talk I gave at Johns Hopkins University here. In it, I give my most detailed description yet of the weighted finite-state transducer formalism and describe two reasonably interesting algorithms: the optimization algorithm underlying Pynini’s optimize method and Thrax’s Optimize function, and a new A*-based single shortest string algorithm for non-idempotent semirings underlying BaumWelch’s baumwelchdecode CLI tool.