Professional organizations in linguistics

I am a member of the Linguistic Society of America (LSA) and the Association for Computational Linguistics (ACL), US-based professional organizations for linguists and computational linguists, respectively. (More precisely, I am usually a member. I think both memberships lapsed during the pandemic, and I renewed them once I started going to their respective conferences again.)

I attend LSA meetings when they’re conveniently located (next year’s in Philly and we’re doing a workshop on Logical Phonology), and roughly one ACL-hosted meeting a year as well. As a (relatively) senior scholar I don’t find the former that useful (the scholarship is hit-or-miss and the LSA is dominated by a pandemonium of anti-generativists who are best just ignored), but the networking can be good. The *CL meetings tend to have more relevant science (or at least they did before prompt engineering…) but they’re expensive and rarely held in the Acela corridor.

While the LSA and the ACL are called professional organizations, their real purview is mostly to host conferences. The LSA does some other stuff of course: they run Language, the institutes, and occasionally engage in lobbying, etc. But they do not have much to say about the lives of workers in these fields. The LSA doesn’t tell you about the benefits of unionizing your workplace. The ACL doesn’t give you ethics tips about what to do if your boss wants you to spy on protestors.  They don’t really help you get jobs in these fields either. They could; they just don’t.

There is an interesting contrast here with another professional organization I was once a member of: the Institute of Electrical and Electronics Engineers (IEEE, pronounced “aye Tripoli”). Obviously, I am not an electrical engineer, but electrical engineering was historically the home of speech technology research and their ASRU and SLT conferences are quite good in that field. During the year or so I was an IEEE member, I received their monthly magazine. Roughly half of it was in fact just stories of general interest to electrical engineers; one that stuck with me argued that the laws of physics preclude the existence of “directed energy weapons” claimed to cause Havana Syndrome. But the other half was specifically about the professional life of electrical engineers, including stuff about interviewing, the labor market outlook, and working conditions.

Imagine if Language had a quarterly professional column or if the ACL Anthology had a blog-post series…

Hiring season

It’s hiring season and your dean has approved your linguistics department for a new tenure line. Naturally, you’re looking to hire an exciting young “hyphenate” type who can, among other things, strengthen your computational linguistics offerings, help students transition into industry roles, and perhaps even incorporate generative AI into more mundane parts of your curriculum (sigh). There are two problems I see with this. First, most people applying for these positions don’t actually have relevant industry experience, so while they can certainly teach your students to code, they don’t know much about industry practices. Second, an awful lot of them would probably prefer to be full-time software engineers, all things considered, and are going to take leave—if not quit outright—if the opportunity ever becomes available. (“Many such cases.”) The only way to avoid this scenario, as I see it, is to find people who have already been software engineers and don’t want to be one anymore, and fortunately, there are several of us.

Hugging Face needs better curation

Hugging Face is, among other things, a platform for obtaining pre-trained neural network models. We use their tokenizers and transformers Python libraries in a number of projects. While these have a bit more abstraction than I like, and are arguably over-featured, they are fundamentally quite good and make it really easy to, e.g., add a pre-trained encoder. I also appreciate that the tokenizers are mostly compiled code (they’re Rust extensions, apparently), which in practice means that tokenization is IO-bound rather than CPU-bound.

My use case mostly involves loading Hugging Face transformers and their tokenizers and using their encoding layers for fine-tuning. To load a model in transformers, one uses the function transformers.AutoModel.from_pretrained and provides the name of the model on Hugging Face as a string argument. If the model exists but you don’t already have a local copy, Hugging Face will automatically download it for you (stashing the assets in a hidden cache directory). One can do something similar with transformers.AutoTokenizer, or one can request the tokenizer from the model instance.
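A minimal sketch of this workflow (the model name here is just an example):

import transformers

# Downloads the checkpoint to a local cache on first use.
model = transformers.AutoModel.from_pretrained("bert-base-cased")
# The tokenizer is loaded the same way, by model name.
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-cased")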

Now you might think that this would make it easy to, say, write a command-line tool where the user can specify any Hugging Face model, but unfortunately, you’d be wrong. First off, a lot of models, including so-called token-free ones, lack a tokenizer. Why doesn’t ByT5, for instance, provide as its tokenizer a trivial Rust (or Python, even) function that returns bytes? In practice, one cannot support arbitrary Hugging Face models because one cannot count on them having a tokenizer. In this case, I see no alternative but to keep a list of supported models that lack their own tokenizer. Such a list is necessarily incomplete because the model hub continues to grow.
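Concretely, the workaround looks something like the sketch below; the allowlist and the byte-level fallback are hypothetical stand-ins, not any real library’s API:

import transformers

# Hypothetical (and necessarily incomplete) allowlist of models known to
# lack a tokenizer on the hub.
TOKENIZERLESS_MODELS = {"google/byt5-small", "google/byt5-base"}

def get_tokenizer(model_name: str):
    if model_name in TOKENIZERLESS_MODELS:
        # Trivial byte-level fallback: each UTF-8 byte is its own token.
        return lambda text: list(text.encode("utf-8"))
    return transformers.AutoTokenizer.from_pretrained(model_name)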

A similar problem comes with how parameters of the models are named. Most models are trained with dropout and support a dropout parameter, but the name of this parameter is inconsistent from model to model. In UDTube, for instance, dropout is a global parameter and it is applied to each hidden layer of the encoder (which requires us to access the guts of the Hugging Face model), and then again to the contextual subword embeddings just before they’re pooled into word embeddings. Most of the models we’ve looked at call the dropout probability of the encoder hidden_dropout_prob, but others call it dropout or dropout_rate. Because of this, we have to maintain a module which keeps track of what the hidden-layer dropout probability parameter is called.
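Such a module amounts to little more than a lookup table, sketched below using the parameter names mentioned above; the mapping is illustrative, not exhaustive:

# Maps Hugging Face model types to the name of their hidden-layer dropout
# parameter; incomplete by necessity, like the model hub itself.
HIDDEN_DROPOUT_PARAM = {
    "bert": "hidden_dropout_prob",
    "roberta": "hidden_dropout_prob",
    "t5": "dropout_rate",
    "bart": "dropout",
}

def set_hidden_dropout(config, p: float) -> None:
    # Falls back to the most common name if the model type is unknown.
    name = HIDDEN_DROPOUT_PARAM.get(config.model_type, "hidden_dropout_prob")
    setattr(config, name, p)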

I think this is basically a failure of curation. Hugging Face community managers should be out there fixing these gaps and inconsistencies, or at the very least publishing standards for such things. They’re valued at $4.5 billion. I would argue this is at least as important as their efforts with model cards and the like.

The dark triad professoriate

[I once again need to state that I am not responding to any person or recent event. But remember the Law of the Subtweet: if you see yourself in some negative description but are not explicitly named, you can just keep that to yourself.]

There is a long debate about the effects of birth order on stable personality traits. A recent article in PNAS1 claims the effects are near null once proper controls are in place; the commentary it’s paired with suggests the whole thing is a zombie theory. Anyways, one of the claims I remember hearing was that older siblings were more likely to exhibit subclinical “Dark Triad” (DT) traits: Machiavellianism, narcissism, and psychopathy. Alas, this probably isn’t true, but it is easy to tell a story about why this might be adaptive. Time for some game theory. In a zero-sum scenario, if you’re the most mature (and biggest) of your siblings, you probably have more to gain from non-cooperative behaviors, and DT traits ought to select for said behaviors. A concrete (if contrived) example: you can either hog or share the toy, and the eldest is by far the most likely to get away with hogging.

I wonder whether the scarcity of faculty positions—even if overstated (and it is)—might also make dark triad traits adaptive. I know plenty of evil Boomer professors, but not many that are actually DT, and if I had to guess, these traits (particularly the narcissism) are much more common in younger (Gen X and Millennial) cohorts. Then again, this could be age-grading, since anti-social behaviors peak in adolescence and decline afterwards.

Endnotes

  1. This is actually a “direct submission”, not one of those mostly-phony “Prearranged Editor” pieces. So it might be legit.

Python ellipses considered harmful

Python has a conventional object-oriented design, but it was slowly grafted onto the language, something which shows from time to time. Arguably, you see this in the convention that instance methods need self passed as their first argument, and class methods need cls as their first argument. Another place you see it is how Python does abstract classes. First, one can use definitions in the built-in abc module, proposed in PEP-3119, to declare a class as abstract. But in practice most Pythonistas make a class abstract by declaring unimplemented instance methods. There are two conventional ways to do this, either with ellipses or by raising an exception, illustrated below.

# Elliptical style: the abstract method has an empty body.
class AbstractCandyFactory:
    def make_candy(self, batch_size: int): ...

# Exception style: calling the unimplemented method raises immediately.
class AbstractCandyFactory:
    def make_candy(self, batch_size: int):
        raise NotImplementedError

The latter is a bit more verbose, but there is actually a very good reason to prefer it to the former, elliptical version. With the exception version, if one forgets to implement make_candy—say, in a concrete subclass like SnickersFactory(AbstractCandyFactory)—an informative exception will be raised when make_candy is called on a SnickersFactory instance. However, in the elliptical form, the inherited method will be called, and of course it will do nothing because the method has no body. This will likely cause errors down the road, but they will not be nearly as easy to track down because there is nothing to directly link the issue to the failure to override this method. For this reason alone, I consider ellipses used to declare abstract instance methods harmful.
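To make the failure mode concrete, here is a minimal sketch using the hypothetical SnickersFactory mentioned above:

class SnickersFactory(AbstractCandyFactory):
    pass  # Oops: we forgot to override make_candy.

factory = SnickersFactory()
# Under the elliptical style, this call silently returns None:
candy = factory.make_candy(100)
# Under the exception style, the same call raises NotImplementedError,
# pointing directly at the missing override.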

Announcing UDTube

In collaboration with CUNY master’s program graduate Daniel Yakubov, we have recently open-sourced UDTube, our neural morphological analyzer. UDTube performs what is sometimes called morphological analysis in context: it provides morphological analyses—coarse POS tagging, more-detailed morphosyntactic tagging, and lemmatization—to whole sentences using nearby words as context.

The UDTube model, developed in Yakubov 2024, is quite simple: it uses a pre-trained Hugging Face encoder to compute subword embeddings. We mean-pool the last few layers of these embeddings, then mean-pool the subword embeddings of any word that corresponds to multiple subwords. The resulting encoding of the input is then fed to separate classifier heads for the different tasks (POS tagging, etc.). During training we fine-tune the pre-trained encoder in addition to fitting the classifier heads, and we make it possible to set separate optimizers, learning rates, and schedulers for the encoder and classifier modules.
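The pooling logic amounts to something like the following sketch; the function and argument names here are hypothetical illustrations, not UDTube’s actual internals:

import torch

def pool_layers(hidden_states: list[torch.Tensor], n: int = 4) -> torch.Tensor:
    """Mean-pools the last n encoder layers; each layer is (batch, subwords, dim)."""
    return torch.stack(hidden_states[-n:]).mean(dim=0)

def pool_subwords(embeddings: torch.Tensor, word_ids: list[int]) -> torch.Tensor:
    """Mean-pools subword embeddings (subwords, dim) that share a word index."""
    ids = torch.tensor(word_ids)
    return torch.stack([embeddings[ids == w].mean(dim=0) for w in ids.unique()])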

UDTube is built atop PyTorch and Lightning, and its command-line interface is made much simpler by the use of LightningCLI, a module which handles most of the interface work. One can configure the entire thing using YAML configuration files. CUDA GPUs and MPS-era Macs (M1 etc.) can be used to accelerate training and inference (and should work out of the box). We also provide scripts to perform hyperparameter tuning using Weights & Biases. We believe that this model, with appropriate tuning, is probably state-of-the-art for morphological analysis in context.

UDTube is available under an Apache 2.0 license on GitHub and on PyPI.

References

Yakubov, D. 2024. How do we learn what we cannot say? Master’s thesis, CUNY Graduate Center.

News from the east

I am a total sucker for cute content from East Asia. I loved to watch Pangzai do his little drinking tricks. I love to hear what the “netizens” are up to. I love the greasy little hippo. I love the horse archer raves. I even love the chow chows painted as pandas. It’s delightful. Is this propaganda? Maybe; certainly it’s embedded in a larger matrix of Western-oriented soft-power diplomacy. (That’s why we have so many Thai restaurants.) But I suppose I’m blessed to live in a time where you can get so much cute news from halfway across the world.

Our vocation

If you’re a linguist: well, why?

One thing that stands out about the life of the professional linguist is what Chomsky calls the responsibility of intellectuals: “to speak the truth and to expose lies”, in this case uncomfortable truths about language and its role in society. Certainly this responsibility—and privilege, as Chomsky also points out—is an inspiration for many linguists. But other motives abound. I for one am more drawn to learning about (an admittedly narrow corner of) human nature than I am to speaking truth to power, and most likely would have ended up in some other area of social science had I not discovered the field. And there’s nothing wrong with a linguist who is most of all drawn to little logic puzzles, so long as these puzzles are ultimately grounded in those questions about human nature. (I do reject, categorically, those who say that linguists ought to be doing nothing more than “Word Sudoku” or “Wordle with more steps”. Maybe there are people who work solely in those modes, and if so I wish them a very happy alt-ac career transition.)

I think the truths about human nature uncovered by the epistemology-obsessed generativists—including those of the armchair variety—have something to say about the proper organization of society. But one is more likely to get such messages from sociolinguists. Sociolinguists correctly point out that we have unexamined, corrosive ideologies about language, languages, and their speakers that are mostly contrary to the liberal values most of us profess, and they certainly are well-positioned to speak these truths. That said, I do not agree with an often-implicit assumption that sociolinguistics is somehow a more noble vocation than other topics in the field. The “discourse” on this is often fought as a proxy war over hiring: e.g., one I’ve heard before is “Why doesn’t MIT’s linguistics faculty include a sociolinguist?” First off, it sort of does: it includes one of the world’s foremost creolists, who has written extensively about the role of creole studies in neocolonialism and white supremacy. Whether or not a creolist is a sociolinguist is probably more a matter of self-identity than one of observable fact, but there’s no question that creole studies has a lot to give to—but also a lot to answer for on—the problem of linguistic equality. Should the well-rounded linguist have studied sociolinguistics? Absolutely. But there are probably many other areas, topics, or even theories that you think any well-rounded linguist ought to have studied but which are not required or widely taught, and these rarely provoke such discourse.

Optimality Theory on exceptionality

[This post is part of a series on theories of lexical exceptionality.]

I now take a large jump in exceptionality theory from the late ’70s to the mid-aughts. (I am skipping over a characteristically ’90s approach, which I’ll cover in my final post.) I will focus on a particular approach—not the only one, but arguably the most robust one—to exceptionality in Optimality Theory (OT). This proposal is as old as OT itself, but is developed most clearly in Pater (2006), and Pater & Coetzee (2005) propose how it might be learned. I will also briefly discuss the application of the morpheme-specific approach to the yer patterns characteristic of Slavic languages by Gouskova and Rysling, and Rubach’s critique of this approach.

I will have very little to say about the cophonology approach to exceptionality that has also been sporadically entertained in OT. Cophonology holds that morphemes may have arbitrarily different constraint rankings. Pater (henceforth P) is quite critical of this approach throughout his 2006 paper: among other criticisms, he regards it as completely unconstrained. I agree: it makes few novel predictions, and I would challenge cophonologists (if any exist in 2024) to consider how cophonology might be constrained so as to derive interesting predictions about exceptionality.

Indexed constraints

Even the earliest work in Optimality Theory supposed that some constraints might be specific to particular grammatical categories or morphemes. This of course is a loosening of the idea that Con, the constraint inventory, is universal and finite, but it seems to be a necessary assumption. P claims that this device is powerful enough to handle all known instances of exceptionality in phonology. The basic idea is extremely simple: for every constraint X there may also exist indexed constraints of the form X(i) whose violations are only recorded when the violation occurs in the context of some morpheme i.1 There are then two general schemas that produce interesting results.

(1) M(i) >> F >> M
(2) F(i) >> M >> F

Here M stands for markedness and F for faithfulness. As will be seen below, (1) has a close connection to the notions of mutability and catalysis introduced in my earlier post; (2) in turn has a close connection with quiescence and inalterability.

One of P’s goals is to demonstrate that this approach can be applied to Piro syncope. His proposal is not quite as detailed as one might wish, but it is still worth discussing and trying to fill in the gaps. For P, the general syncope pattern arises from the ranking Align-Suf-C >> Max; in prose, it is permissible to delete a segment if doing so brings the suffix in contact with a consonant. This also naturally derives the non-derived environment condition since it specifically mentions suffixhood. P derives the avoidance of tautomorphemic clusters, previously expressed with the VC_CV environment, with the markedness constraint *CCC. This gives us *CCC >> Align-Suf-C >> Max thus far.  This should suffice for derivations whose roots are all mutable and catalytic.

For P, inalterable roots are distinguished from mutable ones by an undominated, indexed clone of Max which I’ll call Max(inalt), giving us a partial ranking like so.

(3) Max(inalt) >> Align-Suf-C >> Max

This is of course an instance of schema (2). Note that since the ranking without the indexing is just Align-Suf-C >> Max, it seemingly treats mutability as the default and inalterability as exceptional, a point I’ll return to shortly.

Quiescent roots in P’s analysis are distinguished from catalytic ones by a lexically specific clone of Align-Suf-C; here the lexically indexed one targets the catalytic suffixes, so we’ll write it Align-Suf-C(cat), giving us the following partial ranking.

(4) Align-Suf-C(cat) >> Max >> Align-Suf-C

This is an instance of schema (1). It is interesting to note that the Align constraint bridges the distinction between target and trigger, since the markedness is a property of the boundary itself. Note also that it seems to treat quiescence as the default and catalysis as exceptional.

Putting this together we obtain the full ranking below.

(5) *CCC, Max(inalt) >> Align-Suf-C(cat) >> Max >> Align-Suf-C

P, unfortunately, does not take the time to compare his analysis to Kisseberth’s (1970) proposal, or to contrast it with Zonneveld’s (1978) critiques, which I discussed in detail in the earlier post. I do observe one potential improvement over Kisseberth. Recall that Kisseberth had trouble with the example /w-čokoruha-ha-nu-lu/ [wčokoruhahanru] ‘let’s harpoon it’, because /-ha/ is quiescent and having a quiescent suffix in the left environment is predicted, counterfactually, to block deletion in /-nu/. As far as I can tell this is not a problem for P: the following suffix /-lu/ is on the Align-Suf-C(cat) lexical list, /-nu/ is not on the Max(inalt) list, and that’s all that matters. Presumably, P gets this effect because the joint operation of the two flavors of Align-Suf-C and *CCC properly localizes the catalysis/quiescence component of the exceptionality. However, P’s analysis does not seem to generate the right-to-left application; it has no reason to favor the attested /n-xipa-lu-ne/ [nxipalne] ‘my sweet potato’ over *[nxiplune]. This reflects a general issue in OT in accounting for directional application.

As I mentioned above, P’s analysis of Piro treats mutability and quiescence as productive and inalterability and catalysis as exceptional. Indeed, it predicts mutability and quiescence in the absence of any indexing, and one might hypothesize that Piro speakers would treat a new suffix of the appropriate shape as mutable and quiescent. I know of no reason to suppose this is correct; for Matteson (1965), these are arbitrary and there is no obvious default, whereas my impression is that Kisseberth views mutability (like P) and catalysis (unlike P) as the default. This question of productivity is one that I’ll return to below as I consider how indexing might be learned.

Learning indexed constraints

Pater and Coetzee (2005, henceforth P&C) propose that indexed constraint rankings can be learned using a variant of the Biased Constraint Demotion (BCD) algorithm developed earlier by Prince and Tesar (2004). Most of the details of that algorithm are not strictly relevant here; I will focus on the ones that are. BCD supposes that learners are able to accumulate UR/SR pairs and then use the current state of their constraint hierarchy to record them in a data structure called a mark-data pair. These give, for each constraint violation, whether that violation prefers the actual SR or a non-optimal candidate. From a collection of these pairs it is possible to rank constraints via iterative demotion.2 The presence of lexical exceptionality produces a case where it is not possible for vanilla BCD to advance the demotion because a conflict exists: some morphemes favor one ranking whereas others favor another. P&C propose that in this scenario, indexed constraints will be introduced to resolve the conflict.
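As a rough sketch of this data structure (following the prose gloss above rather than Prince and Tesar’s exact formalization, and borrowing constraint names from the NoCoda/Max toy that P&C introduce below):

from dataclasses import dataclass

@dataclass
class MarkDataPair:
    """Schematic mark-data pair: for a winner (the attested SR) and a loser
    (a non-optimal candidate), which constraints prefer each."""
    winner: str
    loser: str
    winner_preferring: set[str]  # constraints violated more by the loser
    loser_preferring: set[str]   # constraints violated more by the winner

# E.g., for a faithful /net/ -> [net] against the losing candidate [ne]:
pair = MarkDataPair("net", "ne", {"Max"}, {"NoCoda"})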

P&C are less than formal in specifying how this cloning process works, so let us consider how it might function. Their example, a toy, concerns syllable shape. They suppose that they are dealing with a language in which /CVC/ is marked (via NoCoda) but there are a few words of this shape which surface faithfully (via Max). They suppose that this results in a ranking paradox which cannot be resolved with the existing constraints. As stated, I have to disagree: their toy provides no motivation for NoCoda >> Max.3 Let us suppose, though, for the sake of argument that there is some positive evidence, after all, for that ranking. Perhaps we have the following.

(6) Toy grammar (after P&C):
a. /kap/ -> [ka]
b. /gub/ -> [gu]
c. /net/ -> [net]
d. /mat/ -> [mat]

Let us also suppose that there is some positive evidence that /kap, gub/ are the correct URs so they are not changed to faithful URs via Lexicon Optimization. Then, (6ab) favor NoCoda >> Max but (6cd) favor Max >> NoCoda. P&C suppose this is resolved by cloning (i.e., generating an indexed variant of) Max, producing a variant for each faithfully-surfacing /CVC/ morpheme. If these morphemes are /net/ and /mat/, then we obtain the following partial ranking after BCD.

(7) Max(net), Max(mat) >> NoCoda >> Max

This is another instance of schema (2); there are just multiple indexed constraints in the highest stratum. Indeed, P&C imagine various mechanisms by which Max(net) and Max(mat) might be collapsed or conflated at a later stage of learning.
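To see the indexation at work, here is a deliberately simplistic sketch of evaluation under ranking (7); the string representations and helper functions are my own toy, not a serious OT implementation:

def max_violations(ur: str, sr: str) -> int:
    # One Max violation per deleted segment.
    return len(ur) - len(sr)

def nocoda_violations(sr: str) -> int:
    # Toy monosyllables: one violation for a final consonant.
    return 0 if sr[-1] in "aeiou" else 1

INDEXED = {"net", "mat"}  # morphemes bearing the high-ranked Max clones

def evaluate(ur: str, candidates: list[str]) -> str:
    # Violation vectors ordered as in (7): Max(i) >> NoCoda >> Max.
    # For a total ranking, picking the lexicographically minimal vector
    # is exactly OT evaluation.
    def profile(sr: str) -> tuple[int, int, int]:
        m = max_violations(ur, sr)
        return (m if ur in INDEXED else 0, nocoda_violations(sr), m)
    return min(candidates, key=profile)

print(evaluate("kap", ["kap", "ka"]))  # "ka": NoCoda >> Max compels deletion
print(evaluate("net", ["net", "ne"]))  # "net": Max(net) blocks deletion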

It is crucial to P&C’s proposal that the child actually observes both of the exceptional morphemes in (6cd) surfacing faithfully; however, it is not necessary to observe (6ab), just some morphemes in which, as in (6ab), a coda consonant is deleted, so as to trigger cloning. The critical sample for (7), then, is either (6acd) or (6bcd). It is not necessary to see both (6a) and (6b), but it is necessary to see both of (6cd). Thus, there is some very real sense in which this analysis treats coda deletion as the productive default and coda retention as exceptional behavior, much like how P’s analysis of Piro treated mutability and quiescence as productive. However, it seems like P&C could have instead adopted schema (1) and proposed that what is cloned is NoCoda, obtaining the following ranking.

(8) NoCoda(kap), NoCoda(gub) >> Max >> NoCoda

Then, for this analysis, the crucial sample is either (6abc) or (6abd), and there is a similar sense in which coda retention is now the default behavior.

P&C give no reason to prefer (7) over (8). Reading between the lines, I suspect they imagine that the relative frequency (i.e., the number of morpheme types) of words that either retain or lose their codas is the crucial issue, and perhaps they would appeal to an informal “majority-rules” principle. That is, if forms like (6ab) are more frequent than those like (6cd) they would probably prefer (7), and would prefer (8) if the opposite is true. However, I think P&C should have taken up this question and explained what is cloned when. Indeed, there is an alternative possibility: perhaps cloning produces all of the following constraints in addition to Max and NoCoda.

(9) Max(kap), Max(gub), Max(net), Max(mat), NoCoda(kap), NoCoda(gub), NoCoda(net), NoCoda(mat)

While I am not sure, I think BCD would be able to proceed and would either converge on (7) or (8), depending on how it resolves apparent “ties”.

Another related issue, which may also lead to the proliferation of indexed constraints, is that P&C have little to say about how constraint cloning works in complex words. Perhaps the cloning module is able to localize the violation to particular morphemes. For instance, it seems plausible that one could inspect a Max violation, like the ones produced by Piro syncope, to determine which morpheme is unfaithful and thus mutable. However, if we wish to preserve P’s treatment of mutability as the default (and that inalterable morphemes have a high-ranked Max clone), we instead need to do something more complex: we need to determine that a certain morpheme does not violate Max (good so far), but also that under a counterfactual ranking of this constraint and its “antagonist” Align-Suf-C, it would have done so; this may be something which can be read off of mark-data pairs, but I am not sure. Similarly, to preserve P’s treatment of quiescence as the default, we need to determine that a certain suffix has an Align-Suf-C violation (again, good so far), but also that under a counterfactual ranking of this constraint and its antagonist, it would not have done so.

While I am unsure whether this counterfactual reasoning can be done in general, I can think of at least one case where the localization reasoning cannot be done: epenthesis at morpheme boundaries, as in the [-əd] allomorph of the English regular past. Here there is no sense in which the Dep violation can be localized to a particular morpheme. Indeed, Dep violations are defined by the absence of correspondence. This is perhaps an unfortunate example for P&C’s approach. English has a number of “semiweak” past tense forms (e.g., from Myers 1987: bit, bled, hid, met, sped, led, read, fed, lit, slid) which are characterized by a final dental consonant and shortening of the long nucleus of the present tense form. Given related pairs like keep-kept, one might suppose that these bear a regular /-d/ suffix, but fail to trigger epenthesis (thus *[baɪtəd], etc.).4 To make this work, we assume the following.

(10) Properties of semiweak pasts:
a. Verbs with semiweak pasts are exceptionally indexed to a high-ranking Dep constraint which dominates relevant syllable structure markedness constraints.
b. Verbs with semiweak pasts are exceptionally indexed to high-ranking markedness constraint(s) triggering “Shortening” (in the sense of Myers 1987).
c. A general (i.e., non-indexed) markedness constraint against hetero-voiced obstruent clusters dominates antagonistic voice faithfulness constraints.

The issue is this: how do children localize the failure of epenthesis in (10a) to the root and not the suffix, given that the counterfactual epenthetic segment is not an exponent of either, occurring rather at the boundary between the two? Should one reject the sketchy analysis given in (10), there are surely many other cases where correspondence alone is insufficient; for example, consider vowels which coalesce in hiatus.

The yers

I have again already gone on quite long, but before I stop I should briefly discuss the famous Slavic yers as they relate to this theory.

In a very interesting paper, Gouskova (2012) presents an analysis of the yers in modern Russian. In Russian, certain instances of the vowels e and o alternate with zero in certain contexts. These alternating vowels are termed yers in traditional Slavic grammar. A tradition going back to early work by Lightner (1965) treats yers in Russian and other Slavic languages as underlyingly distinct from non-alternating e and o, either featurally or, in later work, prosodically. For example, лев [lʲev] ‘lion’ has a genitive singular (gen.sg.) льва [lʲva] and мох [mox] ‘moss’ has a gen.sg. [mxa].

Gouskova (henceforth G) wishes to argue that yer patterns are better analyzed using indexed constraints, thus treating morphemes with yer alternations as exceptional rather than treating the yer segments as underspecified. In terms of the constraint indexing technology, G’s analysis is straightforward. Alternating vowels are underlyingly present in all cases, and their deletion is triggered by a high-ranked constraint *Mid (which, naturally, disfavors mid vowels), indexed to target exactly those morphemes which contain yers. Additional phonotactic constraints relating to consonant sequences are used to prevent deletion that would produce word-final consonant clusters. Roughly, then, the analysis is:

(11) *CC]σ >> *Mid(yer morphemes) >> Max-V >> *Mid

As G writes (99-100, fn. 18): “In Russian, deletion is the exception rather than the rule: most morphemes do not have deletion, and neither do loanwords…”

It should be noted that G’s analysis departs from the traditional (“Lightnerian”) analysis in ways not directly related to the question of localizing exceptionality (i.e., in the morpheme vs. the segment). For one, (11) seems to frame retention of a mid vowel as a default. In contrast, the traditional analysis does not seem to have any opinion on the matter. In that analysis, whether or not a mid vowel is alternating is a property of its underlying form, and should thus be arbitrary in the Saussurean sense. This is not to say that we expect to find yers in arbitrary contexts. There are historical reasons why yers are mostly found in the final syllable—this is one of the few places where the historical sound change called Havlík’s Law, operating more or less blindly, could introduce synchronic yer/zero alternations in the first place (in many other contexts the yers were simply lost), and in other positions it is impossible to ascertain whether or not a mid vowel is a yer. Whether an alternative version of the sound change could have produced an alternative-universe Russian where yers target the first syllable is an unknowable counterfactual, given that we live in our universe, with our universe’s version of Havlík’s Law. Secondly, the traditional analysis (see Bailyn & Nevins 2008 for a recent exemplar) usually conditions the retention of yers on the presence of a yer (which may or may not be itself retained) in the following syllable. In contrast, G does not posit yers for this purpose, nor does she condition their retention on the presence of nearby yers. In the traditional analysis, these conditioning yers are motivated by the behavior of yers in prefixes and suffixes in derivational morphology, and much of this hinges on apparent cyclicity. G provides an appendix in which she attempts to handle several of these issues in her theory, but it remains to be seen whether she has been successful in dismissing all the concerns one might raise.

G provides a few arguments as to why the exceptional-morpheme analysis is superior to the traditional analysis. G wishes to establish that mid vowels are in fact marked in Russian, so that yer deletion can take something of a “free ride” on this constraint. As such, she claims that yer deletion is related to the reduction of mid vowels in unstressed syllables. But how do we know that these facts are connected? And, if they are in fact connected, is it possible that there is an extra-grammatical explanation? For instance, there may be a “channel bias” in production and/or perception that disfavors faithful realization of mid vowels (and thus imposes a systematic bias in favor of reduction and deletion) compared to the more extreme phonemic vowels (in her analysis, /a, i, u/), and that this bias caused the actuation of both changes. Phenomenologically speaking, it is true that there are two ways in which certain Russian mid vowels are unfaithful, but this is just one of an infinite set of true statements about Russian phonology, and there is something “just so” about this one.

Before I conclude, let us now turn briefly to Polish. Like Russian, this language has mid vowels which alternate with zero in certain contexts. (Unlike Russian, for whatever reason, the vast majority of alternating vowels are e; there are just three morphemes which have an alternating o.)

Rubach (2013, 2016) explicitly critiques constraint indexation using data from Polish. Rubach argues that G’s analysis cannot be generalized straightforwardly to Polish. He draws attention to stems that contain multiple mid vowels, only one of which is a yer (e.g., sfeter/sfetri ‘sweater’), and concludes that it is not necessarily possible to determine which (or both) should undergo deletion in an “exceptional” morpheme. The only mechanism with which one might handle this is a rather complex series of markedness constraints on consonant sequences. Unfortunately, Polish is quite permissive of complex consonant clusters, and this mechanism cannot always be relied upon to deliver the correct answer. He also draws attention to the behavior of derivational morphology such as double diminutives. In contrast, Rysling (2016) attempts to generalize G’s indexed constraint analysis of yers to Polish. However, her analysis differs from G’s analysis of Russian in that she derives the yers from epenthesis to avoid word-final consonant clusters. Furthermore, for Rysling, epenthesis in the relevant phonotactic contexts (to a first approximation, certain C_C#) is the default, and failure to epenthesize is exceptional.5 Sadly, there is little interaction between the Rubach and Rysling papers (the latter briefly discusses the former’s 2013 paper), so I am not prepared to say whether Rysling’s radical revision addresses Rubach’s concerns with constraint indexation.

Endnotes

  1. P and colleagues refer to these constraints as “lexically specific”, but in fact it seems the relevant structures are all morphemes, and never involve polymorphemic words or lexemes.
  2. As far as I know, though, there is no proof of convergence, under any circumstances, for BCD.
  3. Perhaps they are deriving this from the assumption that the initial state is M >> F, but without alternation evidence, BCD would rerank this as Max >> NoCoda and cloning would not be triggered.
  4. A subsequent rule of obstruent voice assimilation, which is needed independently, would give us [kɛpt] from /kip-d/, and so on.
  5. Rysling seems to derive this proposal from an analysis of lexical statistics: she counts how many Polish nouns have yer alternations in the context …C_C# and compares this to non-alternating …CeC# and …CC#. It isn’t clear to me how the proposal follows from the statistics, though: non-epenthesis and epenthesis in …C_C# are about equally common in Polish, and their relative frequencies are not much different from what she finds in Russian.

References

Bailyn, J. F. and Nevins, A. 2008. Russian genitive plurals are impostors. In A. Bachrach and A. Nevins (eds.), Inflectional Identity, pages 237-270. Oxford University Press.
Gouskova, M. 2012. Unexceptional segments. Natural Language & Linguistic Theory 30: 79-133.
Kenstowicz, M. 1970. Lithuanian third person future. In J. R. Sadock and A. L. Vanek (eds.), Studies Presented to Robert B. Lees by His Students, pages 95-108. Linguistic Research.
Lightner, T. 1965. Segmental phonology of Modern Standard Russian. Doctoral dissertation, Massachusetts Institute of Technology.
Matteson, E. 1965. The Piro (Arawakan) Language. University of California Press.
Myers, S. 1987. Vowel shortening in English. Natural Language & Linguistic Theory 5(4): 485-518.
Pater, J. and Coetzee, A. W. 2005. Lexically specific constraints: gradience, learnability, and perception. In Proceedings of the Korea International Conference on Phonology, pages 85-119.
Pater, J. 2006. The locus of exceptionality: morpheme-specific phonology as constraint indexation. In University of Massachusetts Occasional Papers 32: Papers in Optimality Theory, pages 1-36.
Pater, J. 2009. Morpheme-specific phonology: constraint indexation and inconsistency resolution. In S. Parker (ed.), Phonological Argumentation: Essays on Evidence and Motivation, pages 123-154. Equinox.
Prince, A. and Tesar, B. 2004. Learning phonotactic distributions. In R. Kager, J. Pater, and W. Zonneveld (eds.), Constraints in Phonological Acquisition, pages 245-291. Cambridge University Press.
Rubach, J. 2013. Exceptional segments in Polish. Natural Language & Linguistic Theory 31: 1139-1162.
Rysling, A. 2016. Polish yers revisited. Catalan Journal of Linguistics 15: 121-143.
Zonneveld, W. 1978. A Formal Theory of Exceptions in Generative Phonology. Peter de Ridder.

Learned tokenization

Conventional (i.e., non-neural, pre-BERT) NLP stacks tend to use rule-based systems for tokenizing sentences into words. One good example is spaCy, which provides rule-based tokenizers for the languages it supports. I am sort of baffled this is considered a good idea for languages other than English, since it seems to me that most languages need machine learning for even this task to properly handle phenomena like clitics. If you like the spaCy interface—I admit it’s very convenient—and work in Python, you may want to try the spacy-udpipe library, which exposes the UDPipe 1.5 models for Universal Dependencies 2.5; these in turn use learned tokenizers (and taggers, morphological analyzers, and dependency parsers, if you care) trained on high-quality Universal Dependencies data.
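For instance, a quick sketch (the language code is just an example):

import spacy_udpipe

spacy_udpipe.download("ru")  # fetches the Russian UD 2.5 model once
nlp = spacy_udpipe.load("ru")
for token in nlp("Мама мыла раму."):
    print(token.text, token.lemma_, token.pos_)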