Rich people shouldn’t drive

I don’t understand why the filthy rich ever drive. Sure, I get why Ferdinand Habsburg gets into the Eva cockpit: an F1 race is the modern-day tournament. But driving is a dangerous, high-liability, cognitively taxing activity and it’s easy for the rich to offload those hazards to a specialist. I don’t understand why, for example:

In the unlikely event that I hit centimillion status, the first thing I’m doing is buying a black, under-the-radar town car and hiring a chauffeur with good personal recommendations. And before that, when I enter decamillion territory, I’m just calling UberXen. No alternate-side parking, no DUIs for me. I don’t know about Justin, but surely Warren and Sam have something better to do than be behind the wheel. They could be power napping, meditating, watching the market, or catching up on X (“the everything app”) in the back of their car instead.

Linguistic relativity and i-language

Elif Batuman’s autofiction novel The Idiot follows Selin, a Harvard freshman in the mid-1990s. Selin initially declares a linguistics major and describes two of her classes in some detail. One is taught by a soft-spoken professor said to be passionate about Turkic phonetics (no clue who this might be: anybody?), and the other by an Italian semanticist who wears beautiful suits (maybe this is Gennaro Chierchia; not sure). Selin is taken aback by the stridency with which her professor (presumably the Turkic phonetician) rails against the Sapir-Whorf hypothesis—she is put off by the professor’s repeated mentions of Whorf’s day job as a fire prevention specialist—and finds linguistic relativity so intuitive that she changes her major at the end of the book.

Batuman is not the only person to draw a connection between rejection of the stronger forms of the Sapir-Whorf hypothesis and generativism. Here’s the thing though: there is no real connection between these two ideas! Generativism has no particular stance on any of this. The only connection I see between these two ideas is that, when you adopt the i-language view, you simply have more interesting things to study. If you truly understand, say, poverty of the stimulus arguments, you just won’t feel the need to entertain intuitive-popular views of language because you’ll recognize that the human condition vis-à-vis language is much richer and much stranger than Whorf ever imagined.

The presupposition of “recognize”

There’s an interesting pragmatics thing going on in the official statement former first lady Melania Trump put out after her husband was grazed by a sniper’s bullet. (The full statement is here if you care; it’s not very interesting overall.) However, I was drawn to a curious violation of presupposition in the document:

A monster who recognized my husband as an inhuman political machine attempted to ring out Donald’s passion – his laughter, ingenuity, love of music, and inspiration.

A few things are going on here; let me put aside the awkward non-parallelism of laughter vs. love of music vs. ingenuity and inspiration, and note that the verb she wants in the embedded clause is wring out (figuratively, to extract by means of forceful action), not ring out. But the more interesting one is the use of recognized. To say that the shooter recognized Donald Trump as an inhuman machine presupposes, at least in my idiolect, that the speaker agrees with this assessment, or perhaps more generally that it is in the common ground that Donald Trump is an inhuman machine. There is nothing in the text or subtext of the statement suggesting she shares this assessment of her husband, despite the long and tedious tradition of trying to “read resistance” into the wives of right-wing American politicians. For me, verbs like misconstrued or mistook presuppose the opposite, that the speaker and/or common ground disagrees with this assessment, and that’s what I suppose Mrs. Trump meant to say here. I don’t blame Mrs. Trump for this; English is not her first language, though she speaks it quite well. But she’s famous and rich enough that she ought to employ a PR professional or lawyer to proofread public statements, as I’m sure Mrs. Obama or Mrs. Bush do.

Medical bills

About two years ago, I got an unexpected medical bill in the mail. The amount wasn’t very high, but I was quite frustrated and annoyed. First, this was from a local College of Dentistry, where most procedures are free for the insured (and probably for the uninsured too); there was no “explanation of benefits” explaining that this was a co-pay, or that my insurance only covered some portion. Second, I hadn’t been to the College of Dentistry in quite a while, so I had no idea which of the various procedures this was for, or even what day I received the billed service. Third, there was no way to get more information: the absolute worst thing about this provider is that the administrative staff are some of the most overloaded and overworked people I have ever seen, and I have witnessed them just let the phone ring because they’re dealing with a huge line of in-person patients (some of whom are bleeding from the mouth). So I didn’t pay it. The bills kept coming, though, and I started to worry. Was I wasting paper for no reason? Would this harm my credit score? So I put about an hour into finding a way to actually get in touch with the billing office: it turns out this was a Google Form buried somewhere on a website, and if you fill it out, someone calls you (in my case, within the hour!), looks up your chart, and can tell you the date of service and why you were billed. Why didn’t they just include this in the bill in the first place? I have to imagine this makes it even harder for the College to actually collect on these debts.

Representation vs. explanation?

I have often wondered whether detailed representational formalism is somehow in conflict with genuine explanation in linguistics. I have been tangentially involved in the cottage industry that is applying the Tolerance Principle (Yang 2005, 2016) to linguistic phenomena, most notably morphological defectivity. In our paper on the subject (Gorman & Yang 2019), we are admittedly somewhat nonchalant about the representations in question, a nonchalance which is, frankly, characteristic of this microgenre.

In my opinion, however, our treatment of Polish defectivity is representationally elegant. (See here for a summary of the data.) In this language, fused case/number suffixes show suppletion based on the gender—in the masculine, animacy—of the stem, and there is lexically conditioned suppletion between -a and -u, the two allomorphs of the gen.sg. for masculine inanimate nouns. To derive defectivity, all we need to show is that Tolerance predicts that, in the masculine inanimate, there is no default suffix to realize the gen.sg. If there are two realization rules in competition, we can implement this by making both of them lexically conditioned, and leaving nouns which are defective in the gen.sg. off both lexical “lists”. We can even imagine, in theories with late insertion, that the grammatical crash is the result of uninterpretable gen.sg. features which are, in defective nouns, still present at LF.1
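
To make the logic concrete, here is a toy calculation in the spirit of the Tolerance Principle, under which a rule over $N$ items is productive only if it has at most $N / \ln N$ exceptions. The noun counts below are invented for illustration; they are not the Polish figures from our paper.

    # A toy illustration of the competition described above; the counts are
    # invented, not the actual Polish figures from Gorman & Yang (2019).
    from math import log

    def tolerates(n_total: int, n_exceptions: int) -> bool:
        """A rule over n_total items is productive iff exceptions <= n_total / ln(n_total)."""
        return n_exceptions <= n_total / log(n_total)

    masc_inan = 2000  # hypothetical count of masculine inanimate nouns
    takes_a = 900     # hypothetical nouns whose gen.sg. is -a
    takes_u = 1050    # hypothetical nouns whose gen.sg. is -u

    # If -u were the default, every -a noun (and every defective noun) would be an
    # exception, and vice versa. Here neither rule tolerates the other's items, so
    # neither suffix is a default, and a noun left off both lexical lists has no
    # productive way to realize the gen.sg.: a gap.
    print(tolerates(masc_inan, masc_inan - takes_u))  # False
    print(tolerates(masc_inan, masc_inan - takes_a))  # False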

It is useful to contrast this with our less elegant treatment of Spanish defectivity in the same paper. (See here for a summary of the data.) There we assume that there is some kind of grammatical competition for verbal stems between the rules that might be summarized as “diphthongize a stem vowel when stressed” and “do not change”. We group the two types of diphthongization (o to ue [we] and e to ie [je]) as a single change, even though it is not trivial to make these into a single change.2 This much at least has a venerable precedent, but what does it mean to treat diphthongization as a rule in the first place? The same tradition tends to treat the propensity to diphthongize as a phonological property of the stem (perhaps via underspecification or prespecification, à la Harris 1985) or a morphophonological one (a lexical diacritic à la Harris 1969, or competition between pseudo-suppletive stems à la Bermúdez-Otero 2013), and the phonological content of a stem is presumably stored in the lexicon, not generated by any sort of rule.3 Rather, our Tolerance analysis seems to imply that we have thrown in our lot with Albright and colleagues (Albright et al. 2001, Albright 2003) and Bybee & Pardo (1981), who analyze diphthongization as a purely phonological rule depending solely on the surface shape of the stem. This is despite the fact that we are bitterly critical of these authors for other reasons,4 and I would have preferred—aesthetically at least—to adopt an analysis where diphthongization is a latent property of particular stems.

At this point, I could say, perhaps, that the data—combined with our theoretical conception of the stem inventory portion of the lexicon as a non-generative system—is trying to tell me something about Spanish diphthongization, namely that Albright, Bybee, and colleagues are onto something, representationally speaking. But, compared with our analysis of Polish, it is not clear how these surface-oriented theories of diphthongization might generate grammatical crash. Abstracting from the details, Albright (2003) imagines that there are a series of competing rules for diphthongization, whose “strength” derives from the number of exemplars they cover. In his theory, the “best” rule can fail to apply if its strength is too low, but he does not propose any particular threshold and as we show in our paper, his notion of strength is poorly correlated with the actual gaps. Is it possible our analysis is onto something if Albright, Bybee, and colleagues are wrong about the representational basis for Spanish diphthongization?

Endnotes

  1. This case may still be a problem for Optimality Theory-style approaches to morphology, since Gen must produce some surface form.
  2. I don’t have the citation in front of me right now, but I believe J. Harris originally proposed that the two forms of diphthongization can be united insofar as both of them can be modeled as insertion of e triggering glide formation of the preceding mid vowel.
  3. For the same reason, I don’t understand what morpheme structure constraints are supposed to do exactly. Imagine, fancifully, that you had a mini-stroke and the lesion it caused damaged your grammar’s morpheme structure constraint #3. How would anyone know? Presumably, you don’t have any lexical entries which violate MSC #3, and adults generally do not make up new lexical entries for the heck of it.
  4. These have to do with what we perceive as the poor quality of their experimental evidence, to be fair, not their analyses.

References

Albright, A., Andrade, A., and Hayes, B. 2001. Segmental environments of Spanish diphthongization. UCLA Working Papers in Linguistics 7: 117-151.
Albright, A. 2003. A quantitative study of Spanish paradigm gaps. In Proceedings of the 22nd West Coast Conference on Formal Linguistics, pages 1-14.
Bermúdez-Otero, R. 2013. The Spanish lexicon stores stems with theme vowels, not roots with inflectional class features. Probus 25: 3-103.
Bybee, J. L. and Pardo, E. 1981. On lexical and morphological conditioning of alternations: a nonce-probe experiment with Spanish verbs. Linguistics 19: 937-968.
Gorman, K. and Yang, C. 2019. When nobody wins. In F. Rainer, F. Gardani, H. C. Luschützky and W. U. Dressler (eds.), Competition in Inflection and Word Formation, pages 169-193. Springer.
Harris, J. W. 1969. Spanish Phonology. MIT Press.
Harris, J. W. 1985. Spanish diphthongisation and stress: a paradox resolved. Phonology 2: 31-45.

Automatic batch sizing

Yoyodyne is my lab’s sequence-to-sequence library, intended to be a replacement for Fairseq, which is (essentially) abandonware. One matter of urgency for me in building Yoyodyne was to enable automatic hyperparameter tuning. This was accomplished by logging results to Weights & Biases (W&B). We can perform a random or Bayesian hyperparameter sweep using a “grid” specified via a YAML file, monitor progress on the W&B website, or even hit the API to grab the best hyperparameters. One issue that kept coming up, however, is that it is easy to hit out-of-memory (OOM) errors during this process. Here’s what we did about it:

OOMs are not purely due to model size: the model, batch, and gradients all need to fit into the same VRAM. PyTorch Lightning, which is a key part of the Yoyodyne backend, provides a function for automatically determining the maximum batch size that will not trigger an OOM. Basically, it works by starting with a low batch size (by default, 2), randomly drawing three batches of that size, and then attempting training (but in fact caching parameters so that no real training occurs). If this does not trigger an OOM, it doubles the batch size, and so on.1,2 You can enable this approach in Yoyodyne using the flag --find_batch_size max. You’d want to use this if you believe that a giant batch size is fine and you just want to fully saturate your GPU.
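
Just to make the control flow concrete, here is a minimal sketch of that doubling loop. It is an illustration of the idea, not Lightning’s (or Yoyodyne’s) actual implementation, and the try_batches callable, which would draw a few random batches of a given size and run forward and backward passes over them, is a stand-in.

    # Sketch of the batch size doubling search described above; an illustration,
    # not the actual Lightning or Yoyodyne code.
    import torch

    def find_max_batch_size(try_batches, start: int = 2, max_trials: int = 25) -> int:
        """Doubles the batch size until an OOM occurs; returns the last size that fit."""
        size, last_good = start, start
        for _ in range(max_trials):
            try:
                try_batches(size)  # e.g., three random batches of this size
            except torch.cuda.OutOfMemoryError:
                torch.cuda.empty_cache()
                break              # OOM: the previous size was the maximum
            last_good = size
            size *= 2              # no OOM: double and try again
        return last_good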

A slightly more sophisticated version of this, useful when you actually want to tune batch size, is enabled with the flag --find_batch_size opt. This also begins by doubling the size of randomly drawn batches, but here it halts once the doubling exceeds the value of the --batch_size flag. If the max batch size is at least as large as the requested size, the requested size is used as is; thus this acts as a soft check against OOMs. If, however, the max batch size is smaller than --batch_size, it instead solves for a new batch size: the largest batch size which is no larger than the max and which is a divisor of --batch_size. It then enables multiple rounds of gradient accumulation per update,3 thus losslessly simulating the desired batch size while using as much VRAM as possible. I can assure you this is a killer feature for neural network tuning.
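
The arithmetic behind this can be sketched as follows; this is a simplified illustration, not the actual Yoyodyne implementation, and the function name is mine.

    # Sketch of the batch size / gradient accumulation arithmetic described above;
    # a simplified illustration, not the actual Yoyodyne code.
    def solve_batch_size(requested: int, max_size: int) -> tuple[int, int]:
        """Returns (batch_size, accumulation_steps) with batch_size * steps == requested."""
        if max_size < 1:
            raise ValueError("max batch size must be positive")
        if max_size >= requested:
            return requested, 1  # the requested size already fits; no accumulation
        # Brute force: the largest divisor of `requested` that is <= `max_size`.
        for n in range(max_size, 0, -1):
            if requested % n == 0:
                return n, requested // n

    # E.g., if 1024 was requested but only 384 fits, use four accumulated steps of 256.
    print(solve_batch_size(1024, 384))  # (256, 4)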

Endnotes

  1. This is a little imprecise, and one can refine it by doing a binary search, but in practice it’s not worth the effort when working with ragged data.
  2. Whatever batch size was requested with the --batch_size flag is ignored.
  3. More formally, given desired batch size $b$ and a max batch size $n'$, it finds $a, n$ such that $an = b$ and $n \le n'$, where $a$ is the smallest such integer (equivalently, $n$ is the largest). This is computed via brute force; my implementation of an elegant solution based on the prime factorization was a bit slower.