The rest of the details of vowel- and consonant-harmony we shall discuss later; but there is one unavoidable theoretical issue to be settled in connection with this contrast between borrowed and native lexicon. The solution we have proposed for Turkish vowels, namely that they be written “archiphonemically” in all contexts where gravity is predictable by the usual progressive assimilation rules of harmony but be split into grave/acute pairs of “phonemes” elsewhere, will have the following important theoretical consequence. The phonetic rule which ensures the insertion of a “plus” or a “minus” sense for the gravity feature under harmonic assimilation must distinguish between occurrences of columns of features in which gravity is unspecified, as in the case of the “morphophoneme” /E/, from occurrences of the otherwise identical columns of features in the utterance being generated in which gravity has already been specified in a lexical rule, as in the corresponding cases of the “phonemes” /e/ and /a/, for now both the morphophoneme /E/ and the phonemes /e/ and /a/ occur simultaneously in the transcriptions. The gravity rule is intended to apply to /E/, not to /a/ or /e/. But this is tantamount to entering, as it were, a “zero” into the feature table of /E/ to distinguish it from the columns for /a/ and /e/, in which the feature of gravity has already been determined, or perhaps from other columns in which there is simply no relevant indication of this feature. Thus the system of phonetic decisions will have been rendered trinary rather than the customary binary. The objection to this result is not based on a predilection for binary features, though there are good reasons to prefer a binary system.
Rather, it arises because in a system of phonological decisions in which rules may distinguish between columns of binary features differing solely in the presence or absence of a zero for some feature one may also ipso facto always introduce vacuous reductions or simplifications without any empirical knowledge of the phonetic facts. As a brief illustration of such an empty simplification, we might note that if a rule be permitted in English phonology which distinguishes between the features of /p/ and /b/ on one hand and on the other, the set of these same features with the exception that voice is unspecified (a set which we shall designate by means of the “archiphonemic” symbol /B/), then we could easily eliminate from English phonology, without knowing anything about English pronunciation, the otherwise relevant feature of Voice from all occurrences of either /p/ or /b/, or in fact from all occurrences of any voiced stop. Clearly, if a rule could distinguish /B/ from /p/ by the presence of zero in the voice-feature position, then that feature can be restored to occurrences of /b/ automatically and is thus rendered redundant. The same could then be done for inumerable [sic] other features with no empirical justification required. Thus, we must assume that any rule which applies to a column of features like /B/ also at the same time applies to every other type of column which contains that same combination of features, such as /p/ and /b/. This is tantamount to imposing the constraint on phonological features that they never be required to identify unspecified, or zero, features. To the best of our present knowledge, there seems to be no other reasonable way to prevent the awkward consequences mentioned above. To return to Turkish, this decision means that the grammar is incapable of distinguishing native vowel-harmonic morphemes from borrowed non-vowel-harmonic morphemes simply by the presence of the archiphoneme /E/ in the former versus /e/ or /a/ in the latter.
(Lees 1961: 12-14)
Author: Kyle Gorman
Stop capitalizing so much
One of the absolute scourges of student writing is the tendency to capitalize just about every multi-word noun phrase. The rule in English is pretty simple: you only capitalize proper names, and these are, roughly, the names of people, locations, or organizations. Technical concepts do not qualify. It doesn’t matter if it’s part of an acronym: we capitalize the acronym but not necessarily the full phrase. Natural language processing is not a proper name; cognitive science isn’t either; logistic regression certainly is not a proper name, nor are conditional random fields or hidden Markov models or support vector machines or…
SPE & Lakoff on exceptionality
Recently I have attempted to review and synthesize different theories of what we might call lexical (or morpholexical or morpheme-specific) exceptionality. I am deliberately ignoring accounts that take this to be a property of segments via underspecification (or in a few cases, pre-specification, usually of prosodic-metrical elements like timing slots or moras), since I have my own take on that sort of thing under review now. Some takeaways from my reading thus far:
- This is an understudied and undertheorized topic.
- At the same time, it seems at least possible that some of these theories are basically equivalent.
- Exceptionality and its theories play only a minor role in adjudicating between competing theories of phonological or morphological representation, despite their obvious relevance.
- Also despite their obvious relevance, theories of exceptionality make little contact with theories of productivity and defectivity.
Since most of the readings are quite old, I will include PDF links when I have a digital copy available.
Today, I’m going to start off with Chomsky & Halle’s (1968) Sound Pattern of English (SPE), which has two passages dealing with exceptionality: §4.2.2 and §8.7. While I attempt to summarize these two passages as if they are one, they are not fully consistent with one another and I suspect they may have been written at different times or by different authors. Furthermore, it seemed natural for me to address, in this same post, some minor revisions proposed by Lakoff (1970: ch. 2). Lakoff’s book is largely about syntactic exceptionality, but the second chapter, in just six pages, provides important revisions to the SPE system. I myself have also taken some liberties filling in missing details.
Chomsky & Halle give a few examples of what they have in mind when they mention exceptionality. There is in English a rule which laxes vowels before consonant clusters, as in convene/conven+tion or (more fancifully) wide/wid+th. However, this generally does not occur when the consonant cluster is split by a “#” boundary, as in restrain#t.1 The second, and more famous, example involves the trisyllabic shortening of the sort triggered by the -ity suffix. Here laxing also occurs (e.g., serene–seren+ity, obscene–obscen+ity) though not in the backformation obese–obes+ity.2 As Lakoff (loc. cit.: 13) writes of this example, “[n]o other fact about obese is correlated to the fact that it does not undergo this rule. It is simply an isolated fact.” Note that both of these examples involve underapplication, and the latter passage gives more obesity-like examples from Lightner’s phonology of Russian, where one rule applies only to “Russian” roots and another only to “Church Slavonic” roots.
SPE supposes that, by default, there is a feature associated with each rule. So, for instance, if there is a rule R, there exists a feature [±R] as well. A later passage likens these to features for syntactic category (e.g., [+Noun]), intrinsic morpho-semantic properties like animacy, declension or conjugation class features, and the lexical strata features introduced by Lees or Lightner in their grammars of Turkish and Russian. SPE imagines that URs may bear values for [R]. The conventions are then:
(1) Convention 1: If a UR is not specified [-R], introduce [+R] via redundancy rule.
(2) Convention 2: If a UR is [αR], propagate feature specification [αR] to each of its segments via redundancy rule.
(3) Convention 3: A rule R does not apply to segments which are [-R].
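To make the conventions concrete, here is a toy sketch in Python. The encoding of a morpheme as a dictionary of rule features plus a list of segment dictionaries is my own illustrative choice, not anything proposed in SPE.

```python
# A toy sketch of the three SPE rule-feature conventions. The data
# structures here are illustrative, not SPE's own formalism.

def convention_1(morpheme, rule):
    """Convention 1: if the UR is not specified for [R], introduce [+R]."""
    if rule not in morpheme:
        morpheme[rule] = "+"
    return morpheme

def convention_2(morpheme, rule):
    """Convention 2: propagate the morpheme-level value of [R] to segments."""
    for segment in morpheme["segments"]:
        segment[rule] = morpheme[rule]
    return morpheme

def convention_3(segment, rule):
    """Convention 3: rule R applies only to segments not marked [-R]."""
    return segment.get(rule) != "-"

R = "Trisyllabic Shortening"
obese = {R: "-", "segments": [{}, {}]}   # lexically marked [-R]
serene = {"segments": [{}, {}]}          # unmarked, so [+R] by default
for morpheme in (obese, serene):
    convention_2(convention_1(morpheme, R), R)
```

On this toy encoding, the segments of obese end up ineligible for the shortening rule, while those of serene remain eligible.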
Returning to our two examples above, SPE proposes that obese is underlyingly [−Trisyllabic Shortening], which accounts for the lack of shortening in obesity. They also propose rules which insert these minus-rule features in the course of the derivation; for instance, it seems they imagine that the absence of laxing in restraint is the result of a rule like V → {−Laxing} / _ C#C, with a phonetic-morphological context.
Subsequent work in the theory of exceptionality has mostly considered cases like obesity, in which the rule features are present underlyingly; with one exception, discussed below, the restraint-type analysis, in which rule features are introduced during the derivation, does not seem to have been further studied. It seems to me that the possibility of introducing minus-rule features in a certain phonetic context could be used to derive a rule that applies to unnatural classes. For example, imagine an English rule (call it Tensing) which tenses a vowel in the context of anterior nasals {m, n} and the voiceless fricatives {f, θ, s, ʃ} but not voiced fricatives like {v, ð}.3 Under any conventional feature system, there is no natural class which includes {m, n, f, θ, s, ʃ} but not also {ŋ, v}, etc. However, one could derive the desired disjunctive effect by introducing a −Tensing specification when the vowel is followed by a dorsal, or by a voiced fricative. This might look something like this:
(4) No Tensing 1: [+Vocalic] → {−Tensing} / _ [+Dorsal]
(5) No Tensing 2: [+Vocalic] → {−Tensing} / _ [-Voice, +Obstruent, +Continuant]
This could continue for a while. For instance, I implied that Tensing does not apply before a stop so we could insert a -Tensing specification when the following segment is [+Obstruent, -Continuant], or we could do something similar with a following oral sonorant, and so on. Then, the actual Tensing rule would need little (or even no) phonetic conditioning.
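The set-difference effect can be illustrated with toy segment sets. The inventory and class memberships below are simplified stand-ins, not a serious feature analysis of English.

```python
# Toy illustration of carving out an unnatural class by set difference.

consonants = {"m", "n", "ŋ", "f", "θ", "s", "ʃ", "v", "ð",
              "p", "b", "t", "d", "k", "ɡ", "l", "r"}
dorsals = {"ŋ", "k", "ɡ"}                 # removed by No Tensing 1
voiced_fricatives = {"v", "ð"}            # removed by No Tensing 2
stops = {"p", "b", "t", "d", "k", "ɡ"}    # a further hypothetical No Tensing rule
oral_sonorants = {"l", "r"}               # and another

# Each No Tensing rule subtracts a natural class from the set of
# triggers; the residue need not itself be a natural class.
triggers = consonants - dorsals - voiced_fricatives - stops - oral_sonorants
```

The residue here is exactly {m, n, f, θ, s, ʃ}, a set no conjunction of conventional features picks out.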
To put it another way, these rules allow Tensing to apply to a set of segments which cannot be formed conjunctively from features, but can be formed via set difference.4 Is this undesirable? Is it logically distinct from the desirable “occluding” effect of bleeding in regular plural and past tense allomorphy in English (see Volenec & Reiss 2020: 28f.)? I don’t know. The latter SPE passage seems to suggest this is undesirable: “…we have not found any convincing example to demonstrate the need for such rules [like my (4-5)–KBG]. Therefore we propose, tentatively, that rules such as [(4-5)], with the great increase in descriptive power that they provide, not be permitted in the phonology.” (loc. cit.: 375). They propose instead that only readjustment rules should be permitted to introduce rule features; otherwise rule feature specifications must be underlyingly present or introduced via redundancy rule.
As far as I can see, SPE does not give any detailed examples in which rule feature specifications are introduced via rule. Lakoff, however, does argue for this device. There are rules which seem to apply to only a subset of possible contexts; one example given is the umlaut-type plurals in English like foot–feet or goose–geese. Later in the book (loc. cit., 126, fn. 59) the rules which generate such pairs are referred to as minor rules. Let us call the English umlauting rule simply Umlaut. Lakoff notes that if one simply applies the above conventions naïvely, it will be necessary to mark a huge number of nouns—at the very least, all nouns which have a [+Back] monophthong in the final syllable and which form a non-umlauting plural—as [−Umlaut]. This, as Lakoff notes, would wreak havoc on the feature-counting evaluation metric (see §8.1), and would treat what we intuitively recognize as exceptionality (forming an umlauting plural in English) as “more valued” than non-exceptionality. Even if one does not necessarily subscribe to the SPE evaluation metric, one may still feel that this has failed to truly encode the productivity distinction between minor rules and major rules that have exceptions. To address this, Lakoff proposes that there is another rule which introduces [−Umlaut], and that this rule (call it No Umlaut) applies immediately before Umlaut. Morphemes which actually undergo Umlaut are underlyingly [−No Umlaut]. Thus the UR of a noun with an umlauting plural, like foot, will be specified [−No Umlaut], and this will not undergo a rule like the following:
(6) No Umlaut: [ ] → {–Umlaut}
However, a noun with a regular plural, like juice, will undergo this rule, and thus Umlaut will not apply to it because it was marked [−Umlaut] by (6).
One critique is in order here. It is not clear to me why SPE introduces (what I have called) Convention 2; Lakoff simply ignores it and proposes an alternative version of Convention 3 in which target morphemes, rather than segments, must be [+R] to undergo rule R. Of his proposal, he writes: “This system makes the claim that exceptions to phonological rules are morphemic in nature, rather than segmental.” (loc. cit., 18) This claim, while not necessarily its 1970-era implementation, is very much in vogue today. There are some reasons to think that Convention 2 introduces unnecessary complexities, which I’ll discuss in a subsequent post. One example (SPE: 374) makes it clear that for Chomsky & Halle, Convention 3 requires that, for rule R, the target be [+R], but later on they briefly consider what, if anything, happens if any segments in the environment (i.e., the structural description) are [−R].5 They claim (loc. cit., 375) there are problems with allowing [−R] specifications in the environment to block application of R, but give no examples. To me, this seems like an issue created by Convention 2, when one could simply reject it and keep the rule features at the morpheme level.
I have since discovered that McCawley (1974: 63) gives more or less the same critique of this convention in his review of SPE.
A correction: after rereading Zonneveld, I think Lakoff misrepresents the SPE theory slightly, and I repeated his misrepresentation. Lakoff writes that the SPE theory could have phonological rules that introduce minus-rule features. In fact C&H say (374-375) that they have found no compelling examples of such rules and that they “propose, tentatively” that such rules “not be permitted in the phonology”; any such rules must be readjustment rules, which are assumed to precede all phonological rules. This means that (4-5) are probably ruled out. Lakoff’s mistake may reflect the fact that the 1970 book is a lightly-adapted version of his 1965 dissertation, for which he drew on a pre-publication version of SPE.
[This post, then, is the first in a series on theories of lexical exceptionality.]
Endnotes
- The modern linguist would probably not regard words like restraint as subject to this rule at all. Rather, they would probably assign #t to the “word” stratum (equivalent to the earlier “Level 2”) and place the shortening rule in the “stem” stratum (roughly equivalent to “Level 1”). Arguably, C&H have stated this rule more broadly than strictly necessary to make the point.
- It is said that the exceptionality of this pair reflects its etymology: obese was backformed from the earlier obesity. I don’t really see how this explains anything synchronically, though.
- This is roughly the context in which Philadelphia short-a is tense, though the following consonant must be tautosyllabic and tautomorphemic with the vowel. Philadelphia short-a is, however, not a great example since it’s not at all clear to me that short-a tensing is a synchronic process.
- Formally, the set in question is something like [−Dorsal] ∖ [+Voice, +Consonantal, +Continuant, −Nasal].
- This issue is taken up in more detail by Kisseberth (1970); I’ll review his proposal in a subsequent post.
References
Chomsky, N. and Halle, M. 1968. The Sound Pattern of English. Harper & Row.
Kisseberth, C. W. 1970. The treatment of exceptions. Papers in Linguistics 2: 44-58.
Lakoff, G. 1970. Irregularity in Syntax. Holt, Rinehart and Winston.
McCawley, J. D. 1974. Review of Chomsky & Halle (1968), The Sound Pattern of English. International Journal of American Linguistics 40: 50-88.
Rich people shouldn’t drive
I don’t understand why the filthy rich ever drive. Sure, I get why Ferdinand Habsburg gets into the Eva cockpit: an F1 race is the modern-day tournament. But driving is a dangerous, high-liability, cognitively taxing activity and it’s easy for the rich to offload those hazards to a specialist. I don’t understand why, for example:
- Warren Buffett (alleged net worth $138B) supposedly drives his own older Cadillac to the office (though maybe not anymore, given that he’s now 93).
- “Little” Sam Altman (alleged net worth $1B) drives a low-to-the-ground sports car through stop-and-go traffic in downtown Palo Alto (it’s giving Dukakis).
- “Bumpin’ dat” Justin Timberlake (alleged net worth $250M) got busted for a DUI in Long Island.
- Alec Baldwin (alleged net worth $70m) settled out of court with a guy he allegedly punched in the jaw over a parking spot.
In the unlikely event that I hit centimillion status, the first thing I’m doing is buying a black, under-the-radar town car and hiring a chauffeur with good personal recommendations. And before that, when I enter decamillion territory, I’m just calling UberXen. No alternate-side parking, no DUIs for me. I don’t know about Justin, but surely Warren and Sam have something better to do than be behind the wheel. They could be power napping, meditating, watching the market, or catching up on X (“the everything app”) in the back of their car instead.
Linguistic relativity and i-language
Elif Batuman’s autofiction novel The Idiot follows Selin, a Harvard freshman in the mid 1990s. Selin initially declares her major in linguistics and describes two classes in more detail. One is with a soft-spoken professor who is said to be passionate about Turkic phonetics (no clue who this might be: anybody?) and the other is described as an Italian semanticist who wears beautiful suits (maybe this is Gennaro Chierchia; not sure). Selin is taken aback by the stridency with which her professor (presumably the Turkic phonetician) rails against the Sapir-Whorf hypothesis—she regrets how the professor repeatedly mentions Whorf’s day job as a fire prevention specialist—and finds linguistic relativity so intuitive she changes her major at the end of the book.
Batuman is not the only person to draw a connection between rejection of the stronger forms of the Sapir-Whorf hypothesis and generativism. Here’s the thing though: there is no real connection between these two ideas! Generativism has no particular stance on any of this. The only connection I see between these two ideas is that, when you adopt the i-language view, you simply have more interesting things to study. If you truly understand, say, poverty of the stimulus arguments, you just won’t feel the need to entertain intuitive-popular views of language because you’ll recognize that the human condition vis-à-vis language is much richer and much stranger than Whorf ever imagined.
The presupposition of “recognize”
There’s an interesting pragmatics thing going on in the official statement ex-first lady Melania Trump put out after her husband was grazed by a sniper’s bullet. (The full statement is here if you care; it’s not very interesting overall.) However I was drawn to an interesting violation of presupposition in the document:
A monster who recognized my husband as an inhuman political machine attempted to ring out Donald’s passion – his laughter, ingenuity, love of music, and inspiration.
A few things are going on here; let me put aside the awkward non-parallelism of laughter vs. love of music vs. ingenuity and inspiration, and note that the verb she wants in the embedded clause is wring out (figuratively, to extract by means of forceful action), not ring out. But the more interesting one is the use of recognized. To say that the shooter recognized Donald Trump as an inhuman machine presupposes that the speaker agrees with this assessment, or perhaps more generally that it is in the common ground that Donald Trump is an inhuman machine, at least in my idiolect. There is nothing in the text or subtext of the statement suggesting she views her husband as an inhuman political machine, despite the long and tedious tradition of trying to “read resistance” into the wives of right-wing American politicians. For me, verbs like misconstrued or mistook presuppose the opposite, that the speaker and/or common ground disagrees with this assessment, and that’s what I suppose Mrs. Trump meant to say here. I don’t blame Mrs. Trump for this; English is not her first language, though she speaks it quite well. But she’s famous and rich enough that she ought to employ a PR professional or lawyer to proofread public statements like I’m sure Mrs. Obama or Mrs. Bush do.
Medical bills
Starting about two years ago, I got an unexpected medical bill in the mail. The amount wasn’t very high, but I was quite frustrated and annoyed. First, this was from a local College of Dentistry, where most procedures are free for the insured (and probably for the uninsured too); there was no “explanation of benefits” that explained this was a co-pay, or that my insurance only covered some portion. Second, I hadn’t been to the College of Dentistry in quite a while, so I had no idea which of the various procedures this was or even what day I received the billed service. Third, there was no way to get more information: the absolute worst thing about this provider is that the administrative staff are some of the most overloaded and overworked people I have ever seen, and I have witnessed them just let the phone ring because they’re dealing with a huge line of in-person patients (some of whom are bleeding from their mouth). So I didn’t pay it. After a while, though, the bills continued and I started to worry. Was I wasting paper for no reason? Would this harm my credit score? So I put about an hour into finding a way to actually get in touch with the billing office: turns out this was a Google Form buried somewhere on a website, and if you fill it out, someone calls you (in my case, within the hour!), looks up your chart, and can tell you the date of service and why you were billed. Why didn’t they just include this in the bill in the first place? I have to imagine this makes it even harder for the College to actually collect on these debts.
Representation vs. explanation?
I have often wondered whether detailed representational formalism is somehow in conflict with genuine explanation in linguistics. I have been tangentially involved in the cottage industry that is applying the Tolerance Principle (Yang 2005, 2016) to linguistic phenomena, most notably morphological defectivity. In our paper on the subject (Gorman & Yang 2019), we are admittedly somewhat nonchalant about the representations in question, a nonchalance which is, frankly, characteristic of this microgenre.
In my opinion, however, our treatment of Polish defectivity is representationally elegant. (See here for a summary of the data.) In this language, fused case/number suffixes show suppletion based on the gender—in the masculine, animacy—of the stem, and there is lexically conditioned suppletion between -a and -u, the two allomorphs of the gen.sg. for masculine inanimate nouns. To derive defectivity, all we need to show is that Tolerance predicts that, in the masculine inanimate, there is no default suffix to realize the gen.sg. If there are two realization rules in competition, we can implement this by making both of them lexically conditioned, and leaving nouns which are defective in the gen.sg. off both lexical “lists”. We can even imagine, in theories with late insertion, that the grammatical crash is the result of uninterpretable gen.sg. features which are, in defective nouns, still present at LF.1
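For readers unfamiliar with it, the Tolerance Principle says that a rule generalizing over N relevant items is productive only if its exceptions number at most N / ln N (Yang 2016). A quick sketch of the arithmetic; the item counts below are hypothetical, purely for illustration:

```python
# A sketch of the Tolerance Principle calculation (Yang 2016).
import math

def tolerance_threshold(n: int) -> float:
    """Maximum number of exceptions a rule over n items can tolerate."""
    return n / math.log(n)

def is_productive(n_items: int, n_exceptions: int) -> bool:
    """A rule is productive iff its exceptions fall at or below threshold."""
    return n_exceptions <= tolerance_threshold(n_items)

# Hypothetically: over 100 masculine inanimate nouns, a candidate
# default gen.sg. suffix tolerates at most 100 / ln(100) ≈ 21.7
# exceptions; one with 30 exceptions fails, so no default emerges.
```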
It is useful to contrast this with our less elegant treatment of Spanish defectivity in the same paper. (See here for a summary of the data.) There we assume that there is some kind of grammatical competition for verbal stems between the rules that might be summarized as “diphthongize a stem vowel when stressed” and “do not change”. We group the two types of diphthongization (o to ue [we] and e to ie [je]) as a single change, even though it is not trivial to make these into a single change.2 This much at least has a venerable precedent, but what does it mean to treat diphthongization as a rule in the first place? The same tradition tends to treat the propensity to diphthongize as a phonological (i.e., perhaps via underspecification or prespecification, à la Harris 1985) or morphophonological property of the stem (a lexical diacritic à la Harris 1969, or competition between pseudo-suppletive stems à la Bermúdez-Otero 2013), and the phonological content of a stem is presumably stored in the lexicon, and not generated by any sort of rule.3 Rather, our Tolerance analysis seems to imply we have thrown in our lot with Albright and colleagues (Albright et al. 2001, Albright 2003) and Bybee & Pardo (1981), who analyze diphthongization as a purely phonological rule depending solely on the surface shape of the stem. This is despite the fact that we are bitterly critical of these authors for other reasons4 and I would have preferred—aesthetically at least—to adopt an analysis where diphthongization is a latent property of particular stems.
At this point, I could say, perhaps, that the data—combined with our theoretical conception of the stem inventory portion of the lexicon as a non-generative system—is trying to tell me something about Spanish diphthongization, namely that Albright, Bybee, and colleagues are onto something, representationally speaking. But, compared with our analysis of Polish, it is not clear how these surface-oriented theories of diphthongization might generate grammatical crash. Abstracting from the details, Albright (2003) imagines that there are a series of competing rules for diphthongization, whose “strength” derives from the number of exemplars they cover. In his theory, the “best” rule can fail to apply if its strength is too low, but he does not propose any particular threshold and as we show in our paper, his notion of strength is poorly correlated with the actual gaps. Is it possible our analysis is onto something if Albright, Bybee, and colleagues are wrong about the representational basis for Spanish diphthongization?
Endnotes
- This case may still be a problem for Optimality Theory-style approaches to morphology, since Gen must produce some surface form.
- I don’t have the citation in front of me right now, but I believe J. Harris originally proposed that the two forms of diphthongization can be united insofar as both of them can be modeled as insertion of e triggering glide formation of the preceding mid vowel.
- For the same reason, I don’t understand what morpheme structure constraints are supposed to do exactly. Imagine, fancifully, that you had a mini-stroke and the lesion it caused damaged your grammar’s morpheme structure rule #3. How would anyone know? Presumably, you don’t have any lexical entries which violate MSC #3, and adults generally do not make up new lexical entries for the heck of it.
- These have to do with what we perceive as the poor quality of their experimental evidence, to be fair, not their analyses.
References
Albright, A., Andrade, A., and Hayes, B. 2001. Segmental environments of Spanish diphthongization. UCLA Working Papers in Linguistics 7: 117-151.
Albright, A. 2003. A quantitative study of Spanish paradigm gaps. In Proceedings of the 22nd West Coast Conference on Formal Linguistics, pages 1-14.
Bermúdez-Otero, R. 2013. The Spanish lexicon stores stems with theme vowels, not roots with inflectional class features. Probus 25: 3-103.
Bybee, J. L. and Pardo, E. 1981. On lexical and morphological conditioning of alternations: a nonce-probe experiment with Spanish verbs. Linguistics 19: 937-968.
Gorman, K. and Yang, C. 2019. When nobody wins. In F. Rainer, F. Gardani, H. C. Luschützky and W. U. Dressler (eds.), Competition in Inflection and Word Formation, pages 169-193. Springer.
Harris, J. W. 1969. Spanish Phonology. MIT Press.
Harris, J. W. 1985. Spanish diphthongisation and stress: a paradox resolved. Phonology 2: 31-45.
Automatic batch sizing
Yoyodyne is my lab’s sequence-to-sequence library, intended to be a replacement for Fairseq, which is (essentially) abandonware. One matter of urgency for me in building Yoyodyne was to enable automatic hyperparameter tuning. This was accomplished by logging results to Weights & Biases (W&B). We can perform a random or Bayesian hyperparameter sweep using a “grid” specified via a YAML file, monitor progress on the W&B website, or even hit the API to grab the best hyperparameters. One issue that kept coming up, however, is that it is easy to hit out-of-memory (OOM) errors during this process. Here’s what we did about it:
OOMs are not purely due to model size: the model, batch, and gradients all need to fit into the same VRAM. PyTorch Lightning, which is a key part of the Yoyodyne backend, provides a function for automatically determining the maximum batch size that will not trigger an OOM. Basically, it works by starting with a low batch size (by default, 2), randomly drawing three batches of that size, and then attempting training (but in fact caching parameters so that no real training occurs). If this does not trigger an OOM, it doubles the batch size, and so on.1,2 You can enable this approach in Yoyodyne using the flag `--find_batch_size max`. You’d want to use this if you believe that a giant batch size is fine and you just want to fully saturate your GPU.
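The doubling search can be sketched abstractly. This is a toy sketch, not Lightning’s actual implementation; `fits` stands in for the real probe (draw a few batches, attempt training, catch the OOM).

```python
# A toy sketch of the batch-size doubling search.

def find_max_batch_size(fits, start: int = 2, limit: int = 1 << 20) -> int:
    """Double the batch size until the probe fails (or a cap is hit),
    returning the last size that fit."""
    assert fits(start), "even the starting batch size does not fit"
    size = start
    while size * 2 <= limit and fits(size * 2):
        size *= 2
    return size
```

For instance, if anything up to 700 samples fits in VRAM, the search settles on 512, the largest power of two that fits.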
A slightly more sophisticated version of this, useful when you actually want to tune batch size, is enabled with the flag `--find_batch_size opt`. This again begins by doubling the size of randomly drawn batches, but here it halts once the doubling exceeds the value of the `--batch_size` flag. If the max batch size is larger than the requested size, the requested size is used as is; thus this acts as a soft check against OOMs. If, however, the max batch size is smaller than `--batch_size`, it instead solves for a new batch size: the largest batch size which is smaller than the max and which is a divisor of `--batch_size`. It then enables multiple rounds of gradient accumulation per update,3 thus losslessly simulating the desired batch size while using as much of the VRAM as possible. I can assure you this is a killer feature for neural network tuning.
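The batch-size solving step might look like the following brute-force sketch; the function name and signature are mine, not Yoyodyne’s actual code.

```python
# A brute-force sketch of solving for an accumulation-friendly batch size.

def solve_batch_size(requested: int, max_fitting: int) -> tuple[int, int]:
    """Return (accumulation_steps, batch_size), where batch_size is the
    largest divisor of `requested` not exceeding `max_fitting`;
    accumulating gradients over accumulation_steps batches then
    losslessly simulates the requested batch size."""
    if max_fitting >= requested:
        return 1, requested
    for n in range(max_fitting, 0, -1):
        if requested % n == 0:
            return requested // n, n
    raise AssertionError("unreachable: 1 divides every requested size")
```

E.g., if a batch size of 96 was requested but only 40 fits, this accumulates over 3 batches of 32 per update.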
Endnotes
- This is a little imprecise, and one can refine it by doing a binary search, but in practice it’s not worth the effort when working with ragged data.
- Whatever batch size was requested with the `--batch_size` flag is ignored.
- More formally, given desired batch size $b$ and max fitting batch size $n’$, it finds $a, n$ such that $an = b$ and $n \leq n’$, with $n$ as large (equivalently, $a$ as small) as possible. This is computed via brute force; my implementation of an elegant solution based on the prime factorization was a bit slower.
An interesting semantic change: “raw dogging”
The term raw-dogging is a slightly-obscene, slangy term for engaging in unprotected sex, often used to celebrate that occasionally-risky behavior. However, this term has undergone an interesting semantic change in the last five or so years. I think the actuator of this chain of events is prolific Twitter user @jaboukie:
This is a straightforward, jocular, semantic extension, generalizing the sense of danger associated with unprotected sex to life itself. In its wake (it was a very popular tweet), I also saw a tweet about “raw dogging” to refer to riding the subway without headphones or sunglasses. Years later, I read a blind item about a US senator flying commercially from the States to Israel; apparently, according to his seat mate, during the long flight, he didn’t listen to music or podcasts, read, check email, nap, or watch a movie; he just…sat there, for hours and hours, like an absolute maniac. I haven’t been able to find this story, and I don’t remember whether it referred to raw-dogging, but I have since seen several stories discussing raw-dogging flights (e.g., this recent one in GQ). Discussions of raw-dogging in the commercial aviation sense largely recognize the act’s covert prestige: it is recognized as a curious and difficult task, one associated with machismo and/or maleness. The GQ article also quotes individuals who refer to stimulation-free commercial flying as barebacking, which traditionally refers to unprotected anal sex between men. (In contrast, raw-dogging in its original sense does not specify the specific sex act beyond some form of genital-genital penetration, nor does it specify the gender or sexual orientation of the participants.)